We've already talked through how to build your team, get your processes and terminology straight, and how to structure cloud accounts. Now, we've arrived at application development.
This is 'all about the app' and there's a lot to unpack, so we're splitting this mega-topic into a few chunks over the next few episodes. Stay tuned!
Hello and welcome to Cloud Unplugged, the podcast where we take a lighthearted, honest look at the who, what, when, where, why, how, and OMGs of cloud computing. In today's episode, building applications for the cloud: you've got your team, you've got your cloud, now it's time for the apps. So, as always, I am Joel Parks.
I am Jon Shanks.
And, Jon, do you have anything that you'd like to share with the community?
Not really, Joel. No, I don't really feel...
Nothing major happened recently?
Maybe something? Maybe I turned 40. I don't know. I mean ... do we have to bring it up?
Happy birthday, Jon!
Thank you very much. It's not a year you choose to celebrate. Let's be honest. But it's there, anyway.
Been there, done that, man. It's all good. I'm glad that you had a happy birthday. And for those of you that are following along, we had last week off. I shared a bonus interview with Alicia Davis that I hope you all enjoyed. That was primarily because Jon was taking a very well-deserved break to celebrate his birthday with family and friends. So, you know, congratulations and best wishes to Jon. With that, let's just jump straight into the news, shall we?
Okay, this week we've got a couple of really interesting articles. The first one is 'Fastly goes boom'. Fastly, if you're not aware, is a CDN provider that actually fronts a surprising number of websites, as we found out this week. So on Tuesday, a number of websites across the internet started throwing 503s, because Fastly succumbed to what appears to be a configuration error of undefined scope. So what they're saying...
Not just a configuration error, but a valid configuration change.
Yeah, so this is where it gets a little interesting. What they're saying is that there was a configuration applied by one of their customers, which is what's super interesting about this, and that configuration change interacted with a patch that they applied to some of their systems on May 12, and it caused a cascade effect that took down, you know, took down Fastly. Now, I find that interesting on many, many levels. But the biggest one is that one customer's configuration could have cascaded that badly, to take the whole thing offline.
Right, it's incredible. It was pretty epic, wasn't it? Because we saw so much go down. I mean, even we witnessed it at the time. So yeah, pretty impressive.
It happened kind of during the overnight hours for a majority of the US, so Americans didn't notice it as much as I think Europe did. But Europe and the UK definitely noticed. And a lot of large web properties were affected, even ones that you might not think would be affected by this. Certain portions of amazon.com were affected. So somewhere within Amazon, they're actually using Fastly and have stepped outside of their own product suite, and I wouldn't want to be on the receiving end of that discussion inside of Amazon. But more than that, we'll just run the quick list: Reddit, Spotify, eBay, Twitch, Pinterest, parts of the UK Government, from what I understand, right, Jon?
I mean, you know, let's not ignore the most important thing, the emojis on Twitter. I mean, you listed those, but let's focus on the most important, you know, no emojis.
Yes, Twitter was temporarily emotionless.
Still full of, you know, sort of bad comments and evil thoughts, but temporarily faceless? So I thought that was really interesting. I think we haven't heard the end of this, because what they put out as their root cause analysis, I can't imagine being satisfactory to a majority of Fastly customers. So I don't think we've heard the end of this; we'll see where it goes. The second news article today is in a sort of related vein, but going down a different path. It does have to do with availability and some security, but this one's more specific, because it's talking about Windows containers, and specifically Windows containers running on Kubernetes. Kubernetes is a topic that we have not talked about much on this podcast yet. We've sort of talked around it, and we're gonna get to it a bit later on. But Windows containers, for those of you that don't know, are a relatively new thing.
A majority of containerized applications are actually Linux containers. Windows containers came along much later and are relatively new on the scene, and running Windows containers on Kubernetes, by extension, is also relatively new. What we have here is an article about a piece of malware dubbed Siloscape, which he pronounces 'silo escape', by the security researcher who found it, Daniel Prizmant from Unit 42. Effectively, it's malware that pries open known vulnerabilities in web services and databases so as to compromise Kubernetes nodes and backdoor into the clusters. So the attack vector would be: compromise the service that's running within the Windows container to get to the node, and ultimately escalate up to the cluster or the management plane itself. In Kubernetes speak, if you didn't follow all that, just stay tuned; we'll dive into this more later. But just know, that's bad. That's a pretty massive compromise, especially of a containerized system.
Yeah, I think, though, if Kubernetes is configured properly, it doesn't happen. If you do a good job of making sure you harden Kubernetes, you can't exploit it. And I think there's some thread impersonation technique it can do that's really undocumented somewhere in Windows or something or other, and you can kind of do all this impersonation.
Yeah, I think you're right. And I think it gets down to one aspect of Kubernetes, and again, this is a topic for another conversation. But Kubernetes has so many moving parts and can be so daunting to configure properly that the number of Kubernetes implementations worldwide with loose security, that have not been locked down sufficiently to defend against these types of threats, is just massive, right? It's probably the majority, frankly. So this is just a heads up: if you are one of the users out there running Windows containers on top of Kubernetes, you may want to go dig into this and find out if you are susceptible to this exploit. So that is the news.
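For listeners wondering what "hardening Kubernetes" looks like in practice, a lot of it is plain configuration. Here's a hedged sketch of a locked-down pod spec; the pod name and image are hypothetical, and note that a couple of these fields are Linux-oriented (Windows containers use `windowsOptions.runAsUserName` rather than `runAsNonRoot`):

```yaml
# A minimal hardening sketch, not a complete defence against Siloscape.
# Pod name and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  automountServiceAccountToken: false    # no Kubernetes API token inside the pod
  containers:
    - name: app
      image: registry.example.com/payments-api:1.4.2
      securityContext:
        allowPrivilegeEscalation: false  # block escalation toward the node
        runAsNonRoot: true               # Linux field; Windows uses windowsOptions.runAsUserName
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Pairing settings like these with tightly scoped RBAC for each service account is what stops a compromised container from talking its way up to the management plane.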
And now we're going to get directly into our subject for today, which is application development. So, as I teased at the beginning, we've talked a lot about building the team, getting the processes right, getting your terminology straight, getting everybody galvanised behind some common goals and priorities, and getting started with your cloud accounts: making sure that they're structured in a way that's sustainable, that helps you mitigate some of the risk of security vulnerabilities, but also making sure that you can report and act on your systems long term. And we've arrived at the point where it's now all about the app, right? We have everything in place, we've paved the road to the cloud environment, and now: what is it we're going to put in there? And how are we going to build the thing that goes into the cloud environment? So we thought it was probably a good idea to break this into chunks, so that we're not throwing some extremely long podcast at you. The first half of this conversation is this episode, and it's really going to be about the basics. We're gonna go back to the beginning and talk about it in an application-development-specific sense. What is it that you need to have in place, what processes, practices, skillsets, and technologies need to be there, in order to set yourself up for success in being able to build applications that are ready to run in the cloud?
And this is primarily about building, not migrating, right? That's what this is all about: how you iterate something from nothing, you know, into the cloud, isn't it?
Yeah, migration is a different consideration. If you're just taking a workload, not making any modifications to it, and moving it from your existing private data centre to relocate it in your new cloud tenant, that's a different conversation. And that subject has been talked to death, right? So if you want to know more about that, there are great resources on cloud migration. We're talking about developing applications for the cloud, which in our, or at least my, point of view is the thing that's discussed less, because a lot of this stuff tends to focus on the operational concerns, how it works, and applying controls, and less on, practically, how do you build something that's going to run well in the cloud? Something that's going to take advantage of what the cloud can give you, and lets you build things in a new, better, more sustainable way?
So from there, I think we need to define one big term. This term gets used a lot, and we have actually used it already on the podcast: agile. Especially in an application development context, people talk about agile software development. If you look at job postings, it's all over the place; people are hiring for folks who know how to work in an agile software development environment. So what does that actually mean? Really, agile speaks to the way that you define work and define what you're going to build. If you go back to the Agile Manifesto, which is where all this came from (it was a group of software developers and managers that met, I believe, back around 2001, sat down, and wrote it; it's published online), their definition of agile really breaks down into four statements that affect not only the way that teams work but also the thought process that goes behind how you design and develop applications. The first one is individuals and interactions over processes and tools. And this is really a response to, you know, it's not always about the tooling. It's about getting people to think and work together in a different way, which we've talked a ton about, right?
The next one is working software over comprehensive documentation. This speaks directly to old waterfall practices of, you know, making sure that nothing can go live until the documentation is absolutely complete. Well, sometimes it's an experiment. Sometimes you just need to get it out there and get it running, so that you can decide if it's even something you want to keep, right?
Yeah, exactly. Or even show it to somebody quickly to get feedback on it before you go any further and waste time on it. Like, 'that's exactly not what I wanted', right? There's no point in talking about something someone didn't want.
Yeah, exactly. So, working software over comprehensive documentation. That's been interpreted over the years by some people as saying documentation is a waste of time. Okay? Don't be silly and overly literal; that's not the spirit in which this was intended. It just means that getting the thing working, so that you can start to evaluate it and make decisions about what you've built, is more important than the documentation. All right. And the third one is customer collaboration over contract negotiation. Meaning, if you try to prescribe a very rigid process and rigid way of working with your customers, rather than engaging in a very open, collaborative discussion with them, you often miss the mark. And you miss the mark in ways that don't really lead to satisfaction on either end of the equation, whether it's the team that's building the software or the team that's ultimately consuming it. So what we're really speaking to here is collaborative ways of working, both inside the team and with your customers. Because your customers are ultimately the ones that are going to tell you, in a real sense, whether what you're doing is right or wrong.
And then fourth, responding to change over following a plan. This is a big one. Again, some overly literal people have interpreted this over the years as, you know, don't plan ahead, just act in the moment. Well, okay, sort of. What we're talking about here is making sure that you're responsive to changes in input, or things that happen around your development process that might influence the direction you take. Waterfall processes are notorious for having incredibly long delivery and development cycles that sort of preclude inputs coming in from the outside: changes in the environment, changes in the requirements, changes in scope or direction. Once you're on a path, you're locked, and if you're locked for a year, well, a lot can happen in a year. What you end up delivering at the end of that year may be wildly off target; the world could have moved on. For the sake of example, let's say you were building a tool, you started development in December of 2019, and you're locked into a delivery cycle for a year. Some stuff happened, right? If you don't have the ability to absorb that change and respond to those events, then you're gonna deliver something that may, at the end of the cycle, have no real value.
Yeah, and a plan. I guess this is going back to years ago, when people would spend forever designing architecture, getting the architecture approved, doing all this, right? And then you've still engineered nothing and months have passed. That's what 'following a plan' means: that over-analysis, analysis paralysis in some ways, before even trying. Let's do small incremental chunks, let's quickly produce something, let's get a prototype together, let's go and test something out to see if this is what people want. That's being agile. That said, architecting for change is also quite important, because if you've architected and then you can't change very quickly because of the architecture, there is a bit of a balancing act, right?
So yeah, on one level, hey, we're gonna have a private chat with all the architects out there. Guys and girls, there's only one thing that is a true constant in every design, or anything you're ever going to build, and that's that stuff is going to change. That's a hard and fast rule of life. So if the thing that you're building is so rigid that it can't accommodate changes in direction or changes in requirements, what you built is not sustainable.
It's not realistic in any sense. That, I think, is a given. And in terms of long development cycles: I was really fortunate to have done some work in Japan, probably a decade ago. And while it's really not the case anymore, historically, within some large Japanese corporations there was this concept of the 100-year plan. That's not a joke. They would have very, very detailed 100-year plans. I think most of them have realised the world has sort of evolved, and that's no longer a thing, but historically...
100-year plan. One hundred.
100 years of plans.
One followed by two zeros, yes.
That is insane. I've never heard of that before. Like I've heard of some plans. Maybe like a year, maybe like a five-year plan. But 100 years? I've never come across that.
Yeah. So again, historically, it was a thing. I don't think it really exists anymore. But it sort of speaks to the change in thinking that's occurred over the years: recognising that the world moves at a far different pace than it did 50 years ago. And if you're just taking practices that worked contextually 50 years ago and trying to reapply them now, especially when it comes to dealing with change, it's not going to work, because the world is different. The rate of change is vastly different from when I entered the workforce, you know. If I'm honest, things have sped up and just continue to increase in how fast they iterate and change.
I mean, look at our news. Every day there's news: new services, new cloud services, things happen. Like outages, or vulnerabilities. Every podcast episode, we could probably talk about new services that have just gone into GA, or new ones in alpha or beta. You know, there's always something in the Cloud Native Computing Foundation.
There were three of them in the news feed this last week that I just skipped over because I thought the Fastly thing was more interesting.
Yeah, so all that to say: agile is really a structure, a mindset, a way of thinking about how you go about defining, shaping, and ultimately building the thing that you're going to deliver to the cloud. And when you think back to what we talked about, how the team should be structured, how we should be gathering requirements, how we should be iteratively working through the considerations and the different design questions, what we're really talking about is following an agile methodology for doing that. Gathering requirements is a giant thing that a lot of teams take a little while to get used to, because they're used to having requirements just land in front of them from on high, seemingly from nowhere, right? And it's just: go do this thing. It's an adjustment for an organisation to say, no, this is going to be a very 360, collaborative process where we're going to capture, shape, refine, and then begin to execute against a set of requirements.
Yeah, and there are specific methodologies within that. Obviously, there's Scrum, there's Kanban. You know: are we iterating in one-week sprints? Do we want to see results in a week? What's the velocity of the team? How can we do retrospectives to make sure the team's operating in the right ways? Are there things we need to change as a team? So all those kinds of methodologies surround being agile as well, which is more detail than I guess we can go into on this kind of podcast. People can obviously go read about those.
Yeah, you can read about this a ton. There are lots of really great books out there that talk about agile ways of working and the different tools, like Jon was talking about, for tracking and collaborating in this flow. But really, the team should have already been getting some practice at this. And when we talk about building the applications now, the application development teams, which were probably in more of a passive mode with some of the things that we've talked about up to now, really start to take a role that's much more to the fore when we start talking about building the applications, shaping them, and moving them closer to a production environment.
Yeah, and I think what's also important is that there are two bits to this. One is the development teams iterating the applications based on the requirements. The other is trying to sense-check the value you're really trying to drive with the product overall. It's like, I've got a vision of the product; we need to assess whether people want this. I mean, am I really solving the problem in the right way? So the quicker you get the answer, the better, obviously. That's where moving fast helps you know whether the ambition of the product is in the right place to begin with. And then all the methodologies that surround it are about helping you iterate quickly to know that you're building the right thing in the first place. That's really the sense of it all. So it's kind of product-led, in some ways, and then you have the teams and how they're working underneath.
Yeah, absolutely. It may start a little slow, but by the time everyone gets accustomed to this flow, it should move relatively quickly. And you talked about, you know, what's the length of a sprint? Is it a week? Is it two weeks? I would say resist the urge to go any longer than two weeks, especially early on, because there's a lot that you just don't know yet, and your team is going to learn together. Give yourself frequent inflection points: bring things to a close, end the sprint, and have a retro, so that you can share the learning and use it to inform going forward.
You just mentioned gathering requirements as well, Joel. What's your view on business analysts gathering requirements versus the team being part of, you know, proper user research? BA-style approaches versus the team doing it themselves?
I think what we think of as a BA now, in a lot of contexts, or in a traditional context (that sort of middleman between the developers and the people who are driving the business or defining the direction), doesn't particularly work. Those BAs become bottlenecks, and they become sort of interpreters.
And a lot of the time they're not domain experts on the problem space you're trying to solve. So when they're gathering the requirements, they literally are just gathering requirements; they're not necessarily asking the right questions of the people they gather requirements from to help you shape the product.
I think, even more than that, oftentimes they're not setting appropriate expectations, both up and down the org chart. Because they don't have that domain expertise, they can't necessarily filter or handle expectations, and it leads to a lot of unnecessary frustration. On the developer side, an expectation has been set upstream that something relatively hard to solve can be solved very easily, so now they're in a position of having to do something that doesn't match reality, right? And if for some reason that slips, the person who was told 'oh, this is really, really easy' is set up to be disappointed, because something didn't happen in as simple or time-efficient a way as they were led to believe. So I think that is a huge disconnect, and it can lead to a lot of problems with perception and frustration within an organisation.
Yeah, there's no substitute for the discipline of user research, because it's got a lot of psychology behind it. They know how to word the questions properly, so you remove things like confirmation bias. You know, you can do it properly, really.
Yeah, well, it's like that old game, telephone, when you were a kid, where somebody whispers something into one person's ear, and then they go and whisper it into somebody else's ear, and it goes around the circle. And then you see, like, the first thing that got whispered in the person's ear was 'I would like a ham sandwich', and what comes out the other end is 'there's an alien in my backyard'. Right? It's just like...
I don't know what games go on in the US, Joel!
Like, the message gets mangled through too many people, you know. Well, if you're in Roswell, that could be a literal interpretation.
This does explain all the UFO sightings that go on over there doesn't it? It all started with telephone. That's where it all began!
But the message gets mangled the more people it goes through. And if you have everybody in the room, with different perspectives, all hearing the same message firsthand, the chances of that message getting mangled are much, much less. I've observed this pattern a ton, where somebody in the business tells a manager, the manager tells a BA, the BA tells the dev manager, the dev manager tells the developer, and by the time it goes through that little track, the thing that the person in the business originally asked for has been mangled so badly that the scope is wrong and there are constraints that probably shouldn't be there. You know what I mean? The business was probably asking for something simple, and by the end, it's a three-month project.
So that's another thing to be aware of: getting people all in the same room just cuts out a lot of this noise and a lot of the confusion. It may be weird for people that aren't used to it, the first couple of times you have executives sitting in the same room as developers, directly talking about requirements, and everybody's going to need to learn how to interact in that environment. But it's worth it, if you can do it.
Yeah, and a multidisciplinary team makes a lot of sense too, you know: lots of very different roles with different responsibilities in one team, and then a sense of autonomy in that team. Rather than pillars of different people, where you're kind of like, oh, now I need to go and speak to the so-and-so department about this thing that I've just done. That's obviously time-consuming; you're not going to move fast with that. So you get a big team in, make it multidisciplinary, make it autonomous on solving that problem in the business, and let them just run with it, test things out, and prove value with the customer. Let the customer tell them, through the things they're trying to do quickly, whether they're on the right path or not.
Yeah. At a certain point, like, get everybody in the room, talk it over. But if you're spending more than maybe a quarter of your overall day talking about doing versus actually doing, something's off-kilter, right? There should be a strong bias to action. There's a reason why that specific thing is one of the Amazon leadership principles, and it's a pretty good one: have a bias towards action, towards doing. That's not a replacement for talking or gathering requirements, but just, in proportion, be aware of how your time is being spent, and also move fast. If you spend your time getting blocked or waiting on other things, or you find that there are dependencies, it's a perfect time to enlist your executive sponsor and say, hey, this needs to get cleared, right? We're not able to move at the rate that we need to, so let's go clear these dependencies. We've found a pothole in the paved road; let's go smooth that over so we don't trip over that problem anymore.
Yeah, that's good. I know you were gonna talk about lean as well, weren't you? Like agile, right? There's loads out there, to be fair. We could spend a million years talking about it, couldn't we? But lean, which gets bandied around all the time, doesn't it? Oh, 'we're lean', you know, 'we're a lean startup'. The lean framework, you know.
So there's not a great understanding, sort of industry-wide, I don't think, of the differences between agile and lean, and also the fact that they're very complementary, in the sense that they solve different problems. Think of it this way, as an analogy: agile is a way to run an R&D team. It's a way to handle experiments, to reduce uncertainty, and to rapidly iterate on ideas, to the end of having something that you can actually run, test, deliver into an environment, and begin to evaluate. Lean addresses a different thing. Lean addresses flow within a process: how do you remove blocking processes? How do you make things flow as an end-to-end process, with an ultimate eye towards delivery?
Operational efficiency isn't it? It's really about operating as efficiently as possible.
Right, and there are lots of implications to it. But what I just said will make sense when you know that lean really came from the manufacturing world, and specifically Toyota: Toyota's observations on car assembly lines and how to optimise the flow. Basically, parts go in one side, finished cars go out the other side. Agile is sort of a way for you to handle the overall design and how you build the parts; lean is how you get them out the door. So in a certain sense, if you want to think of it as CI/CD, agile really has a lot to do with CI, and lean has everything to do with CD: how you ship product and move it through. It's about frictionless flow and making things really repeatable. When you have a delivery mechanism or delivery process that starts to look predictable, once everybody settles into a rhythm, you know you're agile in your thinking and your methodology for building the thing that you're going to deliver, and lean in the way that you move it through the stages to get it into production. From development to staging, test, whatever your environments are in the middle, to production. That promotion process: that's where lean comes in.
Yeah, and I guess part of it is the feedback loop. If you're a developer engineering something, how long does it take you to know that the thing you've just built, or the code you've just written, isn't gonna work and you've got to make changes? If it takes a really long time to find out, if your tests are running for six hours and blah, blah, that's not that lean; you want to know pretty quickly. And that's where being lean matters. That's the process side, versus the agile side, which is how you're writing the code: the way the team's working, the requirement gathering, the speed at which you've broken down all the tasks into minute tasks that anyone can come and pick up very quickly and easily. They know what the epic is, which is like, hey, we're trying to solve this problem, and here are all the little smaller tasks that contribute to solving that problem. And then lean is, well, how fast do I know I've solved that problem? How quickly do I get feedback?
Yeah, exactly. So we're going to talk about testing in just a second, and I want to revisit an aspect of that when we get there, because you're exactly right. Knowing how and when in the flow to place tests, meaning some tests are more appropriate to run early and some tests are much more appropriate to run way later, and knowing where to place them in the flow, is some wisdom that I think we can pass along. But really, what we're talking about here is the baseline methodologies that you should at least be aware of and be conversant in, and try to use as, let's say, a baseline to gauge your own activities against. So now let's talk about pure practical nuts and bolts, the things that you need to do at this stage.
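Joel's point about test placement can be sketched in code. This is a hedged illustration only: `add_vat` and the suite names are hypothetical, standing in for whatever your fast unit tests and slow integration tests actually cover.

```python
# Hedged sketch: splitting fast feedback from slow feedback.
# The function and test suites are hypothetical examples.
import time
import unittest

def add_vat(net: float, rate: float = 0.20) -> float:
    """Tiny pure function: the ideal target for fast, early tests."""
    return round(net * (1 + rate), 2)

class FastUnitTests(unittest.TestCase):
    # Milliseconds to run: place these early in the flow, on every commit.
    def test_add_vat(self):
        self.assertEqual(add_vat(100.0), 120.0)

class SlowIntegrationTests(unittest.TestCase):
    # Stand-in for an expensive end-to-end check: place these later in the flow.
    def test_checkout_roundtrip(self):
        time.sleep(0.1)  # simulating a call out to external dependencies
        self.assertEqual(add_vat(10.0, rate=0.0), 10.0)

if __name__ == "__main__":
    # Run only the fast suite on every change; run everything before promotion.
    unittest.main(defaultTest="FastUnitTests", exit=False)
```

The six-hour test run Jon mentions is usually what happens when the slow suite gets run at the point where the fast suite belongs.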
The first one, if you're looking at the process of what you're building and the process that you define, is that everything that sits around the build process should be automated. No more manual builds, no more builds on developer desktops: utilise an automation system to make those builds highly automated and highly consistent. That should be ground rule number one. Ground rule number two is that everything, and I do mean everything, should be in revision control. We've already talked about infrastructure as code, and whether it's CloudFormation or ARM templates or however you define environments in your cloud of choice, everything that's necessary to build the application, deploy it, and define the environment it's going to run in should be stored in revision control. If you don't have those code assets under revision control, in a form where you can actually track how they change over time, you're going to run into trouble. Those are really the two key things. And if you want to call that DevOps, or CI/CD, or agile, or lean, whatever: call it Al, give it any name you want. But that's, at a bare minimum, what you've got to have before you proceed. If that means you need to take, as we've talked about before, a good, healthy pause to learn how that works and increase the knowledge level of the team, do it now.
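As a hedged sketch of what "highly automated, highly consistent builds" can look like, here's a minimal pipeline definition in GitHub Actions syntax. The project layout and commands are hypothetical placeholders, and your CI system's syntax will differ, but the shape is the same everywhere:

```yaml
# Hypothetical build pipeline: every push triggers the same repeatable steps.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4       # app code and infra templates both live in Git
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest                     # fast checks run on every single build
      - run: python -m build            # the artifact comes from CI, not a laptop
```

The point isn't the specific tool; it's that the build is defined in a file, versioned alongside the code, and never depends on one person's machine.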
Yeah, definitely. And there are many reasons why, to be fair, but the most obvious and tangible is: you've got it all on your machine, and then the machine breaks, right? Where's all the code gone? Or you go on holiday for ages; who else can start contributing to the code in your absence? How do they even check that code out?
Or someone gets a job offer that they can't turn down, and they walk out the door to go work somewhere else, and they take all that knowledge with them.
Okay, Joel, have you got something to confess here?
I know nothing.
But yeah, absolutely. So that makes a lot of sense, really.
Yeah. So what does that mean from a tooling perspective? Well, it means at a bare minimum you're going to need a revision control system. We'll talk a little bit more about that in a minute. I would say that if you already have one that everybody's comfortable with, don't go trading horses yet, right? Just stick with what you're comfortable with for now, early on. But we're talking about things like Subversion. If you're a Microsoft shop, it might be Azure DevOps. It might be GitHub, or Git generally. I don't know what the demographics are these days, but it's...
I think Git's pretty much taken over by now. Perforce and all the others, I don't think so.
Visual SourceSafe. But yeah, I mean, there's been a ton of them over the years. But if you've got one that works and everybody's comfortable with, stick with it for now. You can always make an informed evaluation about switching horses later. But yeah, the majority is Git, for some specific reasons that we're going to talk about in a minute. You just need a revision control system; any will do at this point.
Although Git is better, let's just point that out. Like, probably just use Git.
Git has some advantages, for sure. You're also going to need a CI system, and there are about 9,000 of these; we touched on that earlier when we talked about the CNCF landscape. If you refer back to that landscape, you will see the large menu of options available to you.
There is actually a great GitHub repo called awesome-ci. You know how on GitHub they have those "awesome" lists for, well, whatever? There's an awesome-ci which basically gives you a massive list of all the different CI tools, with a brief overview of each one. That's always worth a look if it's your first time seeing the space.
Yeah, and there's a bazillion of them out there. And for the purposes of what we're talking about, really getting started, I would say again: if you have something that people have a reasonable familiarity with already, go with that, right? Run with it until you find the need to switch, until you've exhausted its capabilities or found something that it won't do, and then at least make the informed choice to go somewhere else. But just pick one. They're all kind of a commodity these days; it's just an orchestration system. So whatever you're comfortable with, use that.
And then the last one is the one that probably needs a little bit of explanation, and it's one that I see omitted in a lot of cases, and that's an artefact repository. What we're talking about here is somewhere you can put your built binaries, right? It's where you're going to store the output of the build process. Now, that output artefact could be binaries, like I said before, it could also be container images; it can take a lot of different forms. But artefact repositories perform the same function for built artefacts as a revision control system does for code. So there's a corollary: you have revision control on your code, and you have revision control on your artefacts, the things that you're actually going to be promoting into your environment to run. For the love of God, please don't use a file share. It's probably the biggest anti-pattern that I see, and it's just pervasive. Believe me, that is a recipe for disaster, right? Just use the right tool, and it's not a file share.
And I think this obviously becomes imperative if you've got shared things, like shared libraries or shared jars, that you don't want public to the internet. If you want to keep them internal, but still let other teams reuse them, you're going to need some artefact storage to pull them from. Right? So that's where it's at.
Yeah, 100%. And if you're coming from a world where everything's monolithic and everything gets built in one big bang, you may have a little bit of a hard time understanding why I'm harping on this so hard, right? But if you don't believe me now, go ahead, try it without one; you will see what I'm talking about relatively quickly. Once you start to build non-monolithic applications, and I don't want to be prescriptive and say microservices, but let's say distributed-architecture applications, you will find this becomes an insurmountable problem without a tool like an artefact repository.
Yeah, I mean, if it's containers, then it's a must at the end of the day. You kind of have to have it, because there's no choice.
And then the other thing that you're going to need, and this is where we were going a minute ago, is testing tools, right? Now, your organisation is already going to have some collection of testing tools, and it's going to be appropriate to the language the applications are written in, as well as the tests that you're attempting to perform. And again, there are literally thousands of these things, and they do all sorts of different things, so I'm not going to be super prescriptive on this. I'm just going to hit on maybe four main categories of tools. Look at what you've got to work with and make sure that you've got coverage across these four broad categories.
As long as you do, you're good to go; just make sure that you can integrate them with your CI system to automatically run tests, because again, that's part of the process. So the first one is UI testing. If you're writing applications that do have a GUI component, you will need something like Selenium, or equivalent, to go and do some UI testing, whether that's just basic smoke or sanity-check tests, all the way up to the full UI regressions you can run with those tools. So just make sure that you've got something that can handle that, if it applies to you.
The next one is logic unit testing. What we're talking about here is, if you're writing discrete services, let's say they're Java services, and again that's just an example, you're going to need something like JUnit: something that lets you define a test to make sure that the logic you've written into that service actually does what it's supposed to. Things like JUnit, and there are others that are language-specific, make it really easy to define unit functional tests like that.
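To make that concrete, here's a minimal sketch of a logic unit test using Python's built-in unittest module (the JUnit equivalent looks much the same); the `apply_discount` function is a hypothetical stand-in for a piece of service logic:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical service logic: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test method asserts one piece of the service's expected behaviour.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

You'd run this with `python -m unittest`, and the CI system would run the exact same command on every build, which is what makes the tests automated rather than something a developer remembers to do.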
The third category is static analysis. Now, static analysis is criminally underused, in my view. It is really, really helpful, especially early on, because you can run it before you even execute a build. You can do static analysis against the code of your service, or whatever it is that you're building, and use it to look for known bad patterns, like hard-coded credentials or secrets in the code. You can even look for style or structural things you're trying to enforce. Static analysis, used properly, can be very powerful in catching things early. And, to the feedback loop point, it provides you with feedback like: hey, I'm not even going to try to build this, because you didn't do a, b, and c. It doesn't have comments, it's not following the standard form, you're not including the standard library that we require all services to have. Whatever it is: I'm not even going to try to build this, because we know you haven't met the requirements.
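For a flavour of what a pre-build static check does, here's a toy sketch in Python. The rule names and regex patterns are invented for illustration; real tools like SonarQube ship far richer rule sets and parse the code properly rather than scanning lines:

```python
import re

# Hypothetical rules a pre-build check might enforce (illustrative only).
RULES = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "TODO left in code": re.compile(r"\bTODO\b"),
}

def scan_source(source: str) -> list:
    """Return (line_number, rule_name) for every rule violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'db_password = "hunter2"\nresult = compute()\n# TODO tidy this up\n'
for lineno, rule in scan_source(sample):
    print(f"line {lineno}: {rule}")
```

Because nothing has to compile or deploy for this to run, it can sit at the very front of the pipeline and reject a commit in seconds.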
And some of those are kind of linters; sometimes they're part of the language. There's also code coverage, which is obviously: have all the functions got a test, do you know what I mean? And what percentage of the code base do I expect to have coverage, so that all the code is healthy and everything's been tested, because there are unit tests to test it, etc. So this is where you can start being a bit more opinionated and start gating things, which is kind of what Joel is saying. If we want a healthy team, I'd say: actually, our rule as a team is that we expect 90% code coverage, right? That's the rule of thumb. Anything less than that is just not quality; that's our quality measure. So there are all kinds of things you can do there.
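The gate itself boils down to a one-line comparison. A sketch, assuming the covered/total line counts come from whatever coverage tool your language uses (coverage.py, JaCoCo, and so on); the 90% default mirrors the rule of thumb above:

```python
def coverage_gate(covered_lines: int, total_lines: int, threshold: float = 90.0) -> bool:
    """Return True if measured line coverage meets the team's quality bar.
    A real pipeline would read these numbers from the coverage report."""
    if total_lines == 0:
        return True  # nothing to cover, nothing to fail
    percent = 100.0 * covered_lines / total_lines
    return percent >= threshold

print(coverage_gate(920, 1000))  # 92% coverage, passes the 90% bar
print(coverage_gate(850, 1000))  # 85% coverage, fails it
```

The CI job would simply fail the build when this returns False, which is what turns the team's opinion into an enforced gate rather than a suggestion.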
Yeah. And there's a bunch of static analysis tools. The one that I bump into the most is SonarQube; that's the most popular, isn't it? It seems to be kind of everywhere.
And then the last category is performance testing. That takes a lot of different flavours, from load tests to extended performance tests, from open-source tools like JMeter all the way through to LoadRunner and really esoteric performance measurement tools. When we talked earlier about knowing where to put which tests, this is what I was getting at: static analysis really should be the first stop, right? Static analysis should be the pass before the build. The build in and of itself is a sort of test, because if you've got something sideways in the code, in a lot of cases you just hope that the build fails, right? That's in and of itself a test. But let's say it doesn't fail, it succeeds. Then the next thing is UI functional testing, smoke testing. Let's get it into a very low-level environment, fire it up, and see: does it respond? Does the API respond?
There's also black-box testing, which is relying on nothing, right? You don't rely on networking, because otherwise, like we said before, you're not lean; if you need all those things before you can even test something, that's not lean, it's totally the opposite. Black-box testing means you can run tests in total isolation against that thing. Containers are obviously great for that, because you can start things up and quickly run tests very fast, hence a bit more lean. And the same with performance testing: know whether you're performance testing the deployment, the network, or the app itself.
Well, yeah. And the point that I was making is that static analysis is really lightweight and can be applied prior to the build, right? Then you've got unit functional tests. The next stage is integration tests; in terms of pure runtime, those tend to be more expensive than unit functional tests. And then you work your way up to regression tests. You shouldn't even bother to run a regression test unless it's passed integration, and unless your certainty level is pretty good that what you're testing is something that could make it to production.
Yeah, there are also exploratory tests, let's not forget those. Things you would never even think about doing, which is very manual-esque and obviously comes even later: things that you didn't even think to test. It's situational.
Yeah, it's the old joke: a QA engineer walks into a bar, orders negative one beers, orders the square root of a beer. Weird edge cases. Literally, let's intentionally try to break this and see what happens.
Yeah. That's the famous test pyramid. I think a lot of this maps onto the test pyramid, where end-to-end tests sit at the very top, so they should be very few; you don't need as many of them, and then it works its way down. I can't remember the whole order.
Yeah. And there are lots of different opinions on this. But the point I'm making is: early on, save your expensive tests for late in the process. The stuff that you run early on, try to craft in such a way that if it's going to fail, it fails incredibly quickly. So if something's wrong, you know very, very fast, and that kicks off the feedback loop: it went in, and roughly 60 seconds later you get back, nope, doesn't work, and here's why, here's the result of the test that failed. That's the stuff that lets you iterate really, really fast.
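The cheap-tests-first, fail-fast ordering can be sketched as a tiny orchestrator. The stage names, cost estimates, and pass/fail lambdas here are all hypothetical placeholders for real pipeline steps:

```python
import time

def run_pipeline(stages):
    """Run test stages cheapest-first and stop at the first failure,
    so feedback arrives as fast as possible. Each stage is a tuple of
    (name, estimated_cost_seconds, callable_returning_bool)."""
    for name, cost, check in sorted(stages, key=lambda s: s[1]):
        start = time.monotonic()
        ok = check()
        elapsed = time.monotonic() - start
        print(f"{name}: {'pass' if ok else 'FAIL'} ({elapsed:.2f}s)")
        if not ok:
            return name  # report which stage broke the build
    return None  # everything passed

# Hypothetical stages, each tagged with a rough cost in seconds.
stages = [
    ("regression", 600, lambda: True),
    ("static-analysis", 5, lambda: True),
    ("unit", 30, lambda: False),  # simulate a failing unit test
    ("integration", 120, lambda: True),
]
print(run_pipeline(stages))
```

With the simulated unit failure above, the pipeline stops after static analysis and unit tests; the expensive integration and regression stages never run, which is exactly the feedback-loop behaviour being described.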
And I should say, while we're on the subject of testing, you might think: is he telling me that I should be following test-driven development? If you don't know what test-driven development is, it says that before I write a line of application code, I write the test that defines what the thing I'm going to build must be able to do in order for it to be complete. So I write the test first, and then I write the application code in such a way that it satisfies the test. Once the test is satisfied, I know that my development task for that service is complete.
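A minimal illustration of that loop in Python, using a hypothetical `slugify` requirement; the test is written first and encodes the requirement, and the implementation exists only to satisfy it:

```python
import unittest

# Step 1: the test comes first and encodes the requirement:
# "turn a title into a lowercase, hyphen-separated URL slug".
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Cloud Unplugged Episode 12"),
                         "cloud-unplugged-episode-12")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

# Step 2: write just enough application code to make the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
```

When `python -m unittest` goes green, the task is done by definition, because the test is the definition of done.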
Right. I think it's important to note, though, because I've seen the reverse: if you don't follow the shape of the testing pyramid, and instead go all in on loads of heavy integration tests, what people end up doing is spending more time fixing the tests, because the code was fine. The code was fine all along; it was the tests that were broken, and they spend loads of time fixing up all the tests, because they're in this unwieldy world of so many integration tests. They're relying heavily on those instead of, as you're saying, putting tests incrementally in the right places, with a light-touch end-to-end layer over the top.
Yeah, if the test is a direct embodiment of the requirement that was given, then you know that what you developed is correct when the test passes, right? Also, the collection of all of those individual tests informs what a full regression test would look like. So there are a lot of advantages to doing that. To put some clarity behind it, I can hear you thinking: is he telling me that I should follow test-driven development? The answer is, yeah, you really should. If you don't, you're playing with fire, in my opinion, especially with distributed systems, if that's the architecture path you're going down. If the thing you're going to write to run in the cloud is a microservice, and that's not a bad thing at all, then this is absolutely something you should do. Now, there are other frameworks you can follow that achieve the same result; behaviour-driven development is a flavour of the same idea, implemented in a slightly different way. But even if you just follow the basic pattern of TDD, you're going to end up with more consistency and a better development experience overall. It just cuts out a lot of the guesswork.
Yeah. Then there's BDD, behaviour-driven development, which is sometimes better, especially when you get into unit tests: you can see what piece of functionality actually failed, as opposed to some arbitrary piece of a function. It gives you a more tangible, business-value-oriented picture of what didn't work on that service, which is also better for other people in the team; they can work out what's failing without necessarily having low-level domain knowledge of the code itself. So BDD can be much better in that sense, and more transparent, because the language is better understood and easier to interpret.
Yeah, I mean, there are tonnes of frameworks that have been put together over the years with the express purpose of making these patterns simpler to adopt, and there are a bunch of them out there. I would just say familiarise yourself with what's available and pick one that speaks to you, one that you think is going to be easily understood by your team. But moving from tests and testing tools on to revision control: now I can feel you reaching for your radio, or your iPhone, or whatever, to turn this off, because we're going to talk about revision control. But seriously, don't. Stick with me.
This is gonna be so exciting, I can't wait, Joel. I'm on the edge of my seat right now.
It is such a riveting topic. But seriously, if I rewind my career a little bit, back when I was on the consulting side, working with companies that were trying to do what we're describing right now, getting into their architecture and understanding how they actually build things: the number one thing that I saw people get wrong consistently, over and over and over again, was a lack of understanding of what proper revision control looks like. It's just a pervasive problem. And for whatever reason, you would think this is something that everybody should know, or just knows by default, but it clearly isn't. So we're going to have a quick tour of what sustainable revision control, or version control, practices really look like. If your team is distributed, and most teams are these days, almost no large company has everybody sitting in the same room, or even in the same building, and in a lot of cases not even in the same country, right? Then this is one of the advantages of using Git, because Git is meant to operate in a decentralised way. Subversion, the on-prem version of Azure DevOps, TFS before that, CVS, and a bunch of these other things are very centralised version control systems, which have a different notion of how to work, but Git is going to make working in a distributed environment much, much easier. And there is a well-defined workflow and process for utilising Git in this type of environment, and it's called GitFlow. We're going to tweet out a link to some documentation; it's actually Atlassian's version of the GitFlow documentation.
I do have a little bit of an opinion on this, though, by the way: GitFlow favours merge over rebase. So I will challenge a little bit on some of this, because it's a whole separate topic, which maybe we'll have to discuss another time. But there is rebasing, and sometimes it's a lot better when you want to get the history right, rather than keeping all the incremental changes and then doing a big fat merge over the top; that's not always the best approach. But anyway.
Yeah, I think this is really good, because if you've never seen this before, or haven't read through the thought process that goes into GitFlow: some of the stuff Jon mentioned is absolutely fair criticism of GitFlow, but I would say it's something that probably isn't going to be meaningful to you until you've been doing it a while. Because really, what GitFlow does is provide you with a structure for containing change sets in a very controlled way, and for dealing with the impact of a given change on the rest of the code base. That's really the function of GitFlow: it's a way to isolate change sets, to limit the impact of what each change is, and then, in a deliberate way, bring those changes together into a merged, unified version of the code that encompasses not only the code that existed before, but also the specific changes that you want to integrate at that point in time.
Yeah, you're essentially working on a feature branch, because you're working on features; that's what we've discussed before. You've got some epic, and there will be bits to that epic, some of them smaller, more specific, very niche and tightly described feature sets. So you create a feature branch, and it should be short-lived; you obviously don't want it lasting for ages, because other people could also be working on feature branches, and as they finish theirs before you, you're already way behind, right? Your feature branch has taken absolutely forever, and now there have been loads of changes in parallel to yours, which makes it a bit of a headache, because those changes are so big that you've got to get them into your changes properly before you can effectively merge into what will become a release, into what they call main, the main branch, which is what you need to merge into in the end. So these are the principles: short-lived feature branches, tied to the naming of what you're working on, and making sure that you finish fast, which fits into how the team needs to work, breaking things down into small chunks so you can operate on a feature quickly. Obviously, for the whole epic that's not going to happen, so make sure each feature is small enough to work on quickly.
Yeah, and we're not going to read the GitFlow documentation to you; that's not a particularly good use of this time, but we will tweet out the link. What you should know is that it defines branches that have specific purposes, and it also handles scenario-driven things, specifically, as Jon was alluding to, hotfixes: how do hotfixes fit into this working model? When do you integrate things onto an integration branch, which they call develop? One thing I will say, when you're reading the documentation, is that they talk about tagging an awful lot. Tags are effectively just a label that gets applied, and you can think of a tag as a snapshot: a point-in-time snapshot of where the code was at the specific moment the tag was applied, with a label that you can reference going forward. Anytime you're going to merge branches together, anytime you're going to branch, it's a good idea to tag then branch, or tag then merge.
I think it's also good if we send out a semver link as well, because there's an industry standard for how to version. Usually the tag becomes the version, so semver normally feeds into the tagging mechanism. It's an industry standard you can read through, and it will make more sense as well.
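A sketch of the idea behind semantic versioning, parsing and bumping MAJOR.MINOR.PATCH tags; the pre-release and build-metadata fields from the full SemVer spec are omitted here for brevity:

```python
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_semver(tag: str):
    """Parse a 'MAJOR.MINOR.PATCH' tag into a comparable tuple,
    tolerating a leading 'v' as in tags like v1.2.3."""
    match = SEMVER.match(tag.lstrip("v"))
    if not match:
        raise ValueError(f"not a semantic version: {tag}")
    return tuple(int(part) for part in match.groups())

def next_version(tag: str, change: str) -> str:
    """Bump per SemVer rules: breaking change bumps major,
    backwards-compatible feature bumps minor, anything else bumps patch."""
    major, minor, patch = parse_semver(tag)
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("v1.4.2", "feature"))  # 1.5.0
```

Because the parsed versions are plain tuples, they sort correctly (1.10.0 comes after 1.9.0), which is exactly why semver-shaped tags make good point-in-time labels for releases.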
Yeah, really, the thing here is that there are going to be times, especially as you're getting used to this, when you get to a point and you're not sure what the right move is. You're going to look at the documentation and have to decipher: all right, what's the move I can make that isn't going to back me into a corner? The one piece of advice I can give you is: if you're at all uncertain, tag first, before you make a change, because it gives you a bailout point. If something goes wrong, or you need to undo or rewind back to a specific point in time, the tag is going to save your bacon; it's going to let you do that. Tags have a way of covering a multitude of sins. So when in doubt, tag; it can help.
I mean, yeah, because everything is version controlled, all changes can be reverted. So even if you're not tagging, don't worry, you can get it back; that's kind of the purpose of it. But also, don't overwrite the tag, because you can actually force an overwrite of a tag, which means if it was supposed to be a snapshot in time, and then time passes, and you overwrite that snapshot with a new one, you're basically losing the whole principle of it.
Yeah, exactly, it should be write once, read many. So you may be saying: all right, but we have a small team, and we all just develop on main, and it's all fine for us. Okay, I get that; this all seems like overhead to you. But trust me when I say it's a practice you really should get into, because hopefully you want your team to grow. You want to be successful enough that your application and your teams grow substantially. So get into the habit now. It's sort of like Casablanca: you may not regret it now, but you will soon, and for the rest of your life, right? You can do awful, awful things to your code base that are very difficult to untangle later on if you don't adopt some of these good hygiene practices, as I like to think of them.
Yeah, sounds good.
So really, that brings us to the close of part one. We've got the methods in place, and we've talked through the building blocks of what you need. Now it's really: let's build something, let's get it deployed, and begin to evaluate it. So on the next episode we'll pick up this conversation and talk about cloud services: how do you integrate cloud services into your application so that it can take advantage of the environment and what it has to offer? We'll talk about mocking tools and how to build mocks and stubs, moving things through into production, a little bit more discussion about flow and how to move things frictionlessly into your production environment and between environments, and also how to keep your costs under control and not overextend yourself in the cloud.
Yeah, especially through the development stage, right? Where can you be cost-effective in cloud, especially while you're iterating and moving things through? That makes sense.
So, as always, please rate and review us on your favourite podcast app. You can tweet us at @Cloud_Unplugged on Twitter, or email us at firstname.lastname@example.org Also, check us out on YouTube at Cloud Unplugged for episodes, transcripts and some bonus content. As always, thank you for listening and we will talk to you next time.
Speak to you later.
Transcribed by https://otter.ai