This article is part of a series that highlights technical aspects of digital transformation.
Automated testing
Learning is about getting feedback and improving our ability to achieve a goal. Automated testing brings important parts of that feedback loop closer to the development team. When we make changes to our code, regression tests catch bugs and unexpected deviations from the intended functionality. Although unit testing is critical, automated testing should place a strong emphasis on scenario or functional testing. These tests force us to think about the API of the service and give us insight into how our code is used, since our tests are ultimately consumers of our API. A strong focus on feature testing also confirms the behavior our customers expect, not just the inner workings of the code. Finally, as implementation details change due to refactoring, paying off technical debt, maintenance, and so on, our functional tests should rarely need to change.
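As a sketch of what such a functional test can look like, the following uses a hypothetical in-process order service (all names are illustrative, not a real API); note that the tests exercise only the public API, never the internals:

```python
class OrderService:
    """Stand-in for the service under test, exercised only via its public API."""

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, customer, items):
        if not items:
            raise ValueError("an order needs at least one item")
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"customer": customer,
                                  "items": list(items),
                                  "status": "created"}
        return order_id

    def get_order(self, order_id):
        return self._orders[order_id]


def test_order_lifecycle():
    # The test is a consumer of the API: it checks observable behavior,
    # not implementation details, so refactoring rarely breaks it.
    svc = OrderService()
    order_id = svc.create_order("alice", ["book"])
    assert svc.get_order(order_id)["status"] == "created"


def test_rejects_empty_order():
    # Behavior the customer cares about: invalid orders are refused.
    svc = OrderService()
    try:
        svc.create_order("bob", [])
        assert False, "expected a validation error"
    except ValueError:
        pass
```

Because only `create_order` and `get_order` are exercised, the internal storage could be swapped for a database without touching either test.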
Testing is also not limited to pre-production with pre-defined tests. We should strive to find safe, low-risk ways to push code changes to production and explore the nooks and crannies of our systems that pre-defined tests would miss. Testing in production lets us see whether our code changes actually perform as expected, and exploratory testing lets us inject chaos into the system and observe its behavior in a controlled manner. It is very difficult to reproduce an exact copy of a production environment (with its configuration, infrastructure, networking, and so on) in staging or QA; tests run in those lower environments can give a false sense of confidence about changes to production. The only way to know that your changes are ready for production is to run them in production and observe their behavior. Istio offers ways to do this while mitigating the negative consequences; a separate article will cover Istio.
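The chaos-injection idea can be modeled in a few lines. The sketch below wraps a request handler so that a configurable fraction of calls fail on purpose; in practice, tools like Istio inject faults at the proxy layer, outside the application, so this in-process wrapper is only an illustration of the effect:

```python
import random


def with_fault_injection(handler, failure_rate=0.1, rng=None):
    """Wrap a request handler so a fraction of calls fail on purpose.

    A toy sketch of controlled chaos injection: callers can observe how
    the rest of the system copes with sporadic 503 responses.
    """
    rng = rng or random.Random()

    def wrapped(request):
        if rng.random() < failure_rate:
            return {"status": 503, "body": "injected fault"}
        return handler(request)

    return wrapped


# Example: a handler that normally succeeds, with half its calls failing.
handler = with_fault_injection(lambda req: {"status": 200, "body": "ok"},
                               failure_rate=0.5,
                               rng=random.Random(42))
statuses = [handler({})["status"] for _ in range(10)]
```

Passing a seeded `rng` keeps an experiment reproducible, which matters when you want to correlate injected faults with observed behavior.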
Containers
Linux containers have helped bring about a massive change in the way we build, package, and run our applications. Container technology like Docker makes it easy for developers to take their applications, configurations, and dependencies and package them into an "image" that can then be run in "containers." Containers run your application as ordinary processes on a host, isolated by built-in Linux primitives.
In the past we may have done this with VM images and VMs, but containers allow us to package only the necessary parts and share the underlying Linux kernel. They are much lighter and also bring benefits in terms of hardware density. Once we are in containers, we can safely move applications between environments and avoid diverging configurations and environments.
Because containers provide a simple, unified API for starting, stopping, inspecting, and health-checking applications, we can develop generic tools to run those applications on any infrastructure that runs Linux. Your service operators and deployment tools no longer need to be hand-crafted for specific languages and their idiosyncrasies. For example, Kubernetes is a leading container platform that, through a high-level application API, can deploy and manage containers on a cluster of machines with sophisticated orchestration, placement, and security measures. Kubernetes has constructs like the "Deployment" that can ensure a minimum number of instances for a given service and can also actively perform health checks.
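As an illustration of such a construct, the following sketch builds a minimal Deployment manifest as a plain Python dict. The field names follow the Kubernetes API; the service name, image, and port are hypothetical:

```python
def deployment_manifest(name, image, replicas=3, probe_path="/healthz"):
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    `replicas` asks the platform to keep that many instances running,
    and the liveness probe lets it actively health-check each one.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "livenessProbe": {
                            # Kubernetes restarts the container if this
                            # HTTP check starts failing.
                            "httpGet": {"path": probe_path, "port": 8080},
                        },
                    }],
                },
            },
        },
    }


# Hypothetical service and registry, purely for illustration.
manifest = deployment_manifest("orders", "registry.example.com/orders:1.0")
```

Serialized to YAML, this is exactly the kind of declarative specification you would hand to `kubectl apply`; the platform then continuously reconciles reality against it.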
Continuous integration and continuous delivery
In the previous section, we looked at getting feedback during the development cycle with automated testing. To be successful with a cloud-native or microservices architecture, we need to automate the mechanisms for building and deploying code changes to production as much as possible. Continuous integration (CI) is a practice that gives developers feedback as quickly as possible by integrating code changes frequently, ideally at least once a day. This means that the code changes of all team members are under version control, built, and tested to ensure that the application is still stable and can be released when needed. Continuous delivery (CD) builds on continuous integration by giving teams an automated pipeline to deploy their application to a new environment; it orchestrates the steps required to take an application from code commit to production. We should reduce the complexity involved in automating deployments, which is why containers and container platforms are an excellent foundation for continuous delivery pipelines: containers package our applications together with their dependencies, reducing configuration drift and unexpected environment differences, so we can deploy with more confidence.
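The core idea of such a pipeline, running stages in order and stopping at the first failure, can be sketched in a few lines. The stage names are illustrative; real CI/CD systems add logging, artifacts, and approval gates on top of this skeleton:

```python
def run_pipeline(stages):
    """Run pipeline stages in order; stop at the first failure.

    Each stage is a (name, fn) pair whose fn returns True on success.
    Returns the list of completed stage names and the name of the
    failed stage (or None if everything passed).
    """
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name
        completed.append(name)
    return completed, None


# Illustrative stages for a commit-to-production flow; the lambdas
# stand in for real build/test/deploy steps.
stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("functional-tests", lambda: True),
    ("deploy-staging", lambda: True),
    ("deploy-production", lambda: True),
]
done, failed = run_pipeline(stages)
```

The fail-fast behavior is the point: a broken build never reaches the deploy stages, so the feedback loop stays short and production stays protected.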
Although deployment plays a key role in a CI/CD platform, routing traffic is equally important. What happens when we roll out a new deployment in an environment, especially in production? We can’t simply shut down the old version and bring up the new one; that would mean an outage. Nor can we swap the two instantaneously. What we want is a way to control the rollout of a new release by selectively shifting traffic to the new software version. To achieve this, we need fine-grained control over traffic. Istio, for example, gives us exactly that control over traffic to new deployments: since we strive to deploy quickly, we should also reduce the risk of each deployment, and Istio helps with this.
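The traffic-shifting idea can be modeled in-process. The sketch below is not Istio’s API (Istio expresses weights declaratively in a VirtualService and enforces them in the Envoy data plane); it merely illustrates the effect of sending 90% of requests to the old version and 10% to the new one:

```python
import itertools


def weighted_router(weights):
    """Route requests between versions according to percentage weights.

    Builds a fixed 100-slot schedule (e.g. 90 slots for v1, 10 for v2)
    and cycles through it, so the split is exact and deterministic.
    """
    schedule = [version
                for version, weight in weights.items()
                for _ in range(weight)]
    counter = itertools.count()

    def route(request):
        return schedule[next(counter) % len(schedule)]

    return route


# Canary-style split: most traffic stays on the proven version.
route = weighted_router({"v1": 90, "v2": 10})
hits = [route({}) for _ in range(100)]
```

Shrinking the blast radius this way is what makes frequent deployments safe: if v2 misbehaves, only a tenth of the traffic sees it, and the weights can be dialed back to zero without a redeploy.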