Kubernetes 101: Tips and tricks to maximise container investment
According to Gartner, more than 50% of global organisations will be running containerised applications in production by 2020, up from less than 20% in 2017. If an organisation uses containers, it is probably also using an orchestration tool like Kubernetes.
While containers can help teams gain agility and flexibility and ultimately increase delivery speed, they also create a lot of complexity. Because of this, it is important that DevOps teams have monitoring in place to increase visibility and reduce the risk of failure. Here are a few tips and tricks for successful monitoring that will help maximise a container investment.
Monitor apps alongside infrastructure
With traditional servers, the best practice is to monitor application performance metrics and the health of the server running that application. Kubernetes, however, takes the notion of traditional layers to onion-like proportions. Suddenly IT teams are faced with monitoring not only the server and application, but also the health of the containers, pods, nodes and the Kubernetes control plane itself.
Monitoring the Kubernetes control plane and master components is essential when it comes to maximising the ROI of container investments. If these components are unhealthy, container workloads may not be scheduled properly, which means business applications may not be running. Unscheduled downtime has a direct and detrimental impact on an organisation's SLAs and customer commitments.
Additionally, it is important to ensure the organisation is still thoroughly monitoring the applications themselves to get the full picture. The company must ensure that its monitoring tool is capable not only of monitoring containerised applications, but of doing so based on best practices. The tool must also be easily extensible, so that custom applications within the environment can be monitored as well.
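As a rough illustration, the sketch below uses the official Kubernetes Python client to pull two basic health signals: the status of the control plane components and the Ready condition of each node. It assumes kubeconfig access to the cluster, and the `report` function is a hypothetical placeholder for whatever monitoring tool is in use.

```python
# Sketch: poll basic control-plane and node health signals.
# Assumes local kubeconfig access; "report" stands in for any monitoring backend.
from kubernetes import client, config


def report(component: str, healthy: bool) -> None:
    """Hypothetical hook - replace with a push to the monitoring tool in use."""
    print(f"{component}: {'healthy' if healthy else 'UNHEALTHY'}")


def check_cluster_health() -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()

    # Control-plane components (scheduler, controller-manager, etcd) expose a
    # ComponentStatus object on older clusters; newer clusters favour /healthz probes.
    for cs in v1.list_component_status().items:
        healthy = any(c.type == "Healthy" and c.status == "True" for c in cs.conditions or [])
        report(f"control-plane/{cs.metadata.name}", healthy)

    # Worker nodes report a "Ready" condition that should be True.
    for node in v1.list_node().items:
        ready = any(c.type == "Ready" and c.status == "True" for c in node.status.conditions)
        report(f"node/{node.metadata.name}", ready)


if __name__ == "__main__":
    check_cluster_health()
```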
Monitor at the service level
As the underlying infrastructure for applications gets more complex, it is important to ensure that a team remains focused on the applications and services that are critical to the business. Too narrow a focus on individual containers means the organisation is likely to miss the big picture. An alert on a specific container, for example, does not necessarily indicate a problem with other containers; they may be operating effectively, in which case one container going down does not mean the overall service has been negatively impacted.
To maximise ROI and stay focused on what matters, companies should use monitoring to aggregate key performance indicators across the different containers, building an overall service- or application-level view.
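A common way to get that aggregated, service-level view is to roll per-container metrics up by a shared label. The sketch below queries a Prometheus server over its HTTP API; the server address, the cAdvisor metric name and the `service` label on pods are all assumptions made for illustration rather than details from the article.

```python
# Sketch: aggregate per-container CPU usage into a per-service figure.
# Assumes a Prometheus server scraping cAdvisor metrics and pods carrying a "service" label.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # assumed address


def cpu_by_service() -> dict[str, float]:
    # sum(...) by (service) collapses individual containers into one KPI per service,
    # so an alert can fire on the service as a whole rather than on a single container.
    query = 'sum by (service) (rate(container_cpu_usage_seconds_total[5m]))'
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return {r["metric"].get("service", "unknown"): float(r["value"][1]) for r in results}


if __name__ == "__main__":
    for service, cpu in cpu_by_service().items():
        print(f"{service}: {cpu:.3f} CPU cores")
```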
Automate monitoring to save time
Kubernetes dynamically schedules containerised workloads across nodes in the cluster to best use available resources and still meet workload requirements. Manually adding and removing dynamic containers and pods to and from monitoring is time-consuming, inefficient and unrealistic. If an organisation's monitoring solution requires manual changes to add/remove resources from monitoring, configure metrics to be collected, or even specify when alerts should be triggered, the IT team will end up sinking valuable time into resources that are short-lived to begin with.
Just how short is the typical lifespan of a container? Sysdig's 2018 Docker Report found that:
- 95% of containers live less than a week
- 85% live less than a day
- 74% live less than an hour
- 27% live between five and 10 minutes
- 11% live less than 10 seconds.
This limited lifespan is not a bad thing; after all, their ephemeral nature is why companies choose to implement containers in the first place. With such short lifespans, however, it is vital that IT teams automate container monitoring, including the addition and removal of cluster resources, to reduce the amount of manual effort involved.
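One way to automate that add/remove cycle is to subscribe to the Kubernetes watch stream and let pod lifecycle events drive registration. The sketch below uses the official Python client; `register` and `deregister` are hypothetical hooks standing in for whatever the monitoring tool actually exposes.

```python
# Sketch: let pod lifecycle events drive monitoring registration automatically.
# Assumes kubeconfig access; register/deregister are hypothetical monitoring-tool hooks.
from kubernetes import client, config, watch


def register(pod_name: str, namespace: str) -> None:
    print(f"+ monitoring {namespace}/{pod_name}")  # placeholder


def deregister(pod_name: str, namespace: str) -> None:
    print(f"- monitoring {namespace}/{pod_name}")  # placeholder


def follow_pod_lifecycle() -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # The watch stream yields ADDED / MODIFIED / DELETED events as pods come and go,
    # so short-lived containers are picked up and dropped without manual changes.
    for event in w.stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        if event["type"] == "ADDED":
            register(pod.metadata.name, pod.metadata.namespace)
        elif event["type"] == "DELETED":
            deregister(pod.metadata.name, pod.metadata.namespace)


if __name__ == "__main__":
    follow_pod_lifecycle()
```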
Obtain a unified view of monitored resources
Many companies using Kubernetes have high levels of complexity in their infrastructure. This includes operating both in the cloud and on-premises and using containers as a unifying layer to standardise application management and deployment. By consolidating monitoring data in one central location, organisations gain a single-pane-of-glass view of metrics across their different environments.
In a modern IT infrastructure these environments are all connected, so without a unified view troubleshooting issues becomes more challenging and time-consuming. In addition, with the high levels of complexity inherent in modern infrastructure, monitoring tools must provide more intelligence to help users remain proactive. Unified monitoring ensures this intelligence can be applied and leveraged across the entire distributed IT infrastructure.
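In practice, a unified view usually starts with a consistent labelling scheme, so that metrics from every environment land in one backend yet can still be told apart. The sketch below illustrates the idea with the prometheus_client library; the metric name, label names and values are assumptions chosen purely for illustration.

```python
# Sketch: tag every exported metric with environment and cluster labels so one
# backend can hold cloud and on-premises data side by side. Labels are illustrative.
import time

from prometheus_client import Gauge, start_http_server

SERVICE_LATENCY = Gauge(
    "service_request_latency_seconds",
    "Request latency per service",
    ["environment", "cluster", "service"],
)

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for a central Prometheus to scrape
    # The same metric name works for both environments; labels keep them distinguishable.
    SERVICE_LATENCY.labels("cloud", "eks-prod", "checkout").set(0.12)
    SERVICE_LATENCY.labels("on-prem", "dc1-prod", "checkout").set(0.34)
    time.sleep(60)  # keep the exporter alive long enough to be scraped
```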
Using Kubernetes empowers developers to own what they are running in production as they push towards organisational goals such as continuous delivery and the overarching objective of getting products to end customers faster. However, it is crucial to remember that monitoring is a non-negotiable step if the company wants to truly reap the benefits of a Kubernetes investment.