Kubernetes announced itself to the world when it brought containerized app management to the mainstream a few years back. Now, much of the developer community uses it to deploy and manage apps at scale in production. Along the way, the community has gathered tips and best practices for making the best use of Kubernetes and Google Kubernetes Engine (GKE). Here are some of the most popular suggestions from industry advisors about deploying and using Kubernetes.
1. Use Kubernetes Namespaces for resource management
When you start to build services on Kubernetes, even straightforward tasks become complicated. Employing Namespaces, a kind of virtual cluster within your cluster, can assist you with organization, security, and performance.
In most Kubernetes distributions, the cluster comes out of the box with a Namespace called "default." In fact, Kubernetes ships with three Namespaces: default, kube-system (used for Kubernetes components), and kube-public (used for public resources). kube-public can't be used for much right now, and it's a good idea to leave kube-system alone, especially in a managed system like Google Kubernetes Engine.
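Creating a Namespace of your own takes only a small manifest. As a minimal sketch (the name "team-a" here is just an example):

```yaml
# Minimal Namespace manifest; "team-a" is an example name.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Apply it with `kubectl apply -f namespace.yaml`, then target it with the `--namespace team-a` flag on subsequent kubectl commands.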
2. Prevention is better than cure, and that applies to Kubernetes too
Managing large, distributed systems can be confusing, especially when something goes wrong. Kubernetes health checks are the easiest way to make sure app instances are working, and creating custom health checks lets you tailor them to your environment.
Kubernetes provides two types of health checks, and it's imperative to understand the difference between them and their use cases.
Through Readiness probes, Kubernetes will know when your app is ready to serve traffic. Kubernetes ensures that the readiness probes pass before allowing a service to send traffic to the pod. If the readiness probe begins to fail, Kubernetes will stop sending traffic to the pod until it passes.
Through the Liveness probe, Kubernetes will know the status of your app, whether it is alive or dead. If your app is alive, Kubernetes will leave it alone. If your app is dead, Kubernetes will remove the pod and start a new one to replace it.
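Both probes are declared per container in the pod spec. A minimal sketch, where the image, ports, endpoints, and timings are all example values you would replace with your own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                          # example pod name
spec:
  containers:
  - name: app
    image: gcr.io/example/app:1.0    # example image
    ports:
    - containerPort: 8080
    readinessProbe:                  # gates traffic to the pod
      httpGet:
        path: /healthz/ready         # example endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                   # restarts the container on failure
      httpGet:
        path: /healthz/alive         # example endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```

HTTP probes are only one option; `exec` and `tcpSocket` probes are also available when your app doesn't expose an HTTP health endpoint.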
3. Keep tabs on your deployments with requests and limits
There's much to appreciate about Kubernetes scalability. However, you should still keep tabs on resources to make sure containers actually have enough to run. It's quite easy for teams to spin up more replicas than they require, or to make a configuration change that unexpectedly affects CPU and memory consumption.
Requests and limits are the mechanisms Kubernetes uses to control resources (CPU and memory). Requests are guaranteed to the container: if a container requests a resource, Kubernetes will only schedule it on a node that can provide that resource. Limits, on the other hand, make sure a container never goes above a defined value. The container is allowed to go up to the limit, but not beyond it.
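In a container spec this looks like the following sketch, where the values are purely illustrative and should be tuned to your workload:

```yaml
# Example resources block inside a container spec; values are illustrative.
resources:
  requests:
    cpu: "250m"      # guaranteed: the scheduler only picks nodes with this much free
    memory: "64Mi"
  limits:
    cpu: "500m"      # ceiling: CPU usage is throttled above this
    memory: "128Mi"  # exceeding the memory limit gets the container OOM-killed
```

Note the asymmetry: exceeding a CPU limit means throttling, while exceeding a memory limit means the container is killed.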
4. Identify services that are running outside the cluster
There might be services living outside your Kubernetes cluster that you want to access regularly. Fortunately, there are a few different ways to connect to these services, such as external service endpoints or ConfigMaps.
An example of this is databases. While some cloud-native databases like Cloud Firestore or Cloud Spanner use a single endpoint for all access, most databases have separate endpoints for different instances.
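One common pattern is a Service with no pod selector, backed by a manually defined Endpoints object that points at the external database. A sketch, where the names, IP, and port are example values:

```yaml
# Service with no selector, so Kubernetes does not manage its endpoints.
apiVersion: v1
kind: Service
metadata:
  name: external-db    # example name; pods resolve this via cluster DNS
spec:
  ports:
  - port: 5432
---
# Manually defined Endpoints; the name must match the Service above.
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 10.1.2.3       # example: the external database's IP
  ports:
  - port: 5432
```

Pods inside the cluster can then reach the database at `external-db:5432` as if it were an in-cluster service, which keeps application config uniform.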
5. Determine whether to run databases over Kubernetes or not
There are a bunch of factors to weigh when you are thinking about running databases on Kubernetes. Using the same tools for databases and applications can make your life smoother, and you get similar benefits of repeatability and rapid spin-up.
Before you seriously consider running a database on Kubernetes, here is a brief review of the options available on Google Cloud Platform.
Fully Managed Databases:
This comprises Cloud Spanner, Cloud Bigtable, and Cloud SQL, among others. It is the low-ops option, since Google Cloud works behind the scenes and handles many of the maintenance tasks such as backups, patching, and scaling.
Do-it-yourself on a VM:
This is best described as the full-ops choice, where you take full responsibility for building your database, scaling it, managing reliability, setting up backups, and much more.
Run it on Kubernetes:
Running a database on Kubernetes is closer to the full-ops option. However, you do get some benefits from the automation Kubernetes provides to keep the database application running.
6. Understand Kubernetes termination practices
All good things come to an end, even Kubernetes containers. Kubernetes does a lot more than monitor your application for crashes: it can create replicas of your application and run multiple versions of it at the same time. That means there are various reasons why Kubernetes might terminate a perfectly healthy container. If you perform a rolling update of your deployment, Kubernetes slowly terminates old pods while spinning up new ones. If you drain a node, Kubernetes terminates all the pods on that node. And if a node runs out of resources, Kubernetes terminates pods to free those resources.
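Because termination can happen at any time, it pays to give containers a graceful shutdown path. A sketch of the relevant pod spec fields, with example values for the grace period, image, and preStop command:

```yaml
# Graceful-shutdown settings on a pod (example values).
spec:
  terminationGracePeriodSeconds: 60   # time allowed after SIGTERM before SIGKILL
  containers:
  - name: app
    image: gcr.io/example/app:1.0     # example image
    lifecycle:
      preStop:
        exec:
          # runs before SIGTERM is sent, e.g. to let in-flight requests drain
          command: ["/bin/sh", "-c", "sleep 10"]
```

Your application should also handle SIGTERM itself, stopping new work and finishing in-flight requests before the grace period expires.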