Kubernetes…

Radha
4 min read · Dec 30, 2020

Kubernetes (K8s) is a portable, extensible, open-source platform that lets us deploy and manage containerized applications at scale.

K8s is an orchestration (management) tool: it does not launch containers itself, but manages them, delegating the actual container launch to a runtime such as Docker.

Being open source, K8s can run containerized applications anywhere, in the cloud or on-premises, without a need to change operational tooling.

Key features of K8s….

  • In K8s, the basic unit of deployment is the pod; in Docker it is the container.
  • A pod is not exactly equal to a container, though: a pod is a management wrapper that can group one or more containers sharing network and storage.
  • To launch a pod, K8s requires a container engine, which can be Docker (or another runtime such as containerd).
  • K8s has its own load balancing: a Service distributes traffic across matching pods automatically.
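To make the pod idea concrete, here is a minimal pod manifest as it might be given to K8s (the name and image are placeholders, not from any specific deployment):

```yaml
# A minimal pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder name
spec:
  containers:
    - name: web
      image: nginx:1.25  # placeholder image
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` asks K8s to schedule the pod; the container engine on the chosen node then actually launches the container.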

How K8s works…

K8s manages a cluster of compute instances and schedules containers to run on the cluster based on the available compute resources and the resource requirements of each pod.

K8s automatically starts pods on the cluster based on their resource requirements, and automatically restarts pods if they fail or if the instances they are running on fail. Each pod gets its own IP address, and each Service gets a stable DNS name, which is how pods connect to each other and to external traffic.
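The scheduling described above is driven by the resource requests each pod declares; a minimal sketch (names and values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker        # placeholder name
spec:
  containers:
    - name: worker
      image: busybox:1.36   # placeholder image
      command: ["sleep", "3600"]
      resources:
        requests:           # the scheduler picks a node with this much free capacity
          cpu: "500m"
          memory: "256Mi"
        limits:             # hard ceiling the container may not exceed
          cpu: "1"
          memory: "512Mi"
  restartPolicy: Always     # the kubelet restarts the container if it fails
```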

K8s provides a framework to run distributed systems resiliently. It takes care of scaling and failover for your applications, and provides deployment patterns.
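Scaling and failover are usually expressed through a Deployment, which keeps a desired number of pod replicas alive; a hedged sketch (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3              # K8s keeps 3 pods running; failed ones are replaced
  selector:
    matchLabels:
      app: web
  template:                # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Changing `replicas` (or the image tag) and re-applying the manifest is enough; K8s reconciles the cluster toward the new desired state.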

K8S provides us…

  • Load balancing
  • Storage orchestration
  • Rollouts & rollbacks
  • Configuration and secret management
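As a sketch of the load-balancing piece, a Service spreads traffic across every pod matching its label selector (names here are placeholders tied to the hypothetical `app: web` deployment above, not a real cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web        # load-balances across all pods carrying this label
  ports:
    - port: 80      # port the Service exposes
      targetPort: 80  # port the pods listen on
```

Rollouts and rollbacks are then driven by commands such as `kubectl rollout undo`, while ConfigMaps and Secrets inject configuration into pods without rebuilding images.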

Now, let’s see how the industry uses K8s and which use cases it solves. :)

There are 50+ industry use cases of K8s…

Some of them are…..

OpenAI

Challenge

An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to scale easily. Portability, speed, and cost were the main drivers.

Solution

OpenAI began running Kubernetes on top of AWS and after some time, it migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity.

Impact

The company has benefited from greater portability: “Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters,” says Berner. Being able to use its own data centers when appropriate is “lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud,” he adds. “As long as the utilization is high, the costs are much lower there.” Launching experiments also takes far less time: “One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work.”

Spotify

Challenge

Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.

Solution

“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.

Impact

The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
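Autoscaling of the kind Spotify benefits from is typically configured in Kubernetes with a HorizontalPodAutoscaler; a minimal sketch (the names are placeholders, not Spotify’s actual configuration):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:          # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web-deploy       # placeholder deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU use exceeds 70%
```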

There are still many more use cases of K8s…
The list is nearly endless, and K8s looks like the future of container management…
Well..
That’s all for now…
Thanks for reading…!!

:)
