Daily Exercise 002: How-Kubernetes-Deployments-Work
Day 1 Script
Today we will discuss how deployments in Kubernetes work, specifically how you can use them for reliable zero downtime upgrades. Suppose you have a simple application with three replicas controlled by a deployment. These replicas are pods, and a load balancer distributes traffic across them. In the deployment, you are likely using the first version of your container image. This means each of your three pods is running version one of your application. When you decide to push a new image to the deployment object, it will trigger a rollout process. The deployment itself takes responsibility for updating your application across the cluster. We will explore how this happens without interrupting the service for your users.
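The setup above can be sketched as a minimal Deployment spec. Here it is written as a Python dict whose keys mirror the fields of the Kubernetes YAML manifest; the names (`my-app`, the image tag `my-app:v1`) are placeholders, not from any real cluster.

```python
# A minimal Deployment, expressed as a Python dict mirroring the YAML fields.
# Three replicas of one container image sit behind a load balancer; the
# image tag pins every pod to version one of the application.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-app"},
    "spec": {
        "replicas": 3,  # three identical pods
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [
                    # version one of the container image (placeholder tag)
                    {"name": "web", "image": "my-app:v1"}
                ]
            },
        },
    },
}

print(deployment["spec"]["replicas"])  # 3
```

Changing the `image` field to a new tag is the "push a new image" step that triggers the rollout discussed below.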
Day 1 Translate
| Japanese | English |
|---|---|
| 今日は話し合います、Kubernetesのデプロイメントがどのように機能するかを、特に信頼性の高いゼロダウンタイムアップグレードのためにそれらをどのように使用できるかについて | Today we will discuss how deployments in Kubernetes work, specifically how you can use them for reliable zero downtime upgrades. |
| 仮定してください、デプロイメントによって制御される3つのレプリカを持つシンプルなアプリケーションがあると | Suppose you have a simple application with three replicas controlled by a deployment. |
| デプロイメントでは、おそらくコンテナイメージの最初のバージョンを使用しているでしょう | In the deployment, you are likely using the first version of your container image. |
| これは意味します、3つのポッドのそれぞれがアプリケーションのバージョン1を実行していることを | This means each of your three pods is running version one of your application. |
| デプロイメントオブジェクトに新しいイメージをプッシュすることに決めると、それはトリガーします、ロールアウトプロセスを | When you decide to push a new image to the deployment object, it will trigger a rollout process. |
| デプロイメント自体が責任を負います、クラスター全体でアプリケーションを更新することに | The deployment itself takes responsibility for updating your application across the cluster. |
Day 2 Script
When you change your deployment from version one to version two, the update does not happen immediately. It is not as if all your containers are replaced at the exact same moment. There are two main reasons for this gradual approach. First, if we replaced everything at once, we would take down every replica of your service and it would be unavailable. Second, you might have flaws in the new version. You do not want to move everything to version two if it is going to start crashing. Therefore, we need to perform a gradual rollout from one version to the next while maintaining access for all users. This careful process ensures that your application remains stable and available throughout the entire upgrade.
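The two failure modes above can be made concrete with a toy sketch. This is not Kubernetes code, just a simulation of replacing pods in batches: with a batch of one, at least two of the three replicas stay available at every step, while replacing everything at once drops availability to zero mid-update.

```python
# Toy sketch (not Kubernetes internals): replace pods in batches and record
# the worst-case number of replicas still available during the update.
def rolling_update(pods, new_version, batch=1):
    pods = list(pods)
    min_available = len(pods)
    for i in range(0, len(pods), batch):
        # Pods in the current batch are momentarily down while they restart.
        available = len(pods) - min(batch, len(pods) - i)
        min_available = min(min_available, available)
        for j in range(i, min(i + batch, len(pods))):
            pods[j] = new_version
    return pods, min_available

# Gradual: one pod at a time, two of three always serving.
pods, worst = rolling_update(["v1", "v1", "v1"], "v2", batch=1)
print(pods, worst)  # ['v2', 'v2', 'v2'] 2

# All at once: every replica is down at the same moment.
_, worst_all = rolling_update(["v1", "v1", "v1"], "v2", batch=3)
print(worst_all)  # 0
```

The gradual path also limits the blast radius of a flawed version two: only the replaced pod crashes, not the whole service.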
Day 2 Translate
| Japanese | English |
|---|---|
| デプロイメントをバージョン1からバージョン2に変更しても、更新はすぐには行われません | When you change your deployment from version one to version two, the update does not happen immediately. |
| すべてのコンテナがまったく同じ瞬間に置き換えられるわけではありません | It is not as if all your containers are replaced at the exact same moment. |
| この段階的なアプローチには2つの主な理由があります | There are two main reasons for this gradual approach. |
| もしすべてを一度に置き換えたら、サービスのすべてのレプリカを停止させることになり、利用できなくなるでしょう | If we replaced everything at once, we would take down every replica of your service and it would be unavailable. |
| もしクラッシュし始めるのであれば、すべてをバージョン2に移行したくはないはずです | You do not want to move everything to version two if it is going to start crashing. |
| この慎重なプロセスは確実にします、アップグレード全体を通してアプリケーションが安定し、利用可能な状態であることを | This careful process ensures that your application remains stable and available throughout the entire upgrade. |
Day 3 Script
To achieve a smooth rollout, Kubernetes uses two important concepts called liveness checks and readiness checks. Together, these define what it means for a pod to be healthy. A liveness check determines if a pod should be automatically restarted, for example if the application is stuck. A readiness check determines if your application is actually ready to serve traffic. When you change the version to version two, the deployment creates a new replica of your application. Assuming the container passes its liveness check, the system keeps it running but does not add it to the load balancer yet. Only when the readiness check passes is traffic directed from the load balancer to this new container.
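The division of labor between the two checks can be sketched as a toy control loop. `Pod` here is a stand-in class, not the Kubernetes API: a liveness failure triggers a restart, while readiness alone decides load-balancer membership.

```python
# Toy sketch of one reconciliation pass: liveness decides restarts,
# readiness decides whether the pod receives traffic.
class Pod:
    def __init__(self):
        self.restarts = 0
        self.alive = True    # process is running and not stuck
        self.ready = False   # application has finished warming up

def reconcile(pod, load_balancer):
    if not pod.alive:
        pod.restarts += 1                  # liveness failed -> restart
        pod.alive, pod.ready = True, False # fresh container, not ready yet
    if pod.ready:
        load_balancer.add(pod)             # readiness passed -> traffic
    else:
        load_balancer.discard(pod)         # running, but no traffic yet

lb = set()
pod = Pod()
reconcile(pod, lb)   # alive but not ready: kept running, no traffic
print(pod in lb)     # False
pod.ready = True
reconcile(pod, lb)   # readiness passes: load balancer sends it traffic
print(pod in lb)     # True
```

This is exactly the window the script describes: between passing liveness and passing readiness, the new container runs but serves nothing.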
Day 3 Translate
| Japanese | English |
|---|---|
| スムーズなロールアウトを達成するために、Kubernetesは2つの重要な概念を使用します、ライブネスチェックとレディネスチェックと呼ばれる | To achieve a smooth rollout, Kubernetes uses two important concepts called liveness checks and readiness checks. |
| これらは合わせて、ポッドが健全であるとはどういう意味かを定義します | Together, these define what it means for a pod to be healthy. |
| ライブネスチェックは判断します、ポッドを自動的に再起動すべきかどうかを、例えばアプリケーションが停止している場合など | A liveness check determines if a pod should be automatically restarted, for example if the application is stuck. |
| レディネスチェックは判断します、アプリケーションが実際にトラフィックを処理する準備ができているかどうかを | A readiness check determines if your application is actually ready to serve traffic. |
| コンテナがライブネスチェックに合格したと仮定すると、システムは実行を継続しますが、まだロードバランサーには追加しません | Assuming the container passes its liveness check, the system keeps it running but does not add it to the load balancer yet. |
| レディネスチェックに合格したときだけ、トラフィックはロードバランサーからこの新しいコンテナに向けられます | Only when the readiness check passes is traffic directed from the load balancer to this new container. |
Day 4 Script
Once the new container is up and serving traffic, the deployment decides to delete one of the old pods. However, we must ensure that existing user traffic is not interrupted during this deletion. To handle this, every pod has something called a termination grace period. By default, this period lasts for thirty seconds. When the deployment decides to delete a pod, the pod moves into a terminating state. The connection to the load balancer is severed, but the container stays running for the rest of the grace period. This means any active requests are processed successfully, but no new requests are sent to the old container. After thirty seconds, the container is fully deleted and removed from the system.
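The termination timeline can be sketched as follows. This is a simplified model, not real Kubernetes behavior in every detail: on deletion the pod leaves the load balancer, and each in-flight request finishes only if it completes within the grace period.

```python
# Toy timeline of pod termination. The pod is severed from the load
# balancer immediately, but the container keeps running until the grace
# period elapses, so requests already in flight can finish.
GRACE_PERIOD = 30  # seconds; the Kubernetes default

def terminate(pod, load_balancer, in_flight_seconds):
    load_balancer.discard(pod)  # no new requests reach this pod
    # Each in-flight request completes only if it needs less time
    # than the grace period leaves; otherwise it is cut off.
    return [
        "completed" if secs <= GRACE_PERIOD else "killed"
        for secs in in_flight_seconds
    ]

lb = {"old-pod"}
print(terminate("old-pod", lb, [5, 29, 45]))  # ['completed', 'completed', 'killed']
print("old-pod" in lb)                        # False
```

A request that would need forty-five seconds is the exception that proves the rule: the grace period protects typical requests, and unusually long ones should either finish faster or the grace period should be raised in the pod spec.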
Day 4 Translate
| Japanese | English |
|---|---|
| 新しいコンテナが起動してトラフィックを処理し始めると、デプロイメントは古いポッドの1つを削除することを決定します | Once the new container is up and serving traffic, the deployment decides to delete one of the old pods. |
| この削除中に既存のユーザートラフィックが中断されないことを確実にしなければなりません | We must ensure that existing user traffic is not interrupted during this deletion. |
| これに対処するために、すべてのポッドには終了猶予期間と呼ばれるものがあります | To handle this, every pod has something called a termination grace period. |
| デフォルトでは、この期間は30秒間続きます | By default, this period lasts for thirty seconds. |
| デプロイメントがポッドの削除を決定すると、ポッドは終了中の状態に移行します | When the deployment decides to delete a pod, the pod moves into a terminating state. |
| これは意味します、アクティブなリクエストは正常に処理されますが、新しいリクエストは古いコンテナには送信されないことを | This means any active requests are processed successfully, but no new requests are sent to the old container. |
Day 5 Script
After the first old pod is removed, the deployment moves on and creates another new pod. The same process repeats where the system waits for the liveness and readiness checks to pass. Once the new container is ready, traffic is directed to it and another old pod is scheduled for deletion. This cycle continues until all replicas are running version two. The deployment is also highly configurable. You can change how many containers are updated at a time or how long to wait between updates. You can even choose to add extra containers during the process to maintain capacity. This gives you a powerful way to manage application upgrades with zero downtime in your cluster.
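The tunables mentioned above live in the deployment's update strategy. Here is that stanza written as a Python dict mirroring the YAML fields: `maxSurge` allows extra pods above the replica count during the update, and `maxUnavailable` bounds how many replicas may be down at once.

```python
# The rolling-update strategy stanza of a Deployment spec, as a dict
# mirroring the YAML fields. A related field, minReadySeconds (not shown),
# controls how long a new pod must be ready before the rollout proceeds.
strategy = {
    "type": "RollingUpdate",
    "rollingUpdate": {
        "maxSurge": 1,        # allow one extra pod above the replica count
        "maxUnavailable": 0,  # never drop below the desired replica count
    },
}

replicas = 3
peak_pods = replicas + strategy["rollingUpdate"]["maxSurge"]
min_serving = replicas - strategy["rollingUpdate"]["maxUnavailable"]
print(peak_pods, min_serving)  # 4 3
```

With `maxSurge: 1` and `maxUnavailable: 0`, the cluster briefly runs four pods but never serves from fewer than three, which is the "extra containers to maintain capacity" option the script describes.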
Day 5 Translate
| Japanese | English |
|---|---|
| 最初の古いポッドが削除された後、デプロイメントは次に進み、別の新しいポッドを作成します | After the first old pod is removed, the deployment moves on and creates another new pod. |
| システムがライブネスチェックとレディネスチェックの合格を待つという同じプロセスが繰り返されます | The same process repeats where the system waits for the liveness and readiness checks to pass. |
| 新しいコンテナの準備ができると、トラフィックがそこに向けられ、別の古いポッドの削除がスケジュールされます | Once the new container is ready, traffic is directed to it and another old pod is scheduled for deletion. |
| デプロイメントはまた、高度に設定可能です | The deployment is also highly configurable. |
| 一度に更新されるコンテナの数や、更新間の待機時間を変更できます | You can change how many containers are updated at a time or how long to wait between updates. |
| これにより、クラスター内でゼロダウンタイムでアプリケーションのアップグレードを管理する強力な方法が提供されます | This gives you a powerful way to manage application upgrades with zero downtime in your cluster. |