k8s -> Orchestrating containerized apps
K8s cluster -> Made up of a Master and Nodes. Similar to a football team: the nodes are the players and the master is the manager.
K8s Deployment -> We take our code and containerize it, then describe how it should run in an object called a Deployment. This is done using a YAML file.
Master -> Runs on a single server on top of Linux. There can be multiple masters, but that is more complex to configure. It has the kube-apiserver, which is the front end of the control plane and exposes a REST API. The cluster store is the persistent storage and uses etcd; this is where state and config live. It is the only stateful part of the control plane, the rest are stateless. etcd is an open-source distributed key-value store, and Kubernetes always consults it for cluster state. The kube-controller-manager is the controller of controllers. The kube-scheduler assigns work to nodes.
Nodes -> Each node has a kubelet (the main agent: watches the master's api-server and reports back to the master in case of failure; its endpoint port is 10255, with endpoints /spec, /healthz and /pods), a container runtime (container management, usually Docker or CoreOS rkt) and kube-proxy (used for networking: makes sure each pod gets an IP, and provides load balancing).
Declarative Model and Desired State -> We give Kubernetes a desired state, similar to Ansible. We hand it a manifest file describing what the result should look like, not how to get there. Think of building a kitchen: we just tell the contractor what we want and he does the rest. Suppose we ask K8s to run 3 x nginx pods and for some reason one goes down; K8s will automatically start a replacement on an existing node.
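As a concrete sketch of that 3 x nginx desired state (the name nginx-rc is hypothetical; nginx is the stock image from Docker Hub):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc            # hypothetical name, for illustration only
spec:
  replicas: 3               # desired state: always keep 3 nginx pods running
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If one pod dies, Kubernetes sees the observed state (2) differ from the desired state (3) and starts a replacement; no imperative steps are ever given.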
Pods -> K8s always runs containers inside a pod. The pod provides the network stack and volumes, and its containers use them. You only need two containers in one pod when they must share the same resources. For example, a web server and a log scraper: the log scraper needs to read the logs from the web server. A Replication Controller sits above pods, and a Deployment sits above the replication controller. Replication controllers are used to deploy multiple replicas of a pod.
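The web server + log scraper case can be sketched like this (the pod name and both images are hypothetical; the shared emptyDir volume is how the scraper reads the server's logs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-scraper               # hypothetical name
spec:
  volumes:
  - name: logs                         # pod-level volume shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: example/web-server:latest   # hypothetical image
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/web          # web server writes logs here
  - name: scraper
    image: example/log-scraper:latest  # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/web          # scraper reads the same files
```

Both containers share the pod's single IP and network namespace, so they could also talk to each other over localhost.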
Services -> We can't rely on a pod's IP, because pods get deleted and respawn with new IPs. A Service keeps track of the new pods' IPs. It provides an abstraction over multiple pods and also load balances between them. Every pod carries labels, and the service load balances only across pods whose labels match: if the service selects PROD, WEB, V.14, it will only balance across pods carrying the labels PROD, WEB, V.14. In short, a service provides a stable IP and DNS name.
Deployments -> REST objects; we write them in a YAML file and they are consumed by the apiserver. Rolling updates and rollbacks are done with deployments. In a deployment we declare what we want, not what to do. Example: we say we need 4 replicas, and Kubernetes does whatever is needed to keep 4 running even if a node goes down.
Installation:
AWS
We need kubectl, kops and the AWS CLI installed, plus full access to EC2, Route53, S3, IAM and VPC.
kops and kubectl are installed via curl. We also need an S3 bucket and must export the bucket name in the variable KOPS_STATE_STORE=s3://<bucket_name>
Manual Install -> Done using kubeadm
infra1@Hostname[DEV][~] $ sudo apt-get install docker.io kubeadm kubectl kubelet kubernetes-cni
kubeadm init -> Starts the cluster. It prints a few follow-up commands; just execute them.
Pod:
A pod contains one or more containers. The pod gets one IP even if it has many containers; every container inside a single pod shares the same network namespace. Pods communicate with each other over a pod network, while containers within a pod communicate over the localhost interface. Two containers inside the same pod therefore cannot use the same port. A pod is declared in a manifest file (JSON or YAML), which is sent to the API server.
Sample Pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: hello-ctr
    image: nigelpoulton/pluralsight-docker-ci:latest
    ports:
    - containerPort: 8080
We don't usually work directly with pods. The level above a pod is the replication controller: we deploy a replication controller, which in turn deploys the pods. Example YAML file below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 5
The replication controller implements desired state and is a higher-level construct: it will make sure there are always 5 replicas. In the YAML, spec provides the replication details and template contains the pod spec.
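Putting those pieces together, the truncated hello-rc sample could be completed with a template section reusing the pod spec from the earlier example (the selector and labels are assumptions added to tie the controller to its pods):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 5                # desired state: always 5 replicas
  selector:
    app: hello-world         # assumed label; must match the template labels below
  template:                  # pod spec the controller stamps out
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
```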
Services:
Until now the only way we know to check that our app is running is the "kubectl get pods" command. To access the app from outside the cluster (e.g. as a web page) or from inside it, we use services. A Service is a REST object; like pods and replication controllers, we define it in a YAML file. The service is our stable endpoint, because pods die and their IPs change. So services provide a reliable IP, DNS name and port, keep track of the new pods that replace dead ones, and use labels to decide which pods to load balance across.
Sample YAML:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world
There are three service types we can set in spec: ClusterIP, NodePort and LoadBalancer.
The nodePort field is optional; if we omit it, Kubernetes assigns a port from the default range automatically.
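For comparison, a ClusterIP version of the same service (cluster-internal only, no node port exposed; this variant is a sketch, not from the course, and the name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc-internal   # hypothetical name
spec:
  type: ClusterIP            # the default type: stable virtual IP inside the cluster only
  ports:
  - port: 8080               # port the service listens on
    targetPort: 8080         # port on the pods (assumed same as the containerPort)
    protocol: TCP
  selector:
    app: hello-world         # load balance across pods carrying this label
```

Since there is no nodePort, this service is reachable only from inside the cluster via its cluster IP or DNS name.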
Deployment:
Deployments give us rolling updates and rollbacks. When we do a rolling update through a deployment (i.e. v1 to v2), the old replica set / replication controller does not get deleted; it just ends up with no pods inside it, which is what makes rolling back possible.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
In the YAML above, note the apiVersion: at this point Deployments live under extensions/v1beta1, and plain v1 will not work. (In later Kubernetes releases they moved to apps/v1.)
Updating Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/pluralsight-docker-ci:edge
        ports:
        - containerPort: 8080
The additions compared to the previous manifest are minReadySeconds and the strategy block.
minReadySeconds -> wait 10 seconds after each new pod comes up before marking it ready and moving on to the next one. In the strategy block, maxUnavailable: 1 means at most one pod may be unavailable during the update, and maxSurge: 1 means at most one extra pod may be created above the desired replica count.
Commands:
kubectl get nodes
kubectl get pods --all-namespaces -> Will show dns, etcd, kube-proxy details (if DNS is not ready, run the command below)
kubectl apply --filename https://git.io/weave-kube-1.6
kubeadm join --token <token> <IP> -> To join a Kubernetes node to the master
kubectl create -f pod.yml -> (To create a pod, replication controller and service)
kubectl get pods (Will show the list of pods with status)
kubectl get pods/hello-pod -> To get status of specific pod
kubectl describe pods (will give the pod information, including status such as Pending, and also Labels)
kubectl describe svc hello-svc (will give service information)
kubectl delete pods hello-pod
kubectl apply -f rc.yml (will apply the changes, like updating replica from 10 to 20)
kubectl expose rc hello-rc --name=hello-svc --target-port=8080 --type=NodePort (you will get message service "hello-svc" exposed) (Expose argument will create service object, rc is replication controller, we are creating a service hello-svc )
kubectl describe svc hello-svc (NodePort is the cluster-wide port; we use it in our webpage URL. By default NodePort values are between 30000 and 32767)
kubectl get svc (To get the services)
kubectl delete svc hello-svc (To delete services)
kubectl get ep (to get endpoint)
kubectl rolling-update -f update-rc.yml (Rolling-updates the replication controller; not recommended, use deployments instead)
kubectl create -f deploy.yml (Creating deployment)
kubectl describe deploy hello-deployment (gives description about deployment)
kubectl get rs (to get replica set. It will show how many replica we asked for DESIRED and how many is there CURRENT and READY)
kubectl describe rs (Describing replica set)
kubectl apply -f deploy.yml --record (For applying deployment update)
kubectl rollout status deployment hello-deploy (Run this after deployment to get the status)
kubectl get deploy hello-deploy ( Will show desired, current, up to date and available status)
kubectl rollout history deployment hello-deploy (To see deployment history)
kubectl get rs (To get replica set)
kubectl describe deploy hello-deploy (Run this to see whether the docker image we used is updated in the Image)
kubectl rollout undo deployment hello-deploy --to-revision=1 (It will roll back to the older version)