Tuesday, December 25, 2018
How To Upload K8S Pods Logs Into OCI Object Storage via Fluentd
Please refer to the GitHub doc for details.
Sunday, December 23, 2018
How To Restart Fluentd Process in K8S Without Changing Pod Name
Requirement:
When we debug K8S apps running in Pods, we often delete the pods to force the apps inside to restart and reload their configuration. For a Statefulset, the pod name won't change. However, for a Deployment or Daemonset, new pod names are generated after we delete the pods, which makes it harder to track which pod we are debugging. There is an easier way to restart apps in the pods while keeping the same pod name.
Solution:
We use fluentd as an example.
- Update the config file of fluentd in the pod, i.e. the /etc/fluent/fluent.conf file
- Back up the existing docker image via docker tag
docker tag k8s.gcr.io/fluentd-elasticsearch:v.2.0.4 k8s.gcr.io/fluentd-elasticsearch:backup
- Commit the conf file changes into the docker image, otherwise all your changes will be lost after a bounce. Use docker ps | grep fluent to find the correct container name of the K8S pod.
docker commit <full k8s pod name in docker> k8s.gcr.io/fluentd-elasticsearch:v.2.0.4
- Use kubectl exec -it <pod name> /bin/bash to get into the pod
- As ps is not installed by default in many standard pod images, we can use the "find" command to discover which process fluentd is running as.
find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null
- We can use kill to send signals to the fluentd process (see the doc link), e.g. send SIGHUP to ask the process to reload the conf file. It is quite possible fluentd just restarts and triggers a pod bounce itself; that is fine, as we have already committed the changes into the docker image.
kill -SIGHUP 8   (8 is the fluentd PID found via the find command above)
- In this way the pod name is kept; you will have the same pod name after the pod bounce.
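Putting the steps together after the conf file edit, a minimal sketch of the whole flow (using the image tag from above and PID 8 found with the find command; the container and pod names are placeholders):
docker tag k8s.gcr.io/fluentd-elasticsearch:v.2.0.4 k8s.gcr.io/fluentd-elasticsearch:backup
docker commit <container name> k8s.gcr.io/fluentd-elasticsearch:v.2.0.4
kubectl exec -it <pod name> -- /bin/sh -c 'kill -HUP 8'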
How to Enable Debug of Fluentd Daemonset in K8S
Requirement:
We have many K8S Pods running, e.g. a fluentd pod running on each worker node as a Daemonset. We need to enable debug mode of fluentd to get more information in the logs. The same requirement applies to any other application running in Pods: as long as the application accepts parameters to enable debug mode, or to put more trace into its log files, we should be able to enable it in K8S pods.
Solution:
- First we need to find what parameters we can pass to the application to enable debug. In fluentd, the -v and -vv parameters enable debug output. Please refer to the fluentd official website.
- We need to get the yaml of the daemonset from kubectl. The same concept applies if it is a deployment or statefulset.
kubectl get daemonset -n <namespace> <daemonset name> -o yaml > /tmp/temp.yaml
- Edit this temp.yaml file and find the section which passes parameters to fluentd. In fluentd it is like
  - name: FLUENTD_ARGS
    value: --no-supervisor -q
- Update -q to be -v or -vv, like
  - name: FLUENTD_ARGS
    value: --no-supervisor -vv
- Save the temp.yaml and apply it
kubectl apply -f /tmp/temp.yaml
- It won't be effective right away. The config is stored in the etcd data store. When you delete the pods, the new pods will read the latest config and start with the -vv parameter.
- Then we can use kubectl logs -n devops <pod name> to see the debug info of the pods.
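Alternatively, instead of editing the yaml by hand, the same env change can be made with kubectl set env. A minimal sketch, assuming a daemonset named fluentd-es in namespace kube-system (adjust both names to your cluster):
# example names: daemonset fluentd-es in namespace kube-system
kubectl set env daemonset/fluentd-es -n kube-system FLUENTD_ARGS="--no-supervisor -vv"
Depending on the daemonset updateStrategy, the pods are either rolled automatically or pick up the new value only after you delete them.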
Tuesday, December 18, 2018
How To Use Openssl Generate PEM files
PEM format Key Pair with Passphrase
openssl genrsa -des3 -out private.pem 2048 (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem (public key pem file)
PEM format Key Pair without Passphrase
openssl genrsa -out private.pem 2048 (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem (public key pem file)
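A quick sanity check of the generated pair (both commands only read the files; the first will prompt for the passphrase if one was set):
openssl rsa -in private.pem -check -noout (validates the private key)
openssl rsa -in private.pem -pubout | diff - public.pem (no output means the public key matches the private key)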
How To Move Existing DB Docker Image To Kubernetes
Requirement:
We have an existing docker image for Oracle DB 18.3 which is running fine. The docker command is:
docker run -itd --name livesql_testdb1 \
-p 1521:1521 -p 5501:5500 \
-e ORACLE_SID=LTEST \
-e ORACLE_PDB=ltestpdb \
-v /u03/LTEST/oradata:/opt/oracle/oradata \
-v /u03/ALTEST/oradata:/u02/app/oracle/oradata \
oracle/database:18.3v2
We need to move it to the kubernetes cluster which is running on the same host.
Solution:
- Label nodes for nodeSelector usages
kubectl label nodes instance-cas-db2 dbhost=livesqlsb
kubectl label nodes instance-cas-mt2 mthost=livesqlsb
- To Create: kubectl create -f <yaml file>
- Create Persistent Volumes for DB file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-volume1
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/u03/LTEST/oradata"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-volume2
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/u03/ALTEST/oradata"
- Create Persistent Volume Claims for DB file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-claim1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
- Create Service for DB to be accessed by other Apps in the K8S cluster. yaml is like
apiVersion: v1
kind: Service
metadata:
  labels:
    app: livesqlsb-db
  name: livesqlsb-db-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: livesqlsb-db
- Create DB Pod in the K8S cluster. yaml is like
apiVersion: v1
kind: Pod
metadata:
  name: livesqlsb-db
  labels:
    app: livesqlsb-db
spec:
  volumes:
    - name: livesqlsb-db-pv-storage1
      persistentVolumeClaim:
        claimName: livesql-pv-claim1
    - name: livesqlsb-db-pv-storage2
      persistentVolumeClaim:
        claimName: livesql-pv-claim2
  containers:
    - image: oracle/database:18.3v2
      name: livesqldb
      ports:
        - containerPort: 1521
          name: livesqldb
      volumeMounts:
        - mountPath: /opt/oracle/oradata
          name: livesqlsb-db-pv-storage1
        - mountPath: /u02/app/oracle/oradata
          name: livesqlsb-db-pv-storage2
      env:
        - name: ORACLE_SID
          value: "LTEST"
        - name: ORACLE_PDB
          value: "ltestpdb"
  nodeSelector:
    dbhost: livesqlsb
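After applying the yaml files above with kubectl create -f, a quick way to verify everything is bound and running (object names are the ones defined above):
kubectl get pv,pvc
kubectl get pod livesqlsb-db -o wide
kubectl logs livesqlsb-db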
Monday, December 17, 2018
How To Let K8S Pods Access Internet via Proxy
Requirement:
We have quite a few K8S Pods running for our Web and ETL services. We need to let the Pods access files saved in OCI Object Storage. The OCI Object Storage API endpoints are internet-facing HTTP REST APIs. However, the K8S cluster is running behind a firewall. The worker nodes access the internet fine via proxy, but the Pods have some difficulties.
The reason the Pods have difficulties accessing the internet is that Pods have their own DNS server: the nameserver in /etc/resolv.conf inside a Pod is based on the K8S cluster, not the worker node resolv.conf.
Pods can't use the worker node resolv.conf, as it may cause conflicts with the K8S internal DNS service, which is supposed to be on a separate network.
The good thing is that the problem is only that the cluster DNS service can't resolve the hostname of the proxy server; the proxy IP addresses themselves are reachable, so we can use the IP address in the Pods' proxy settings.
Solution:
- $kubectl get pod
- $kubectl exec -it <pod name> /bin/bash
- ie Proxy server IP address is 123.123.123.123
- <pod name>$ export http_proxy=http://123.123.123.123:80
- <pod name>$ export https_proxy=http://123.123.123.123:80
- Then we can use the OCI CLI or any SDK to access OCI services on the internet from the Pods
- Please remember the changes above are ephemeral. They will be lost after the pod restarts.
- We can add these settings to the dockerfile, or to the startup scripts, to make sure the internet is accessible in the Pods. A pod-spec alternative is sketched below.
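A minimal sketch of baking the proxy settings into the pod spec instead, so they survive restarts (assuming the proxy IP above; this goes under the container definition):
  env:
  - name: http_proxy
    value: "http://123.123.123.123:80"   # proxy IP from above, adjust to your proxy
  - name: https_proxy
    value: "http://123.123.123.123:80"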
Wednesday, December 12, 2018
How To Proxy Any HTTP/HTTPS Services inside K8S Out To Local Laptop Browser
Requirement:
We have many HTTP/HTTPS services inside K8S, some of them with only a Cluster IP, so it is inconvenient to test and access them from a local laptop. For example, we have the livesql sandbox service running with a cluster IP, and we would like to test its url from a local laptop browser.
Solution:
- Install kubectl on your local laptop and set up the kubeconfig file. Please refer to the Oracle OCI official doc.
- Find the details of the service you would like to proxy out, e.g. livesqlsb-service in namespace 'default'
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
livesqlsb-service ClusterIP 10.108.248.63 <none> 8888/TCP
- Understand how kubectl proxy works. Please refer to my other note.
- Run kubectl proxy on your laptop
$kubectl proxy --port=8099 &
- Open a local browser to access livesqlsb-service. The url format is like http://localhost:8099/api/v1/namespaces/<namespace>/services/http:<service name>:<port>/proxy/ . In our case, it is http://localhost:8099/api/v1/namespaces/default/services/http:livesqlsb-service:8888/proxy/ords/f?p=590:1000
- We can use the same concept to proxy out any web service in K8S to the local laptop browser.
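The same proxied url can be checked quickly with curl before opening the browser (quote the url because of the ? and : characters):
curl "http://localhost:8099/api/v1/namespaces/default/services/http:livesqlsb-service:8888/proxy/ords/f?p=590:1000"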
How To Monitor Any Kubernetes Objects Status Live via Core Kubernetes API
Summary:
As the core kubernetes APIs are implemented as an HTTP interface, they provide the ability to check and monitor any kubernetes object live via curl. For example, we have a pod running ORDS for APEX. We can use curl to watch it via the Kubernetes API; if the pod is deleted or fails, we get an update in the curl output. The concept applies to any object in the K8S cluster: you can do the same for node, service, pv, pvc, etc.
Watch Node Steps:
- kubectl proxy --port=8080 &   (refer to my other note)
- curl http://127.0.0.1:8080/api/v1/nodes?watch=true
Watch Pod Steps:
- Get yaml output details of the pod you would like to watch
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
apexords-mt-deployment-7c7f95c954-5k7c5 1/1 Running 0 53m
apiVersion: v1
kind: Pod
metadata:
...........
controller: true
kind: ReplicaSet
name: apexords-mt-deployment-7c7f95c954
uid: bc88c05f-dcb0-11e8-9ee8-000017010a8f
resourceVersion: "5067431"
selfLink: /api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5
uid: aadc94f8-fe66-11e8-b83f-000017010a8f
spec:
containers:
.......
- Find resourceVersion : 5067431 which we will use in curl
- We need to proxy the Kubernetes API locally using kubectl proxy so we can discover the object we would like to watch, apexords-mt-deployment-7c7f95c954-5k7c5 (refer to my other note)
$kubectl proxy --port=8080 &
- Use curl to discover the object apexords-mt-deployment-7c7f95c954-5k7c5
curl http://127.0.0.1:8080/api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "apexords-mt-deployment-7c7f95c954-5k7c5",
"generateName": "apexords-mt-deployment-7c7f95c954-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5",
"uid": "aadc94f8-fe66-11e8-b83f-000017010a8f",
"resourceVersion": "5067431",
"creationTimestamp": "2018-12-12
- Use curl to watch the object apexords-mt-deployment-7c7f95c954-5k7c5
curl -f "http://127.0.0.1:8080/api/v1/namespaces/default/pods?watch=true&resourceVersion=5067431"
.........
<don't press Enter, otherwise it exits>
- Open another session and delete this pod, and you will see a live update from the API server via the curl watch command
$kubectl delete pod apexords-mt-deployment-7c7f95c954-5k7c5
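To watch just this one pod instead of every pod in the namespace, a fieldSelector can be added to the same watch url (a sketch reusing the pod name and resourceVersion from above):
curl -f "http://127.0.0.1:8080/api/v1/namespaces/default/pods?watch=true&fieldSelector=metadata.name=apexords-mt-deployment-7c7f95c954-5k7c5&resourceVersion=5067431"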
How To Proxy Remote Kubernetes Core APIs to Local Laptop Browser
Requirement:
Sometimes we need to check and verify the Kubernetes core APIs as well as the objects we created in K8S. It is convenient for an admin to access the full list of Kubernetes core APIs and custom-created objects from a local laptop browser.
Solution:
2 options:
Option 1:
- We can start kubectl proxy on a remote K8S master or worker node where kubectl has been set up to access the K8S API server, i.e. (remote node) $ kubectl proxy --port=8080 &
- We can use ssh tunnel to access it from local laptop. I prefer git bash ssh command. Putty sometimes can't establish the connection.
- Run the command below in git bash locally: $ ssh -oIdentityFile=/d/OCI-VM-PrivateKey.txt -L 8080:127.0.0.1:8080 opc@<remote node ip address>
- Then we can access K8S API in your local browser : http://localhost:8080/apis
Option 2:
- We can start kubectl proxy locally on your laptop, with no need for an ssh tunnel, i.e. (local laptop) $ kubectl proxy --port=8080 &
- However you need to set up local kubectl to access the remote K8S API. Things to pay attention to are below:
- The firewall port must be open from your local laptop to the remote K8S API service, i.e. if the K8S API listens on port 6443, then 6443 needs to be open.
- Copy the ~/.kube/config file to ~/.kube on the local laptop. This config file has critical key info; it should be put in a safe place and not used by others. Local kubectl uses this config file to communicate with the remote K8S API server. In theory, we can fully control the remote K8S cluster from anywhere as long as we have this config file. Please refer to the official Oracle OCI doc.
- Then we can access K8S API in your local browser : http://localhost:8080/apis
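Once the config file is in place, a quick check that the local kubectl can reach the remote cluster:
$ kubectl cluster-info
$ kubectl get nodes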
Sunday, December 09, 2018
How To Move Existing Ords Docker Containers To Kubernetes
Requirement:
We have an existing docker image for ORDS which is running fine. The docker command is:
docker run -itd --name apexords_test2 --network=mynetwork -p 7777:8888 -e livesqlsb_db_host=<ip address> oracle/apexords:v5
We need to move it to the kubernetes cluster which is running on the same host.
Solution:
- Create Service for ORDS. yaml is like
apiVersion: v1
kind: Service
metadata:
  labels:
    name: apexords-service
  name: apexords-service
spec:
  ports:
  - port: 7777
    targetPort: 8888
    nodePort: 30301
  selector:
    name: apexords-service
  type: NodePort
- Create Pod for ORDS. yaml is like
apiVersion: v1
kind: Pod
metadata:
  name: apexords
  labels:
    name: apexords-service
spec:
  containers:
  - name: apexords
    image: oracle/apexords:v5
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8888
      name: apexords
  nodeSelector:
    mthost: livesqldbsb
Before moving into K8S, access url is http://<hostname>:7777/ords/
After moving into K8S, access url is http://<hostname>:30301/ords/
To Create: kubectl create -f <yaml file>
To Delete: kubectl delete -f <yaml file>
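A quick way to verify the ORDS service after creation (30301 is the nodePort defined above; the curl just checks that the url responds):
kubectl get svc apexords-service
curl -I http://<hostname>:30301/ords/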
Saturday, December 08, 2018
How To Migrate Existing Https Certificate to Oracle OCI Loadbalancer
Requirement:
Sometimes we migrate production services that use https certificates and we don't want a new domain for the service. So we need to move our existing https certificates to the new OCI load balancer environment, so we can keep the same https certificates for our services.
Solution:
Referring to the Oracle OCI official doc, we need the 4 pieces of information below from the existing https certificates before we can proceed:
- First 2 items: Certificate and Certificate Authority Certificate (CA certificate): both are public, anyone can access them. There is a certificate chain binding these 2 items to the CA. We can easily get them via an openssl command, e.g. the sketch shown after this list.
- Private Key: when we got (bought) this certificate from the CA (in our case DigiCert), we were provided a private key to decrypt data from clients. It needs to be put into the OCI load balancer, so the load balancer can decrypt incoming encrypted data.
- Passphrase: to make it safer, when the original creator submitted the certificate request, a passphrase was attached to the certificate. It will be confirmed on the OCI load balancer side before it can use the key pair to exchange information. The original creator will have the passphrase.
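A minimal sketch of pulling the certificate and CA chain from the currently running site with openssl (the domain is a placeholder; replace it with your real one):
openssl s_client -connect www.example.com:443 -showcerts </dev/null
The PEM blocks in the output are the certificate and the CA certificate chain that go into the OCI load balancer certificate form.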
Once all of these are added, we can apply them to the OCI load balancer service.
Wednesday, December 05, 2018
How To Use Nginx Ingress To Rewrite Url in Kubernetes
Please click the GitHub doc link for details.
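As a generic illustration of the idea (not the exact config from the github doc; names and paths are placeholders), an nginx ingress rewrite typically uses the rewrite-target annotation like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress          # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /myapp             # incoming path to be rewritten to /
        backend:
          serviceName: example-service   # placeholder backend service
          servicePort: 8888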
Sunday, December 02, 2018
How To Access Kube Proxy Dashboard Web UI via SSH Tunnel
Requirement:
We would like to access the web page of the Kubernetes Dashboard UI (via kube proxy), which gives a better overview of the K8S cluster. However, it runs on the K8S host and listens on a localhost port, so it is not exposed to outside hosts.
Solution:
We can use an ssh tunnel to access it. I prefer the git bash ssh command; Putty sometimes can't establish the connection. Run the command below to tunnel into the remote host:
$ ssh -oIdentityFile=/d/OCI-VM-PrivateKey.txt -L 8001:127.0.0.1:8001 opc@<ip address>
Access the url below in your local browser and log in via token:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
To get the token, use the command below:
$ kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token: