Tuesday, December 25, 2018
How To Upload K8S Pods Logs Into OCI Object Storage via Fluentd
Please refer to the GitHub doc for details.
Sunday, December 23, 2018
How To Restart Fluentd Process in K8S Without Changing Pod Name
Requirement:
When we debug K8S apps running in Pods, we often delete pods to force the apps in the Pods to restart and reload their configuration. For a Statefulset, the Pod name won't change. However, for a Deployment or Daemonset, new pod names are generated after we delete the pods, which makes it harder to track which pod we are debugging. There is an easier way to restart apps in the pods while keeping the same pod name.
Solution:
We use fluentd as an example.
- Update the config file of fluentd in the pod, i.e. the /etc/fluent/fluent.conf file
- We backup existing docker images via docker tag
docker tag k8s.gcr.io/fluentd-elasticsearch:v.2.0.4 k8s.gcr.io/fluentd-elasticsearch:backup
- Commit the changes on the conf file into the docker image, otherwise all your changes will be lost after a bounce. Use docker ps | grep fluent to find the correct container name of the fluentd pod.
docker commit <full k8s pod name in docker> k8s.gcr.io/fluentd-elasticsearch:v.2.0.4
- Use kubectl exec -it <pod name> /bin/bash to get into the pod
- As ps is not installed by default in many standard pod images, we can use the "find" command to figure out which PID the fluentd process is running as.
find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null
- We can use kill to send signals to the Fluentd process (see the doc link), e.g. send SIGHUP to ask the process to reload the conf file. It is quite possible that fluentd just restarts and triggers a pod bounce itself. That is fine, as we have already committed the changes into the docker image.
kill -SIGHUP 8
- In this way, the pod name is kept; you will have the same pod name after the pod bounce.
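A minimal end-to-end sketch of the workflow above, assuming the container name comes from the docker ps output and the fluentd PID is the one found by the find command (both are placeholders):
# back up the current image, then commit the edited conf into it
docker tag k8s.gcr.io/fluentd-elasticsearch:v.2.0.4 k8s.gcr.io/fluentd-elasticsearch:backup
docker ps | grep fluent        # note the container name
docker commit <container name> k8s.gcr.io/fluentd-elasticsearch:v.2.0.4
# inside the pod, locate the fluentd PID and ask it to reload
kubectl exec -it <pod name> -- /bin/bash
find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null | grep -i fluentd
kill -SIGHUP <fluentd pid>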
How to Enable Debug of Fluentd Daemonset in K8S
Requirement:
We have many K8S Pods running; for example, we have a fluentd pod running on each worker node as a Daemonset. We need to enable debug mode of fluentd to get more information in the logs. We have the same requirement for all other applications running in Pods: as long as the application accepts parameters to enable debug mode, or to put more trace into its log files, we should be able to enable it in K8S pods.
Solution:
- First we need to find what parameters we can pass to the application to enable debug. In fluentd, the -v and -vv parameters enable debug output. Please refer to the fluentd official website.
- Get the yaml of the daemonset from kubectl. The same concept applies if it is a deployment or statefulset.
kubectl get daemonset -n <namespace> <daemonset name> -o yaml > /tmp/temp.yaml
- Edit this temp.yaml file and find the section which passes parameters to fluentd. In fluentd it is like
- name: FLUENTD_ARGS
value: --no-supervisor -q
- Update -q to be -v or -vv, like
- name: FLUENTD_ARGS
value: --no-supervisor -vv
- Save the temp.yaml and apply it
kubectl apply -f /tmp/temp.yaml
- It won't be effective right away. The config is stored in the etcd data store; when you delete the pods, the daemonset recreates them and the new pods start with the -vv parameter.
- Then we can use kubectl logs -n devops <pod name> to see the debug info of the pods.
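A minimal sketch of bouncing the daemonset pods so they pick up the new argument, assuming the pods carry a label such as k8s-app=fluentd-es (the label and namespace are placeholders; adjust to your environment):
kubectl apply -f /tmp/temp.yaml
# delete the old pods; the daemonset controller recreates them with -vv
kubectl delete pod -n <namespace> -l k8s-app=fluentd-es
kubectl get pod -n <namespace> -l k8s-app=fluentd-es
kubectl logs -n <namespace> <new pod name> | head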
Tuesday, December 18, 2018
How To Use Openssl Generate PEM files
PEM format Key Pair with Passphrase
openssl genrsa -des3 -out private.pem 2048 (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem (public key pem file)
PEM format Key Pair without Passphrase
openssl genrsa -out private.pem 2048 (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem (public key pem file)
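To sanity-check the generated pair, openssl can verify the private key and print the public key (filenames as above):
openssl rsa -in private.pem -check -noout
openssl rsa -pubin -in public.pem -text -noout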
How To Move Existing DB Docker Image To Kubernetes
Requirement:
We have an existing docker image for Oracle DB 18.3 which is running fine. The docker command is:
docker run -itd --name livesql_testdb1 \
-p 1521:1521 -p 5501:5500 \
-e ORACLE_SID=LTEST \
-e ORACLE_PDB=ltestpdb \
-v /u03/LTEST/oradata:/opt/oracle/oradata \
-v /u03/ALTEST/oradata:/u02/app/oracle/oradata \
oracle/database:18.3v2
We need to move them to kubernetes cluster which is running on the same host.
Solution:
- Label nodes for nodeSelector usages
kubectl label nodes instance-cas-db2 dbhost=livesqlsb
kubectl label nodes instance-cas-mt2 mthost=livesqlsb
- To Create: kubectl create -f <yaml file>
- Create Persistent Volumes for DB file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: livesqlsb-pv-volume1
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/u03/LTEST/oradata"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: livesqlsb-pv-volume2
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 200Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/u03/ALTEST/oradata"
- Create Persistent Volume Claims for DB file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: livesql-pv-claim2
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: livesql-pv-claim1
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
- Create Service for DB to be accessed by other Apps in the K8S cluster. yaml is like
apiVersion: v1
kind: Service
metadata:
labels:
app: livesqlsb-db
name: livesqlsb-db-service
namespace: default
spec:
clusterIP: None
ports:
- port: 1521
protocol: TCP
targetPort: 1521
selector:
app: livesqlsb-db
- Create DB Pod in the K8S cluster. yaml is like
apiVersion: v1
kind: Pod
metadata:
name: livesqlsb-db
labels:
app: livesqlsb-db
spec:
volumes:
- name: livesqlsb-db-pv-storage1
persistentVolumeClaim:
claimName: livesql-pv-claim1
- name: livesqlsb-db-pv-storage2
persistentVolumeClaim:
claimName: livesql-pv-claim2
containers:
- image: oracle/database:18.3v2
name: livesqldb
ports:
- containerPort: 1521
name: livesqldb
volumeMounts:
- mountPath: /opt/oracle/oradata
name: livesqlsb-db-pv-storage1
- mountPath: /u02/app/oracle/oradata
name: livesqlsb-db-pv-storage2
env:
- name: ORACLE_SID
value: "LTEST"
- name: ORACLE_PDB
value: "ltestpdb"
nodeSelector:
dbhost: livesqlsb
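A quick verification sketch once the objects above are created (names follow the yaml above):
kubectl get pv,pvc
kubectl get pod livesqlsb-db -o wide
kubectl logs -f livesqlsb-db
kubectl exec -it livesqlsb-db -- /bin/bash    # then run sqlplus / as sysdba inside to check the DB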
Monday, December 17, 2018
How To Let K8S Pods Access Internet via Proxy
Requirement:
We have quite a few K8S Pods running for our Web and ETL services. We need to let the Pods access files saved in OCI Object Storage. The OCI Object Storage API endpoints are internet-facing HTTP REST APIs. However, the K8S cluster is running behind a firewall. The worker nodes can access the internet fine via proxy, but the Pods have some difficulties. The reason is that Pods have their own DNS server: the nameserver in /etc/resolv.conf inside Pods is based on the K8S cluster, not the worker node's resolv.conf.
Pods can't use the worker node's resolv.conf as it may cause conflicts with the K8S internal DNS service, which is supposed to be on a separate network.
The good thing is that the problem is only that the cluster DNS service can't resolve the proxy server's hostname; the proxy IP addresses themselves are reachable, so we can use the IP address in our Pods' proxy settings.
Solution:
- $kubectl get pod
- $kubectl exec -it <pod name> /bin/bash
- ie Proxy server IP address is 123.123.123.123
- <pod name>$ export http_proxy=http://123.123.123.123:80
- <pod name>$ export https_proxy=http://123.123.123.123:80
- Then we can use the OCI CLI or any SDK to access OCI services on the internet in the Pods
- Please remember the changes above are ephemeral. It will be lost after pods restart.
- We can add these commands in the Dockerfile, or in startup scripts, to make sure the internet is accessible in the Pods. A quick test is shown below.
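A quick connectivity sketch from inside a pod, assuming the proxy IP above and the Ashburn Object Storage endpoint (adjust both to your environment):
kubectl exec -it <pod name> -- /bin/bash
export http_proxy=http://123.123.123.123:80
export https_proxy=http://123.123.123.123:80
curl -I https://objectstorage.us-ashburn-1.oraclecloud.com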
Wednesday, December 12, 2018
How To Proxy Any HTTP/HTTPS Services inside K8S Out To Local Laptop Browser
Requirement:
We have many HTTP/HTTPS services inside K8S, some of them with only a Cluster IP, so it is inconvenient to test and access them from a local laptop. For example, we have the livesql sandbox service with a cluster IP, and we would like to test its url from a local laptop browser.
Solution:
- Install kubectl on your local laptop (refer to the link) and set up the kubeconfig file. Please refer to the Oracle OCI official doc.
- Find the details of the service you would like to proxy out, i.e. livesqlsb-service in namespace 'default'
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
livesqlsb-service ClusterIP 10.108.248.63 <none> 8888/TCP
- Understand how kubectl proxy works. Please refer to my other note.
- Run kubectl proxy on your laptop
$kubectl proxy --port=8099 &
- Open a local browser to access the livesqlsb-service. The url format is like http://localhost:8099/api/v1/namespaces/<namespace>/services/http:<service name>:<port>/proxy/ . In our case, it is http://localhost:8099/api/v1/namespaces/default/services/http:livesqlsb-service:8888/proxy/ords/f?p=590:1000 (a command-line check is shown after this list).
- We can use the same concept to proxy out any web service in the K8S cluster to the local laptop browser.
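The proxied url can also be checked from the command line before opening the browser (a sketch using the service above):
kubectl proxy --port=8099 &
curl -I "http://localhost:8099/api/v1/namespaces/default/services/http:livesqlsb-service:8888/proxy/ords/f?p=590:1000"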
How To Monitor Any Kubernetes Objects Status Live via Core Kubernetes API
Summary:
As the core Kubernetes APIs are implemented as an HTTP interface, we can check and monitor any Kubernetes object live via curl. For example, we have a pod running ORDS for APEX; we can use curl to watch it via the Kubernetes API. If the pod is deleted or fails, we get an update in the curl output. The concept applies to any object in the K8S cluster: you can do the same for node, service, pv, pvc ... etc.
Watch Node Steps:
- kubectl proxy --port=8080 & (refer to my other note)
- curl http://127.0.0.1:8080/api/v1/nodes?watch=true
Watch Pod Steps:
- Get yaml output details of the pod you would like to watch
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
apexords-mt-deployment-7c7f95c954-5k7c5 1/1 Running 0 53m
apiVersion: v1
kind: Pod
metadata:
...........
controller: true
kind: ReplicaSet
name: apexords-mt-deployment-7c7f95c954
uid: bc88c05f-dcb0-11e8-9ee8-000017010a8f
resourceVersion: "5067431"
selfLink: /api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5
uid: aadc94f8-fe66-11e8-b83f-000017010a8f
spec:
containers:
.......
- Find resourceVersion : 5067431 which we will use in curl
- We need to proxy the Kubernetes API locally using kubectl proxy so we can discover the object we would like to watch, apexords-mt-deployment-7c7f95c954-5k7c5 (refer to my other note).
$kubectl proxy --port=8080 &
- Use curl to discover the object apexords-mt-deployment-7c7f95c954-5k7c5
curl http://127.0.0.1:8080/api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "apexords-mt-deployment-7c7f95c954-5k7c5",
"generateName": "apexords-mt-deployment-7c7f95c954-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/apexords-mt-deployment-7c7f95c954-5k7c5",
"uid": "aadc94f8-fe66-11e8-b83f-000017010a8f",
"resourceVersion": "5067431",
"creationTimestamp": "2018-12-12
- Use curl to watch the object apexords-mt-deployment-7c7f95c954-5k7c5
curl -f "http://127.0.0.1:8080/api/v1/namespaces/default/pods?watch=true&resourceVersion=5067431"
.........
<don't press Enter, otherwise the watch exits>
- Open another session and delete this pod, then see a live update from the API server in the curl watch output.
$kubectl delete pod apexords-mt-deployment-7c7f95c954-5k7c5
How To Proxy Remote Kubernetes Core APIs to Local Laptop Browser
Requirement:
Sometimes we need to check and verify the Kubernetes core APIs as well as the objects we created in K8S. It would be convenient for an admin to access the full list of Kubernetes core APIs and customer-created objects from a local laptop browser.
Solution:
2 options:
Option 1:
- We can start kubectl proxy on a remote K8S master or worker node where kubectl has been set up to access the K8S API server, i.e. (remote node) $ kubectl proxy --port=8080 &
- We can use ssh tunnel to access it from local laptop. I prefer git bash ssh command. Putty sometimes can't establish the connection.
- run below in git bash locally $ ssh -oIdentityFile=/d/OCI-VM-PrivateKey.txt -L 8080:127.0.0.1:8080 opc@<remote node ip address>
- Then we can access K8S API in your local browser : http://localhost:8080/apis
Option 2:
- We can start kubectl proxy locally in your laptop, no need ssh tunnel ie (local laptop) $ kubectl proxy --port=8080 &
- However you need to setup local kubectl to access remote K8S API. Things we need to pay attention to are below
- The firewall port must be open from your local laptop to the remote K8S API service. i.e. if the K8S API listens on port 6443, then 6443 needs to be open.
- Copy the ~/.kube/config file to the local laptop's ~/.kube. This config file has critical key info; it should be kept in a safe place and not shared with others. The local kubectl uses this config file to communicate with the remote K8S API server. In theory, anyone can fully control the remote K8S cluster from anywhere as long as they have this config file. Please refer to the official Oracle OCI doc.
- Then we can access K8S API in your local browser : http://localhost:8080/apis
Sunday, December 09, 2018
How To Move Existing Ords Docker Containers To Kubernetes
Requirement:
We have an existing docker image for ORDS which is running fine. The docker command is:
docker run -itd --name apexords_test2 --network=mynetwork -p 7777:8888 -e livesqlsb_db_host=<ip address> oracle/apexords:v5
We need to move them to kubernetes cluster which is running on the same host.
Solution:
- Create Service for ORDS. yaml is like
apiVersion: v1
kind: Service
metadata:
labels:
name: apexords-service
name: apexords-service
spec:
ports:
- port: 7777
targetPort: 8888
nodePort: 30301
selector:
name: apexords-service
type: NodePort
Before moving into K8S, the access url is http://<hostname>:7777/ords/
- Create Pod for ORDS. yaml is like
apiVersion: v1
kind: Pod
metadata:
name: apexords
labels:
name: apexords-service
spec:
containers:
- name: apexords
image: oracle/apexords:v5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8888
name: apexords
nodeSelector:
mthost: livesqldbsb
After moving into K8S, access url is http://<hostname>:30301/ords/
To Create: kubectl create -f <yaml file>
To Delete: kubectl delete -f <yaml file>
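A quick verification sketch after the service and pod are created (the hostname is any worker node):
kubectl get svc apexords-service
kubectl get pod apexords -o wide
curl -I http://<hostname>:30301/ords/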
Saturday, December 08, 2018
How To Migrate Existing Https Certificate to Oracle OCI Loadbalancer
Requirement:
Sometimes we migrate production services that use https certificates, and we don't want a new domain for the service. So we need to move our existing https certificates to the new OCI load balancer environment, so we can keep the same https certificates for our services.
Solution:
Referring to the Oracle OCI official doc, we need the below 4 pieces of information from the existing https certificate before we can proceed.
- First 2 items: Certificate and Certificate Authority Certificate (CA certificate). Both are public; anyone can access them. There is a certificate chain binding these 2 items to the CA. We can easily get them via an openssl command (see the example after this list).
- Private Key: when we got (bought) this certificate from the CA authority (in our case DigiCert), we were provided a private key to decrypt data from clients. It needs to be put into the OCI load balancer, so the load balancer can decrypt incoming encrypted data.
- Passphrase: to make it safer, when the original creator submitted the certificate request, a passphrase was attached to the certificate. It is confirmed on the OCI load balancer side before it can use the key pair to exchange information.
PASSPHRASE: the original creator will have it
Once these are added, we can apply them on the OCI load balancer services.
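A sketch of pulling the certificate and CA chain from the currently running site with openssl, assuming the existing service is reachable at www.example.com:443 (replace with your real domain):
openssl s_client -showcerts -connect www.example.com:443 </dev/null > /tmp/fullchain.txt
# copy each --BEGIN CERTIFICATE--/--END CERTIFICATE-- block from /tmp/fullchain.txt into its own pem file:
# the first block is the server certificate, the remaining blocks form the CA certificate chain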
Wednesday, December 05, 2018
How To Use Nginx Ingress To Rewrite Url in Kubernetes
Please click github doc link for details.
Sunday, December 02, 2018
How To Access Kube Proxy Dashboard Web UI via SSH Tunnel
Requirement:
We would like to access the web page of the Kube Proxy Dashboard UI, which gives a better overview of the K8S cluster. However, it runs on the K8S host and listens on a localhost port, not exposed to outside hosts.
Solution:
We can use an ssh tunnel to access it. I prefer the git bash ssh command; Putty sometimes can't establish the connection. Run the below to tunnel into the remote host:
$ ssh -oIdentityFile=/d/OCI-VM-PrivateKey.txt -L 8001:127.0.0.1:8001 opc@<ip address>
access url in your local browser and login via token :
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
To get token, use below command:
# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token:
Thursday, November 29, 2018
How To Setup Sending Monitoring Emails via OCI Email Delivery Service
Requirement:
We often use scripts or programs to send monitoring emails from linux to engineers. We plan to use mailx to send emails via the smtp service provided by the OCI Email Delivery Service.
Solution:
We followed the instructions in the official doc and set up the smtp credential and smtp connection.
We need to get the SSL/TLS CA details from the OCI email smtp hosts, as we must secure the email connections.
- mkdir /etc/certs
- # certutil -N -d /etc/certs
- To get the smtp domain CA details, run this:
- if it is ashburn: openssl s_client -showcerts -connect smtp.us-ashburn-1.oraclecloud.com:587 -starttls smtp > /etc/certs/mycerts-ashburn
- if it is phoenix: openssl s_client -showcerts -connect smtp.us-phoenix-1.oraclecloud.com:587 -starttls smtp > /etc/certs/mycerts-phoenix
- Vi mycerts-ashburn or phoenix and copy each certificate including the --BEGIN CERTIFICATE-- and --END CERTIFICATE-- and paste them into their respective files. ie: ocismtp-ashburn1.pem ocismtp-ashburn2.pem
- Import them into the nss-config-dir /etc/certs via the below commands
- certutil -A -n "DigiCert SHA2 Secure Server CA" -t "TC,," -d /etc/certs -i /etc/certs/ocismtp-ashburn1.pem
- certutil -A -n "DigiCert SHA2 Secure Server CA smtp" -t "TC,," -d /etc/certs -i /etc/certs/ocismtp-ashburn2.pem
- Use certutil -L -d /etc/certs to verify they are imported well. The output would look like:
# certutil -L -d /etc/certs
Certificate Nickname Trust Attributes
SSL,S/MIME,JAR/XPI
DigiCert SHA2 Secure Server CA CT,,
DigiCert SHA2 Secure Server CA smtp CT,,
- Add below config at the bottom of /etc/mail.rc
set nss-config-dir=/etc/certs
set smtp-use-starttls
set smtp-auth=plain
set smtp=smtp.us-ashburn-1.oraclecloud.com
set from="no-reply@test.com(henryxie)"
set smtp-auth-user="<ocid from smtp credentials doc >"
set smtp-auth-password="<password from smtp credentials doc >"
- Run a test command (a sample is shown below).
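A minimal test, assuming the settings above are in /etc/mail.rc and using placeholder addresses:
echo "test email body" | mailx -v -s "test subject" someone@example.com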
Wednesday, November 28, 2018
OCI Email Delivery smtp-server: 504 The requested authentication mechanism is not supported
Symptom:
We plan to use mailx on an Oracle Linux 7.6 VM to send emails via the smtp service provided by the OCI Email Delivery Service. We followed the instructions in the official doc and set up the smtp credential and smtp connection.
When we run this command:
echo "test test from henry" | mailx -v -s "test test test" \
-S nss-config-dir=/etc/certs \
-S smtp-use-starttls \
-S smtp-auth=login \
-S smtp=smtp.us-ashburn-1.oraclecloud.com \
-S from="no-reply@test.com(henryxie)" \
-S smtp-auth-user="<ocid from smtp credentials doc >" \
-S smtp-auth-password="<password from smtp credentials doc >" henry.xie@oracle.com
We get error
smtp-server: 504 The requested authentication mechanism is not supported
Solution:
Change smtp-auth=login --> smtp-auth=plain. Later, OCI Email Delivery will support smtp-auth=login.
Tuesday, November 27, 2018
OCI Email Delivery gives “Error in certificate: Peer's certificate issuer is not recognized.”
Symptom:
We plan to use mailx on an Oracle Linux 7.6 VM to send emails via the smtp service provided by the OCI Email Delivery Service. We followed the instructions in the official doc and set up the smtp credential and smtp connection.
When we run this command:
echo "test test from henry" | mailx -v -s "test test test" \
-S nss-config-dir=/etc/certs \
-S smtp-use-starttls \
-S smtp-auth=plain \
-S smtp=smtp.us-ashburn-1.oraclecloud.com \
-S from="no-reply@test.com(henryxie)" \
-S smtp-auth-user="<ocid from smtp credentials doc >" \
-S smtp-auth-password="<password from smtp credentials doc>" henry.xie@oracle.com
We get error
“Error in certificate: Peer's certificate issuer is not recognized.”
Solution:
The reason is that the nss-config-dir does not include the CA issuer of smtp.us-ashburn-1.oraclecloud.com. We need to add it into the nss-config-dir.
- To get the CA details, run this:
- openssl s_client -showcerts -connect smtp.us-ashburn-1.oraclecloud.com:587 -starttls smtp > /etc/certs/mycerts
- Vi mycerts and copy each certificate including the --BEGIN CERTIFICATE-- and --END CERTIFICATE-- and paste them into their respective files. ie: ocismtp-ashburn1.pem ocismtp-ashburn2.pem
- Import them into the nss-config-dir /etc/certs via the below commands
- certutil -A -n "DigiCert SHA2 Secure Server CA" -t "TC,," -d /etc/certs -i /etc/certs/ocismtp-ashburn1.pem
- certutil -A -n "DigiCert SHA2 Secure Server CA smtp" -t "TC,," -d /etc/certs -i /etc/certs/ocismtp-ashburn2.pem
- use certutil -L -d /etc/certs to verify they are imported well
The error should be gone
Monday, November 26, 2018
Kubectl: Unable to connect to the server: x509: certificate is valid for ..... , not ..... in K8S
Symptom:
We set up kubectl on a local workstation to access a remote Kubernetes cluster. The remote public IP of the K8S API server access point is 52.64.132.188, and port 6443 is open. We obtained the ca.pem file locally and ran the below to generate the kubeconfig file locally.
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://52.64.132.188:6443
After that, we try to run kubectl get node but get Unable to connect to the server: x509: certificate error. Details like
$ kubectl get node
Unable to connect to the server: x509: certificate is valid for 10.32.0.1, 172.31.44.176, 172.31.2.170, 172.31.3.17, 127.0.0.1, not 52.64.132.188
Diagnosis:
The reason of "Unable to connect to the server: x509: certificate is valid for ..... , not ....." is quite likely the K8S API server does not have "52.64.132.188" in its CA authority host list. We need to go back and check what cert hosts were added into the kubernetes.pem when K8S cluster was initiated.
In my case, I ran
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1, 172.31.44.176, 172.31.2.170, 172.31.3.17, 127.0.0.1,test.testdomain.com \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
I used test.testdomain.com, not the ip address "52.64.132.188", because the public ip can change later. The K8S API server certificate has "test.testdomain.com" in its host list, not the ip address. That is why the certificate is not considered valid when we access the API server via "52.64.132.188".
Solution:
To solve it, we need to update our local kubeconfig file to use test.testdomain.com, not the IP address.
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://test.testdomain.com:6443
Wednesday, November 21, 2018
The Easy Way To Let Kubernetes Master Node To Run Pods
Symptom:
By default, as the Kubernetes master node has quite a heavy admin load, it normally does not run other workload pods. However, when we don't have many nodes, we would like to let the master node run some workload too, especially in Dev and Stage environments.
Solution:
There are a few ways to do that. The easy way is to remove the taint of the master node. By default, the master has a taint like this:
kubectl describe node <master node> |grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
We remove it via kubectl
kubectl taint nodes <master node> node-role.kubernetes.io/master-
node "<master node>" untainted
or
kubectl taint nodes <master node> node-role.kubernetes.io:NoSchedule-
node "<master node>" untainted
When we scale up pods, some of them will run on master node
We can add it back via kubectl
kubectl taint nodes <master node> node-role.kubernetes.io/master=:NoSchedule
node "<master node>" tainted
We can use taint to prevent pod schedules on normal worker node as well.
kubectl taint nodes <node> key=value:NoSchedule
Saturday, November 17, 2018
Proxy Examples For Ssh,Ssh Tunnel, Sftp Kubectl To Access Internet via Git Bash
Requirement:
On the company intranet, workstations are behind a firewall. We use Git bash, and we need ssh, sftp and kubectl to access the internet via a proxy server in Git bash.
Solution:
Set env variables for your local proxy servers for kubectl
$ export http_proxy=http://www-proxy.us.test.com:80/
$ export https_proxy=http://www-proxy.us.test.com:80/
$kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://test.testdomain.com:6443
$kubectl get node
ssh with proxy and keepalive
$ ssh -o ServerAliveInterval=5 -o ProxyCommand="connect -H www-proxy.us.test.com:80 %h %p" user@<public ip address or domain name>
ssh tunnel with private key
$ ssh -oIdentityFile=/d/OCI-VM-PrivateKey.txt -L 8001:127.0.0.1:8001 opc@<ip address>
ssh tunnel with proxy and keepalive parameter
$ ssh -L 6443:localhost:6443 -o ServerAliveInterval=5 -o ProxyCommand="connect -H www-proxy.us.test.com:80 %h %p" user@<public ip address or domain name>
sftp with proxy
$ sftp -o ProxyCommand="connect -H www-proxy.us.test.com:80 %h %p" user@<public ip address or domain name>
Wednesday, November 14, 2018
How To Add Worker Node Across Region in same K8S in OCI
Requirement:
We would like to spread our kubernetes workload across regions, so we can have a safer DR solution for our services. i.e. we have worker nodes in the phoenix region of OCI, and we would like to add new worker nodes in the ashburn region of OCI within the same tenancy and the same kubernetes cluster. This wiki is based on the Oracle-provided kubernetes and container services; see the official doc.
Solution:
The main part is on the firewall side between the 2 regions. As long as the ports are open among the nodes for kubernetes' own communication and the services of pods, it would be fine. The network we use is flannel, which is based on VXLAN. Once the firewall ports are open, refer to this blog to add a new worker node.
Firewall Part :
Kubernetes' own communications between the 2 regions
All the worker nodes in the clusters should open "ports: 10250 8472" to be able to receive connections
Source: All the nodes
Destination : worker nodes
port: TCP: 10250 UDP:8472
All Master nodes should open "port : 6443" (API server) to be able to receive connections
Source: All worker nodes and End users Program to access API server
Destination: Master nodes
port: 6443
All Etcd nodes should open "port : 2379 " (etcd service) to be able to receive connections
Source: All the nodes ,
Destination: Etcd nodes
port: 2379
All service ports that need to be exposed outside kubernetes
Source: 0.0.0.0 or restricted users, depending on the service
Destination: All the worker nodes
port: the ports to be exposed
Access K8S Pod Service Port via Kubectl Port-forward on Remote Workstation
Requirement:
We would like to access the service of a Pod from a remote desktop. i.e. there is a pod running nginx on port 80 in Oracle OCI K8S, and we would like to access it on a local Windows 10 desktop. We can use kubectl port-forward over the internet. The workstation can be on the company intranet behind a firewall; as long as "kubectl get nodes" works (kubectl can reach the API server) via proxy or ssh tunnel, we can use our local workstation to access the remote pod. This command can be very useful in troubleshooting scenarios.
Solution:
$ kubectl port-forward <POD_NAME> 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Open a new window
curl --head http://127.0.0.1:8081
We should get page from the POD
DispatcherNotFoundException 404 Error on ORDS Standalone and APEX
Symptom:
After we install APEX 18.1 and ORDS 18.2, we get a 404 error in the browser. The error stack is like:
DispatcherNotFoundException [statusCode=404, reasons=[]] at oracle.dbtools.http.entrypoint.Dispatcher.choose(Dispatcher.java:87)
Diagnosis:
There are many possible reasons for that. The one we hit is that the APEX_LISTENER, APEX_PUBLIC_USER and APEX_REST_PUBLIC_USER accounts had not been set up correctly when we installed ords.war via java -jar $ORDS_HOME/ords.war install advanced.
The install process reads $ORDS_HOME/ords/conf/*.xml and tries to figure out the existing settings for any new installation. It skips the apex listener setup if old settings are there, and thus skips generating the xml files for each connection to the DB.
So in ords/conf/ there should be 2 - 4 xml files; each file defines a connection pool to the database. If you only see 1 xml file, it means the apex listener settings are missing.
Solution:
Remove ords_params.properties and *.xml in ords/conf, and remove standalone.properties in ords/standalone. Rerun java -jar $ORDS_HOME/ords.war install advanced
Or java -jar $ORDS_HOME/ords.war install simple --- with correct parameter file
Thursday, November 08, 2018
Useful Urls To Get Prometheus Settings
To get targets status:
http://<ip address>:<port>/targets
ie: http://1.1.1.1:30304/targets
To get prometheus startup parameters
http://<ip address>:<port>/flags
ie: http://1.1.1.1:30304/flags
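The same information is also available as JSON over the HTTP API on recent Prometheus versions, which is handy in scripts (using the example address above):
curl -s http://1.1.1.1:30304/api/v1/targets
curl -s http://1.1.1.1:30304/api/v1/status/flags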
Wednesday, November 07, 2018
How To Fix "server returned HTTP status 403 Forbidden" in Prometheus
Requirement:
We installed and started Prometheus, however we can't get node metrics. Via /targets, we find the error "server returned HTTP status 403 Forbidden".
Solution:
The Prometheus /targets page will show the kubelet job with the error 403 Unauthorized when token authentication is not enabled. Ensure that the --authentication-token-webhook=true flag is enabled in all kubelet configurations. We need to enable --authentication-token-webhook=true in our kubelet conf.
In the Host OS:
cd /etc/systemd/system/kubelet.service.d
vi 10-kubeadm.conf
Add "--authentication-token-webhook=true" into "KUBELET_AUTHZ_ARGS"
After that, it would be like
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --authentication-token-webhook=true"
# systemctl daemon-reload
# systemctl restart kubelet.service
The 403 error should be gone. For more details, refer to the github doc.
How To Fix "No Route to Host" in Prometheus node-exporter
Requirement:
We installed and started Prometheus, however we can't get node-exporter metrics. Via /targets, we find the error "... no route to host".
Solution:
The error means Prometheus can't reach the http endpoint http://<ip address>:9100/metrics. First test whether it is working on localhost on the node.
Login Node:
#wget -O- localhost:9100/metrics
If you get output, it means the endpoint is working fine. Otherwise check the node-exporter pod and its logs.
Then test from the other node
Login other Node:
#wget -O- <ip address>:9100/metrics
If you can't get output, it means there are some network or firewall issues.
* Check your cloud provider network security settings and make sure port 9100 is open
* Check the node's linux firewall service settings. In EL7, port 9100 is not open by default
#firewall-cmd --add-port=9100/tcp --permanent
# systemctl restart firewalld
Monday, November 05, 2018
How To Remotely Sqlplus Expdp Impdp Oracle DB In Kubernetes
See details in Github link
Friday, November 02, 2018
Issue with Makefile:6: *** missing separator.
Symptom:
When you run make, you get the error below:
(oracle-svi5TViy) $ make
Makefile:6: *** missing separator. Stop.
Makefile Details:
.PHONY: default install test
default: test
install:
pipenv install --dev --skip-lock
test:
PYTHONPATH=./src pytest
Solution:
The Makefile format uses <tab>, not <space>, to indent recipe lines. As they are invisible, this is easy to overlook.
To fix it, replace the <space> before pipenv and PYTHONPATH with <tab>. A quick way to check is shown below.
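cat -A makes the whitespace visible, so you can confirm each recipe line starts with a tab (shown as ^I):
cat -A Makefile
# recipe lines should look like:
# install:$
# ^Ipipenv install --dev --skip-lock$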
Tuesday, October 30, 2018
Where To Modify Starting Arguments of K8S Core Components
Requirement:
We have a kubernetes cluster running on docker images. For details, refer to the github doc for Oracle K8S manual installation. Sometimes we need to add or modify the starting arguments of kubernetes core components, i.e. we need to add TaintBasedEvictions=true to the kube-controller-manager component to enable an alpha feature, or we need to add an argument to the etcd component.
Solution:
By default, the manifest files are on the master node in /etc/kubernetes/manifests. You will see these 4 yaml files: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml. Back up the files, make the changes, and restart the kubernetes cluster.
Where To Find kube-controller-manager Cmd from Kubernetes Cluster
Requirement:
We need to find the kube-controller-manager binary in a Kubernetes cluster which is running on docker images (refer to the github doc for Oracle K8S manual installation). All core binaries are stored inside the docker images.
Solution:
Find the controller docker instance
#docker ps |grep controller
7fc8302ccfe3 887b8144f94f "kube-controller-man…" 15 hours ago Up 15 hours k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0
docker exec into the docker instance and find kube-controller-manager
#docker exec -it k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0 /bin/bash
bash-4.2# which kube-controller-manager
/usr/local/bin/kube-controller-manager
Use docker cp to copy the file out of docker instance .
#docker cp k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0:/usr/local/bin/kube-controller-manager /bin/
Saturday, October 27, 2018
How To Run Tcpdump With Logs Rotating
Requirement:
We need to capture tcp traffic on busy systems to diagnose network-related issues. Tcpdump is a great tool, but it also dumps a huge amount of data which fills up the disk easily.
Solution:
tcpdump has rotation built in. Use the below command:
-C 8000 --> 8000 * 1,000,000 bytes, around 8G per file
-W 9 --> total 9 files to keep
nohup tcpdump -i bond0 -C 8000 -W 9 port 5801 -w tcpdump-$(hostname -s).pcap -Z root &
Tuesday, October 23, 2018
Turn Off Checksum Offload For K8S with Oracle UEK4 Kernel
Symptom:
We created K8S via the Oracle doc in Oracle OCI. The mysql server, service, phpadmin server and service are created fine. However, we have a problem that Pods can't communicate with other Pods. We created a debug container (refer to the blog here) with network tools attached to the network stack of the phpadmin pod. We find we can't access the port: nc -vz <ip> 3306 times out, however ping <mysql ip> is fine.
Solution:
Diving deeper, we see the docker0 network interface (ip addr) has its original IP address (172.17.*.*); it does not have the flannel network ip address we created when we initialized K8S (192.168.*.*). It means the docker daemon has issues working with the flannel network and is not associated with the flannel CNI well, while by default it should be. It turns out it is related to the broadcom driver with the UEK4 kernel.
Refer: github doc
(see the snippet from terraform-kubernetes-installer)
## Disable TX checksum offloading so we don't break VXLAN
######################################
BROADCOM_DRIVER=$(lsmod | grep bnxt_en | awk '{print $1}')
if [[ -n "$${BROADCOM_DRIVER}" ]]; then
echo "Disabling hardware TX checksum offloading"
ethtool --offload $(ip -o -4 route show to default | awk '{print $5}') tx off
fi
So we need to turn off checksum offload and bounce K8S.
Here are steps (run on all K8S nodes) :
#ethtool --offload $(ip -o -4 route show to default | awk '{print $5}') tx off
Actual changes:
tx-checksumming: off
tx-checksum-ipv4: off
tx-checksum-ipv6: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp6-segmentation: off
#kubeadm-setup.sh stop
#kubeadm-setup.sh restart
Monday, October 22, 2018
Datapatch CDB / PDB hits ORA-06508
Symptom:
When we apply a PSU patch on a CDB / PDB, we need to run ./datapatch -verbose under OPatch. It reports:
Patch 26963039 apply (pdb PDB$SEED): WITH ERRORS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26963039/21649415/
26963039_apply_CASCDBSB_PDBSEED_2018Mar08_01_32_17.log (errors)
Error at line 113749: sddvffnc: factor=Database_Hostname,error=ORA-06508: PL/SQL: could not find ......
Reason:
Patch 21555660 (Database PSU 12.1.0.2.5, Oracle JavaVM Component) is not in place in the CDB/PDBs. An outage is needed to upgrade this OJVM component so datapatch can pass through. Check both the CDB and the PDBs, as the component applies to each PDB.
sql: select comp_name, version from dba_registry where comp_name like '%JAVA Virtual Machin%' and status = 'VALID';
Solution:
Upgrade OJVM in the CDB and PDBs if it is not in place, to make sure they are on the same page.
Saturday, October 20, 2018
How to Create Docker Images For Oracle DB 18.3 APEX 18.1 and ORDS 18.2
Scope:
We would like to containerize the livesql sandbox. The purpose is to create docker images for Oracle Database 18.3, APEX 18.1 and ORDS 18.2.
Database Part:
- Go to github and download all the scripts of Database 18.3 from the Oracle GitHub
- Refer to the readme doc on the github to understand how the Dockerfile works for the DB
- Put them into a directory (ie /u01/build/db18.3)
- Download LINUX.X64_180000_db_home.zip from OTN and put it in the same directory as the scripts from github (ie /u01/build/db18.3)
- If your servers are behind a proxy, add the below 2 lines into the Dockerfile to let the new image access the internet (change the proxy name if necessary)
- HTTP_PROXY=http://yourproxy.com:80
- HTTPS_PROXY=http://yourproxy.com:80
- cd /u01/build/db18.3 and docker build -t oracle/database:18.3.0-ee .
- It will build the image for Database 18.3 (use docker images to check)
- To create volumes outside docker to hold all datafiles and related config files
- mkdir -p /u01/build/db18.3/oradata
- chown -R 54321:54321 /u01/build/db18.3/oradata (54321 is the UID of the oracle user from the Docker image)
docker run -itd --name testdb -p 1528:1521 -p 5500:5500 -e ORACLE_SID=LTEST -e ORACLE_PDB=ltestpdb -e ORACLE_PWD=<password> -v /u01/build/db18.3/oradata:/opt/oracle/oradata oracle/database:18.3.0-ee
- It will create a new CDB with name LTEST and a new PDB with name ltestpdb for you
- We can run this command again and again. It will detect that the DB was already created and not create a new one
- Use 'docker logs testdb' to check the status
- Use 'docker exec -t testdb /bin/bash' to get into the docker container to inspect
APEX 18.1 Part:
- Go to OTN
- Download the apex18.1 zip
- Upload it to /u01/build/db18.3/oradata/ and unzip it
- chown -R 54321:54321 ./apex
- use 'docker exec -t livesql_testdb /bin/bash' to get into the docker container
- cd /opt/oracle/oradata/apex
- sqlplus / as sysdba
- alter session set container=ltestpdb;
- install APEX inside the docker container
@apexins SYSAUX SYSAUX TEMP /i/
- Run the apex_rest_config command
@apex_rest_config.sql
- Change and unlock the apex related accounts
- alter user APEX_180100 identified by <password>;
- alter user APEX_INSTANCE_ADMIN_USER identified by <password>;
- alter user APEX_LISTENER identified by <password>;
- alter user APEX_PUBLIC_USER identified by <password>;
- alter user APEX_REST_PUBLIC_USER identified by <password>;
- alter user APEX_180100 account unlock;
- alter user APEX_INSTANCE_ADMIN_USER account unlock;
- alter user APEX_LISTENER account unlock;
- alter user APEX_PUBLIC_USER account unlock;
- alter user APEX_REST_PUBLIC_USER account unlock;
ORDS 18.2 Part:
- Go to github and download all the scripts of ORDS 18.2 from the Oracle GitHub
- Refer to the readme doc on the github to understand how the Dockerfile works for ORDS
- Download ORDS 18.2 from OTN
- Put them into a directory (ie /u01/build/ords)
- cd /u01/build/ords and docker build -t oracle/restdataservices:v1 .
- It will build the docker image for ORDS
- To create volumes outside docker to hold all datafiles and related config files
- mkdir -p /u01/build/ords/config/ords
- chown -R 54321:54321 /u01/build/ords/config/ords (54321 is the UID of oracle user from Docker image)
docker run -itd --name testords1 \
--network=ltest_network \
-p 7777:8888 \
-e ORACLE_HOST=<hostname> \
-e ORACLE_PORT=1528 \
-e ORACLE_SERVICE=ltestpdb \
-e ORACLE_PWD= <password> \
-e ORDS_PWD=<password> \
-v /u01/build/ords/config/ords:/opt/oracle/ords/config/ords \
oracle/restdataservices:v1
- It will create a new ORDS standalone and install the ORDS schema for you
- We can run this command again and again. It will detect the config file which was created and not create a new one
- Use 'docker logs testords1' to check the status
- Use 'docker exec -t testords1 /bin/bash' to get into the docker container to inspect
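Once the ORDS container reports it is up, a quick check from the docker host (port 7777 as mapped in the docker run above):
curl -I http://localhost:7777/ords/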
Thursday, October 18, 2018
How To Associate Pod With Service In Kubernetes
The answer is to use a selector.
apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
nodePort: 30301
clusterIP: None
This specification will create a Service which targets TCP port 3306 on Pods with the app: mysql label, in other words, any pods with label app:mysql would be associated with this mysql-service automatically in Kubernetes
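A quick way to confirm the selector is matching pods is to look at the service's endpoints (a sketch; it assumes a pod labeled app: mysql is already running):
kubectl get endpoints mysql-service
kubectl get pods -l app=mysql -o wide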
Tuesday, October 16, 2018
High availability of Oracle DB Pod Practice via Kubernetes Statefulset
Requirement:
It is similar to the Oracle Rac One architecture. The target is to use Kubernetes to manage Oracle DB Pods like Rac One. It has most of the benefits Rac One has, but K8S can't start 2 db pods simultaneously to enable zero downtime. For details of Rac One benefits, please refer to the Oracle Rac One official website.
When one db pod dies or its node dies, Kubernetes starts a new DB pod on the same node or another node. The datafiles are on Oracle File System (NFS) and can be accessed by all nodes associated with Oracle DB Pods. In this example it is labeled as ha=livesqlsb.
Solution:
- Need to make sure the nodes which can run DB pods have the same access to the NFS
- Label nodes with ha=livesqlsb, in our case we have 2 nodes labeled
kubectl label node instance-cas-db2 ha=livesqlsb
node "instance-cas-db2" labeled
kubectl label node instance-cas-mt2 ha=livesqlsb
node "instance-cas-mt2" labeled
- Create a StatefulSet with replicas: 1; the yaml is like
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: livesqlsb-db
labels:
app: livesqlsb-db
spec:
selector:
matchLabels:
ha: livesqlsb
serviceName: livesqlsb-db-service
replicas: 1
template:
metadata:
labels:
ha: livesqlsb
spec:
terminationGracePeriodSeconds: 30
volumes:
- name: livesqlsb-db-pv-storage1
persistentVolumeClaim:
claimName: livesql-pv-nfs-claim1
containers:
- image: oracle/database:18.3v2
name: livesqldb
ports:
- containerPort: 1521
name: livesqldb
volumeMounts:
- mountPath: /opt/oracle/oradata
name: livesqlsb-db-pv-storage1
env:
- name: ORACLE_SID
value: "LTEST"
- name: ORACLE_PDB
value: "ltestpdb"
- We use kubectl drain <nodename> --ignore-daemonsets --force to test node eviction. It shuts down the pod gracefully and waits 30s before starting a new pod on another node.
- Or kubectl delete pod <db pod name> to test pod eviction. It shuts down the pod gracefully and waits 30s before starting a new pod on the same node.
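To watch the failover while testing, something like this is handy (the label is the one defined in the StatefulSet above):
kubectl get pods -l ha=livesqlsb -o wide -w
# in another session, trigger the eviction
kubectl delete pod <db pod name>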
Monday, October 15, 2018
Differences among Port TargetPort nodePort containerPort in Kubernetes
We use the below yaml to explain:
apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
selector:
app: mysql
ports:
- port: 3309
targetPort: 3306
nodePort: 30301
clusterIP: None
This specification will create a Service which targets TCP port 3306 on any Pod with the app: mysql label, and expose it on an abstracted Service port 3309 (targetPort 3306: is the port the container accepts traffic on, port 3309: is the abstracted Service port, which can be any port other pods use to access the Service). nodePort 30301 is to expose the service outside the kubernetes cluster via kube-proxy.
- The port is 3309, which means the service can be accessed by other services in the cluster at port 3309 (it is advised to keep it the same as targetPort). However, when the type is LoadBalancer, port 3309 has a different scope: it is the service port on the LoadBalancer, i.e. the port the LoadBalancer listens on, because the type is not ClusterIP any more.
- The targetPort is 3306, which means the application in the pods is actually listening on port 3306.
- The nodePort is 30301, which means the service can be accessed from outside the cluster via kube-proxy on port 30301 of each node.
- containerPort is similar to targetPort, but it is used in the pod definition yaml.
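A quick connectivity sketch for two of the ports above, using nc as elsewhere in this blog (pod IP and node IP are placeholders; the Service port behaves differently in this particular yaml because it uses clusterIP: None):
# directly against the pod: targetPort / containerPort
nc -vz <pod ip> 3306
# from outside the cluster: nodePort on any worker node
nc -vz <node ip> 30301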
How To Use Python To Backup DB Files To Oracle OCI Object Storage
Please refer to my other related blogs. This works not just for DB files and archivelogs etc., but for all files in the OS.
Python3 OCI SDK Create Bucket and Upload Files Into OCI Object Storage
Python3 OCI SDK Download And Delete Files From OCI Object Storage
Saturday, October 13, 2018
How To Add PersistentVolume of K8S From Oracle OCI File System(NFS)
You need to create the File System and mount targets in OCI first, then we can let K8S mount and use them. Please refer to the official Oracle doc.
Then create the NFS PV and PVC in K8S:
- Create a Persistent Volume for DB NFS file storage. /cas-data is the mount target created in the OCI File System. The yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: livesqlsb-pv-nfs-volume1
spec:
capacity:
storage: 300Gi
accessModes:
- ReadWriteMany
nfs:
path: "/cas-data"
server: 100.106.148.12
- Create a Persistent Volume Claim for DB NFS file storage. The yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: livesql-pv-nfs-claim1
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 300Gi
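After creating both objects, check that the claim binds to the volume (names as in the yaml above):
kubectl get pv livesqlsb-pv-nfs-volume1
kubectl get pvc livesql-pv-nfs-claim1
# STATUS should show Bound for both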
How To Create Oracle 18.3 DB on NFS In Kubernetes
Requirement:
We have an existing docker image for Oracle DB 18.3 which is running fine. We need to move it to the kubernetes cluster which is running on the same host.
Solution:
- Label nodes for nodeSelector usages
kubectl label nodes instance-cas-db2 dbhost=livesqlsb
kubectl label nodes instance-cas-mt2 mthost=livesqlsb
- To Create: kubectl create -f <yaml file>
- Create a Persistent Volume for DB NFS file storage. The yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: livesqlsb-pv-nfs-volume1
spec:
capacity:
storage: 300Gi
accessModes:
- ReadWriteMany
nfs:
path: "/cas-data"
server: 100.106.148.12
- Create a Persistent Volume Claim for DB NFS file storage. The yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: livesql-pv-nfs-claim1
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 300Gi
- Create Service for DB to be accessed by other Apps in the K8S cluster. yaml is like
apiVersion: v1
kind: Service
metadata:
labels:
app: livesqlsb-db
name: livesqlsb-db-service
namespace: default
spec:
clusterIP: None
ports:
- port: 1521
protocol: TCP
targetPort: 1521
selector:
app: livesqlsb-db
- Create DB Pod in the K8S cluster. yaml is like
apiVersion: v1
kind: Pod
metadata:
name: livesqlsb-db
labels:
app: livesqlsb-db
spec:
volumes:
- name: livesqlsb-db-pv-storage1
persistentVolumeClaim:
claimName: livesql-pv-nfs-claim1
containers:
- image: oracle/database:18.3v2
name: livesqldb
ports:
- containerPort: 1521
name: livesqldb
volumeMounts:
- mountPath: /opt/oracle/oradata
name: livesqlsb-db-pv-storage1
env:
- name: ORACLE_SID
value: "LTEST"
- name: ORACLE_PDB
value: "ltestpdb"
nodeSelector:
dbhost: livesqlsb
How To Push/Pull Docker Images Into Oracle OKE Registry
Requirement:
We have built some customized docker images for our apps. We need to upload them to the OKE registry (OCIR) so they can be used by OKE later. Please refer to the official Oracle doc.
Solution:
- Make sure you have correct privileges to push images to OCI registry. You need your tenancy admin to update the policies to allow you to do that
- Generate Auth Token from OCI user settings. see details in official oracle doc
- On the host where your docker images are, use docker to login
docker login phx.ocir.io (we use phoenix region)
If users are federated with another directory service:
Username: <tenancy-namespace>/<federation name>/test.test@oracle.com
i.e. mytenancy-namespace/corp_login_federate/test.test@oracle.com
If no federation, remove <federation name>
Password: <The Auth token you generated before>
Login Succeeded.
- Tag the images you would like to upload
docker tag hello-world:latest <region-code>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>
docker tag hello-world:latest phx.ocir.io/peo/engops/hello-world:latest
- Remember to add "repo-name"
- Push the image to registry
docker push phx.ocir.io/peo-namespace/engops/hello-world:latest
- Pull the image
docker pull phx.ocir.io/peo-namespace/engops/hello-world
- To use it in K8S yaml file, we need to add secret for docker login. Refer k8s doc and oci doc for details
kubectl create secret docker-registry iad-ocir-secret --docker-server=iad.ocir.io --docker-username='<tenancy-namespace>/<federation name>/test.test@oracle.com' --docker-password='******' --docker-email='test@test.com'
part of sample yaml is like
spec:
containers:
- name: helloworld
# enter the path to your image, be sure to include the correct region prefix
image: <region-code>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>
ports:
- containerPort: 80
imagePullSecrets:
# enter the name of the secret you created
- name: <secret-name>
Python3 OCI SDK Download And Delete Files From OCI Object Storage
Requirement:
We need to use OCI Object Storage for backup purposes: we need to download backup files, and also delete obsolete backup files. Before we do that, we need to set up the config file for the OCI SDK with the correct user credential, tenancy, compartment_id ... etc. Refer to my blog for an example:
Solution:
Download files example:
#!/u01/python3/bin/python3
import oci
import argparse
parser = argparse.ArgumentParser(description= 'Download files from Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to download from ')
parser.add_argument('files_location',help='The full path of location to save downloaded files, ie /u01/archivelogs')
parser.add_argument('prefix_files',nargs='*',help='The filenames to download, No wildcard needed, ie livesql will match livesql*')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
mybucketname = args.bucketname
retrieve_files_loc = args.files_location
prefix_files_name = args.prefix_files
print(args)
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
listfiles = object_storage.list_objects(namespace,mybucketname,prefix=prefix_files_name)
#print(listfiles.data.objects)
for filenames in listfiles.data.objects:
get_obj = object_storage.get_object(namespace, mybucketname,filenames.name)
with open(retrieve_files_loc+'/'+filenames.name,'wb') as f:
for chunk in get_obj.data.raw.stream(1024 * 1024, decode_content=False):
f.write(chunk)
print(f'downloaded "{filenames.name}" in "{retrieve_files_loc}" from bucket "{mybucketname}"')
Delete files example:
#!/u01/python3/bin/python3
import oci
import sys
import argparse
parser = argparse.ArgumentParser(description= 'Delete files from Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to delete from ')
parser.add_argument('prefix_files',nargs='*',help='The filenames to delete, No wildcard needed, ie livesql will match livesql*')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
mybucketname = args.bucketname
prefix_files_name = args.prefix_files
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
listfiles = object_storage.list_objects(namespace,mybucketname,prefix=prefix_files_name)
#print(listfiles.data.objects)
#bool(listfiles.data.objects)
if not listfiles.data.objects:
print('No files found to be deleted')
sys.exit()
else:
for filenames in listfiles.data.objects:
print(f'File in Bucket "{mybucketname}" to be deleted: "{filenames.name}"')
deleteconfirm = input('Are you sure to delete above files? answer y or n :')
if deleteconfirm.lower() == 'y':
for filenames in listfiles.data.objects:
object_storage.delete_object(namespace, mybucketname,filenames.name)
print(f'deleted "{filenames.name}" from bucket "{mybucketname}"')
else:
print('Nothing deleted')
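Hypothetical usage of the two scripts above, assuming they are saved as oci_download.py and oci_delete.py and made executable (the script and bucket names are placeholders):
./oci_download.py mybackupbucket /u01/archivelogs livesql
./oci_delete.py mybackupbucket livesql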
Make the script executable without the python interpreter
pip install pyinstaller
pyinstaller -F < your python script>
In the dist folder, you will see the executable file of your python script.
Remember it needs the ~/.oci/config file and the ~/.oci/ OCI API key file (these 2 files) to log in to Oracle OCI.
Otherwise you may get an error like:
oci.exceptions.ConfigFileNotFound: Could not find config file at /root/.oci/config