Wednesday, December 18, 2019
Tip: Kubectl Delete Options
A few options to delete resources in K8S:
- kubectl delete ****
- kubectl delete **** --force --grace-period=0
- kubectl delete **** --force --grace-period=0 --wait=false
- kubectl version --- shows both the k8s client and server versions
Sunday, December 15, 2019
Tip: OPA Gatekeeper Rego nodeSelector Constraint Template
Symptom:
We start to use OPA Gatekeeper for our Kubernetes clusters; refer to https://github.com/open-policy-agent/gatekeeper. We try to enforce that all pods, deployments, etc. have an assigned nodeSelector. We had some issues; the details can be found in the github link.
Solutions:
The Rego constraint template is like this:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowednodeselector
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedNodeselector
        listKind: K8sAllowedNodeselectorList
        plural: k8sallowednodeselector
        singular: k8sallowednodeselector
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: object
                properties:
                  key:
                    type: string
                  allowedvalue:
                    type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowednodeselector

        key1 := { k | input.review.object.spec.nodeSelector[k] }
        key2 := { k | input.review.object.spec.template.spec.nodeSelector[k] }
        mykey := key1 | key2

        # Make sure all required selectors are present in the template spec (covers deployment, replicaset, sts, etc.)
        violation[{"msg": msg}] {
          provided := mykey
          required := {label | label := input.parameters.labels[_].key}
          missing := required - provided
          expected := input.parameters.labels[_]
          count(missing) > 0
          msg := sprintf("Missing nodeSelector label <%v: %v>, or too many nodeSelector labels; only 1 nodeSelector label is allowed.", [expected.key, expected.allowedvalue])
        }

        # Make sure that ONLY required selectors are used
        violation[{"msg": msg}] {
          provided := mykey
          required := {label | label := input.parameters.labels[_].key}
          missing := provided - required
          expected := input.parameters.labels[_]
          count(missing) > 0
          msg := sprintf("Missing nodeSelector label <%v: %v>, or too many nodeSelector labels; only 1 nodeSelector label is allowed.", [expected.key, expected.allowedvalue])
        }

        # Make sure the required selector values are correct in the template spec (deployment, replicaset, sts, etc.)
        violation[{"msg": msg}] {
          value := input.review.object.spec.template.spec.nodeSelector[key]
          expected := input.parameters.labels[_]
          expected.key == key
          not expected.allowedvalue == value
          msg := sprintf("Value in label <%v: %v> does not satisfy allowed value <%v: %v>", [key, value, expected.key, expected.allowedvalue])
        }

        # Make sure the required selector values are correct in the pod spec
        violation[{"msg": msg}] {
          value := input.review.object.spec.nodeSelector[key]
          expected := input.parameters.labels[_]
          expected.key == key
          not expected.allowedvalue == value
          msg := sprintf("nodeSelector of Pod <%v: %v> does not satisfy allowed value <%v: %v>", [key, value, expected.key, expected.allowedvalue])
        }
Monday, December 09, 2019
How to Refer Key and Value in Key-Value pair in OPA Gatekeeper in Rego
Symptom:
We start to use OPA Gatekeeper for our Kubernetes clusters; refer to https://github.com/open-policy-agent/gatekeeper for more details. When we code policies for Kubernetes using OPA (Open Policy Agent) Rego, we would like to reference the "key" name and the "value" in a nodeSelector key-value pair, i.e. we have:
nodeSelector:
  app: mytest
We would like to reference "app" (the key) and "mytest" (the value) in our OPA Gatekeeper policy.
Solution:
The easy way to do it is:
myvalue := input.review.object.spec.nodeSelector[mykey]
The variable mykey will hold "app", and the variable myvalue will hold "mytest". Both are strings.
To get "set" , we need to use special way to achieve it:
To get "set" for key :
provided := {mykey | input.review.object.spec.nodeSelector[mykey]}To get set for value:
provided := {myvalue | myvalue := input.review.object.spec.nodeSelector[_]}
Tip: OPA Rego error minus: operand 1 must be one of {number, set} but got string
Symptom:
We start to use OPA Gatekeeper for our Kubernetes clusters; refer to https://github.com/open-policy-agent/gatekeeper. When we code policies for Kubernetes using OPA (Open Policy Agent) Rego, part of the code is like below:
violation[{"msg": msg}] {
provided := input.review.object.spec.nodeSelector[label]
required := input.parameters.labels[_].key
missing := required - provided
expected := input.parameters.labels[_]
count(missing) > 0
msg := sprintf("Missing nodeSelector label <%v: %v>, or too many nodeSelector labels,only 1 nodeSelector lable is allowed.< %v:%v>",[expected.key,expected.allowedvalue,provided,required])
eval_type_error: minus: operand 1 must be one of {number, set} but got string): error when creating "access-pod.yaml": admission webhook "validation.gatekeeper.sh" denied the request: admission.k8s.gatekeeper.sh: templates["admission.k8s.gatekeeper.sh"]["K8sAllowedNodeselector"]:5: eval_type_error: minus: operand 1 must be one of {number, set} but got string
Solution:
In missing := required - provided, both variables are strings; the minus operator can't work on strings, so we need to convert them into numbers or sets. The right code is:
provided := {label | input.review.object.spec.nodeSelector[label]}
required := {label | label := input.parameters.labels[_].key}
Wednesday, November 27, 2019
Error: You must be logged in to the server (Unauthorized)
Symptom:
When users try to list pods in OKE (Oracle Kubernetes Engine) via kubectl get po, it errors out as below:
error: You must be logged in to the server (Unauthorized)
Solution:
It is quite possible the users don't have the correct privileges in Oracle OCI IAM. Users need to be in a group which has a "USE" (or higher, "MANAGE") policy for OKE clusters, i.e.
Allow group <group-name> to use cluster-family in <location>
Saturday, November 09, 2019
Tip: RBAC Comparison Oracle DB vs Kubernetes
This is for Oracle DBAs to better understand how Kubernetes RBAC works. They both have similar RBAC concepts:
| Oracle Database | Kubernetes |
| --- | --- |
| dba role | cluster-admin role |
| grant dba role | grant cluster-admin role |
| create apps-user role to access tablespace example only | create apps-user role to access namespace example only |
| create apps-user | create apps-user or service account |
| grant apps-user role to apps-user | role-binding apps-user role to apps-user |
| apps-users work happily in tablespace example | apps-users work happily in namespace example |
How to Segregate Applications in a Kubernetes Cluster without Compromising the Cluster-Admin Role
Requirement:
In the enterprise world, we often have a few applications running on the same Kubernetes cluster. Each application owner would like to operate on their own application without interfering with other applications. We would not like to grant cluster-admin to application owners for security reasons; meanwhile, application owners should have full privileges within their own application scope. This is for Oracle DBAs to better understand how Kubernetes RBAC works. They both have similar RBAC concepts:
| Oracle Database | Kubernetes |
| --- | --- |
| dba role | cluster-admin role |
| grant dba role | grant cluster-admin role |
| create apps-user role to access tablespace example only | create apps-user role to access namespace example only |
| create apps-user | create apps-user or service account |
| grant apps-user role to apps-user | role-binding apps-user role to apps-user |
| apps-users work happily in tablespace example | apps-users work happily in namespace example |
Solution:
- Create namespace for each application
- Cluster admin creates a role, serviceaccount, and rolebinding for each application. Below is an example yaml file:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test-apps-ns
  name: test-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oke-test-user
  namespace: test-apps-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test-apps-ns
  name: test-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: oke-test-user
  namespace: test-apps-ns
Wednesday, October 23, 2019
Example of OKE ClusterRolebinding for User OCID of Oracle Cloud
Commands:
$ kubectl create rolebinding hxie-rolebinding --role=livesql-apps --user=ocid1.user.oc1..aaaaa...tx5a
$ kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user_OCID>
Yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2019-10-23T23:24:30Z"
  name: hxie_clst_adm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ocid1.user.oc1......uvl7ria
Refer doc: https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengaboutaccesscontrol.htm
Sunday, September 29, 2019
Tip: NC to test Kubernetes DNS Port
Kube-DNS listens on port 53 (UDP and TCP).
UDP port 53:
nc -vzu 10.96.5.5 53
TCP port: 53:
nc -vz 10.96.5.5 53
Thursday, September 26, 2019
Tip: X-Forwarded-Proto in APEX
The auth scheme is configured to use https. It redirects to EMAIL_INSTANCE_URL if it's not https. Since https terminates at the LB, APEX thinks it has to do this redirect.
There are 2 ways to disable it.
One option is to set the use_secure_cookie_yn flag to N.
The other is to pass the information that we are using https to ORDS and APEX.
You can do that with the X-Forwarded-Proto header
https://webmasters.stackexchange.com/questions/97005/setting-x-forwarded-proto-under-apache-2-4
That should do the trick: RequestHeader set X-Forwarded-Proto "https"
Tip: SQL to Generate SQL to Turn on Autoextend for All Datafiles
select
'alter database datafile '||''''||file_name||''''||' autoextend on maxsize unlimited;' from dba_data_files;
Wednesday, September 18, 2019
Tip: Use Plink in Putty for Bastion Access
Symptom:
When we first set up plink in PuTTY to go through a bastion, we often get this error: "incoming packet was garbled on decryption"
Solution:
There are quite a few reasons for that. One of them is that, on the first connection, plink asks the user to confirm whether to store the host key in its cache. Because plink runs as a proxy command, the user can't answer, so we get this "incoming packet was garbled on decryption" error, which is unrelated to the real cause. To fix it, run the command below once so plink caches the key; it won't ask again next time.
$ plink opc@<bastion server> -nc <target host>:22
The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is.
The server's ssh-ed25519 key fingerprint is:
ssh-ed25519 255 d7:56:12:9f:2a:ee:d2:55:24:5a:73:dc:a0:f2
If you trust this host, enter "y" to add the key to
PuTTY's cache and carry on connecting.
If you want to carry on connecting just once, without
adding the key to the cache, enter "n".
If you do not trust this host, press Return to abandon the
connection.
Store key in cache? (y/n) y
Tip: Create tls secret with key cert and ca cert files in Kubernetes
Requirement:
We need to create TLS secrets in Kubernetes for our Oracle OCI load balancer (refer to the doc). However, the command only accepts key and cert files:
"kubectl create secret tls ssl-certificate-secret --key tls.key --cert tls.crt"
There is no option to add the CA certificate file here.
Solution:
We need to combine the CA certificate files with the cert file to form one cert file for Kubernetes. We simply copy the content of the CA certificate files and append it at the end of the cert file.
Tuesday, August 27, 2019
Tip: Clean evicted pods and dangling docker images
Clean evicted pods
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
Clean dangling docker images
A dangling image is one that is not tagged and is not referenced by any container.
docker image prune -a -f --filter "until=24h"
Wednesday, August 14, 2019
Error: no kind is registered in scheme pkg/runtime/scheme.go:101
Symptom:
When we create a controller/operator via kubebuilder 2.0, we add the Deployment type in our controller. But it errors out when we "make run":
no kind is registered for the type v1.Deployment in scheme "pkg/runtime/scheme.go:101"
Solution:
Per kubebuilder 2.0, "Every set of controllers needs a Scheme, which provides mappings between Kinds and their corresponding Go types." We need to add the Deployment type and all other related types to the scheme; then we can use those objects in our controller.
Sample code:
import (
	"flag"
	"os"

	theapexordsv1 "apexords-operator/api/v1"
	"apexords-operator/controllers"
	appsv1beta1 "k8s.io/api/apps/v1beta1"
	corev1 "k8s.io/api/core/v1"
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	appsv1beta1.AddToScheme(scheme)
	appsv1.AddToScheme(scheme)
	corev1.AddToScheme(scheme)
	theapexordsv1.AddToScheme(scheme)
}
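For context, a minimal sketch (assuming the imports and variables above; flag parsing, metrics address, and webhook/controller setup omitted) of how the populated scheme is handed to the controller-runtime manager, so its client and caches know how to encode and decode those kinds:
```go
func main() {
	// The scheme built in init() is passed to the manager; without the
	// registered types, any Get/List/Create on them fails with
	// "no kind is registered".
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme: scheme,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
```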
Error finding current repository: could not determine repository path from module data, package data, or by initializing a module: go
Symptom:
When we run kubebuilder init, we get the error below:
$ kubebuilder init --domain my.domain
2019/05/29 16:23:45 error finding current repository: could not determine repository path from module data, package data, or by initializing a module: go: cannot determine module path for source directory /home/henryxie/go/kubebuilder-src/my.domain/ (outside GOPATH, no import comments)
Solution:
It is because go mod init did not run properly. To fix it:
run go mod init <module name>, e.g. go mod init myfirstcontroller
then kubebuilder init --domain my.domain
Tuesday, August 06, 2019
Tip: How to Git Push Passwordless via SSH for Multiple Internal and External Repositories
Requirement:
Sometimes we have projects where we need to commit and push changes to multiple git repositories. Some repositories are internal, some are external. We would like to set up passwordless access for all of them.
Solution:
- Set up an ssh key for all the git repositories, so we can push without a password. Refer to the github doc.
- Set up ssh config to use a proxy to ssh to external git repositories (e.g. github.com) if we are behind a proxy on the intranet.
- vi .ssh/config , example below
Host=github.com
ProxyCommand=socat - PROXY:your.proxy.ip:%h:%p,proxyport=3128,proxyauth=user:pwd
- git remote -v
- git remote set-url --add --push origin git+ssh://original/repo.git
- git remote set-url --add --push origin git+ssh://another/repo.git
Monday, August 05, 2019
Automation Tool to Create Http Ords and Loadbalancer in K8S
Requirement:
A kubectl plugin that creates httpd and ORDS (Oracle REST Data Services) deployments based on APEX (Oracle Application Express) 19.1.
Once we have APEX ready, we often need to provision httpd and ORDS for it, so we would like to automate the httpd, ORDS, and loadbalancer deployment in K8S. Once we have the db hostname, port, sys password, and apex/ords password, we can deploy a brand new httpd + ORDS + loadbalancer env via 1 command, and delete it via 1 command. The ords image is based on the docker images from Oracle's github.
Solution:
Full details and source codes are on github repository
Automation Tool to Create Database 19.2 in K8S
Requirement:
A kubectl plugin that creates a statefulset of Oracle Database 19.2 in your Kubernetes cluster or minikube. You get the full power of Oracle Database 19.2 in about 10-20 min (the first run needs more time to download the docker image), and you can access it from your laptop (assuming the ports are open).
Solution:
Full details and source codes are on github repository
Automation Tool to Create Apex 19.1 in K8S
Requirement:
A kubectl plugin to provision APEX (Oracle Application Express). APEX is the foundation of many applications, and we often need to provision it for test, stage, and prod, so we would like to automate APEX 19.1 deployment on an Oracle DB.
This database can be a DB in the cloud (AWS, Azure, GCP, OCI), a DB in a VM, or a DB pod in K8S. Once we have the db hostname, port, and sys password, we can deploy a brand new APEX 19.1 env via 1 command, and delete it via 1 command.
Solution:
Full details and source codes are on github repository
Tip: ClusterFirst vs ClusterFirstWithHostNet in Kubernetes Pod DNS config
Symptom:
We got the error below when we start 2 pods on the same host. One pod starts successfully and the other stays in Pending status and can't start running. The error is like:
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
Reason:
We have these 2 entries in the deployment yaml files:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
They mean the pod uses the host network: the first pod takes the port, and the 2nd pod can't use the same port on the host, hence the error above. After we remove these 2 entries and restart the deployment, the issue is fixed. The default dnsPolicy is ClusterFirst; refer to the pod DNS config settings doc.
Thursday, July 25, 2019
Error: cni config uninitialized when creating Kubernetes Cluster
Symptom:
When we create a kubernetes cluster, we see the error below in the kubelet logs (journalctl -r -u kubelet). Docker can't pull any images from the registry, so the creation failed.
Jul 18 06:13:17 oke-cytsnjqmizt-nsdomrwmnrt-sjr43hcwtea-0 kubelet[17065]: W0718 06:13:17.513278 17065 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 18 06:13:17 oke-cytsnjqmizt-nsdomrwmnrt-sjr43hcwtea-0 kubelet[17065]: E0718 06:13:17.515774 17065 kubelet.go:2167] Container runtime network
not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 18 06:13:22 oke-cytsnjqmizt-nsdomrwmnrt-sjr43hcwtea-0 kubelet[17065]: W0718 06:13:22.518341 17065 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 18 06:13:22 oke-cytsnjqmizt-nsdomrwmnrt-sjr43hcwtea-0 kubelet[17065]: E0718 06:13:22.519319 17065 kubelet.go:2167] Container runtime network
not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Solution:
It turns out that the DNS servers had issues at that time: 2 of the 3 DNS servers were not working well, and only the last one was healthy. We had to comment out the 2 failing DNS servers in /etc/resolv.conf on all worker nodes and leave the good one. After that, the issue was gone.
Another possible reason for this issue: if Pod Security Policy is enabled for your kubernetes cluster, you need a policy that allows system pods (e.g. kube-dns, flannel, etc.) to be created in kube-system.
Some quotes from https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Pod security policy control is implemented as an optional (but recommended) admission controller. PodSecurityPolicies are enforced by enabling the admission controller, but doing so without authorizing any policies will prevent any pods from being created in the cluster.
Sunday, July 21, 2019
BPF Hello World Examples
What is BPF:
Quoted from this doc: BPF is a highly flexible and efficient virtual-machine-like construct in the Linux kernel that allows executing bytecode at various hook points in a safe manner. It is used in a number of Linux kernel subsystems, most prominently networking, tracing and security (e.g. sandboxing).
Github BPF Hello World examples
Tuesday, July 09, 2019
Tip to Rolling Restart Kubernetes Deployment Statefulset Daemonset
From kubectl 1.15.0, kubectl supports rolling restarts of Deployments, StatefulSets and DaemonSets:
kubectl rollout restart deployment <name>
kubectl rollout restart statefulset <name>
kubectl rollout restart daemonset <name>
Sunday, June 30, 2019
Error: cannot list resource "deployments" in API group "apps" at the cluster scope
Symptom:
We have an operator running in the cluster, and it errors out when creating a deployment. The error is like:
cannot list resource "deployments" in API group "apps" at the cluster scope
Solution:
It is because the clusterrole granted to the operator lacks the permission to list/create deployments. We need to add this permission to the role, along with statefulsets, secrets, etc. A sample of the clusterrole rules is below:
- apiGroups:
  - ""
  resources:
  - pods
  - secrets
  - services
  - configmaps
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - '*'
Monday, June 24, 2019
How To Run Docker Without Sudo
- sudo groupadd docker
- sudo usermod -aG docker <username>
- log out of all sessions, not only terminals but also the desktop
- login again
- to test: docker run hello-world
Saturday, June 15, 2019
Error: expected ';', found '{' in Golang
Symptom:
When we write Go code for a Kubernetes OwnerReference, we get the error expected ';', found '{'. The code is like:
var oradbownerref = []metav1.ObjectMeta.OwnerReference{{
	Kind:       apexords.TypeMeta.Kind,
	APIVersion: apexords.TypeMeta.APIVersion,
	Name:       apexords.ObjectMeta.Name,
	UID:        apexords.ObjectMeta.UID,
}}
Solution:
It is because OwnerReference lives at the metav1 package level, not under metav1.ObjectMeta. The correct code is:
var oradbownerref = []metav1.OwnerReference{{
	Kind:       apexords.TypeMeta.Kind,
	APIVersion: apexords.TypeMeta.APIVersion,
	Name:       apexords.ObjectMeta.Name,
	UID:        apexords.ObjectMeta.UID,
}}
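As a usage sketch (the parent fields mirror the apexords object above; the child StatefulSet and the helper name are hypothetical), the slice is then attached to a child object's metadata so Kubernetes garbage-collects the child when the parent custom resource is deleted:
```go
import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setOwner copies the parent's identity into an OwnerReference on the child,
// so deleting the parent cascades to the child StatefulSet.
func setOwner(parentType metav1.TypeMeta, parentMeta metav1.ObjectMeta, child *appsv1.StatefulSet) {
	child.OwnerReferences = []metav1.OwnerReference{{
		Kind:       parentType.Kind,
		APIVersion: parentType.APIVersion,
		Name:       parentMeta.Name,
		UID:        parentMeta.UID,
	}}
}
```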
Thursday, June 13, 2019
Example of Pod Struct with ConfigMap ImagePullSecrets in Client-GO
typeMetadata := metav1.TypeMeta{
	Kind:       "Pod",
	APIVersion: "v1",
}
objectMetadata := metav1.ObjectMeta{
	Name:      "ordspod",
	Namespace: o.UserSpecifiedNamespace,
}
configmapvolume := &corev1.ConfigMapVolumeSource{
	LocalObjectReference: corev1.LocalObjectReference{Name: "test-configmap"},
}
podSpecs := corev1.PodSpec{
	ImagePullSecrets: []corev1.LocalObjectReference{{
		Name: "test-secret",
	}},
	Volumes: []corev1.Volume{{
		Name: "ords-config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: configmapvolume,
		},
	}},
	Containers: []corev1.Container{{
		Name:  "ordspod",
		Image: "ords:v19",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "ords-config",
			MountPath: "/mnt/k8s",
		}},
	}},
}
pod := corev1.Pod{
	TypeMeta:   typeMetadata,
	ObjectMeta: objectMetadata,
	Spec:       podSpecs,
}
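To round out the example, a hedged sketch of submitting the pod built above with a clientset (the pre-1.17 client-go Create signature, matching the era of the other snippets here; clientset construction and the function name are assumptions):
```go
import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// createPod sends the assembled corev1.Pod to the API server and reports the result.
func createPod(clientset *kubernetes.Clientset, pod *corev1.Pod) error {
	created, err := clientset.CoreV1().Pods(pod.Namespace).Create(pod)
	if err != nil {
		return fmt.Errorf("error creating pod: %v", err)
	}
	fmt.Printf("created pod %s/%s\n", created.Namespace, created.Name)
	return nil
}
```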
Tip to Redirect Root of a Domain Url
Requirement:
Sometimes we need to redirect <domain.com> (only the root) to <domain.com>/apex or <domain.com>/apps.
Solution:
Use RedirectMatch. Examples:
RedirectMatch ^/$ /apex or RedirectMatch ^/$ /apps
Monday, June 10, 2019
Error: the server could not find the requested resource
Symptom:
We follow https://book.kubebuilder.io/ and create an example. It errors out when we do "make run":
the server could not find the requested resource (put cronjobs.batch.mycrontab cronjob-sample)"}
Solution:
After I checked cronjob_types.go and added the marker // +kubebuilder:subresource:status right above the CronJob type definition, the issue was fixed.
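For reference, a sketch of roughly where the marker sits, following the kubebuilder book's CronJob example (field names come from the book, not from this project's file):
```go
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// CronJob is the Schema for the cronjobs API
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}
```
The subresource marker makes controller-gen add a /status subresource to the generated CRD, which is what the failing status update (the "put cronjobs..." request) was missing.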
Tip to Understand Stateless and Stateful Firewall Security Rules
Stateful firewall rules keep session state, so they allow two-way traffic (both inbound and outbound).
Stateless firewall rules are one-way ACL checks that control traffic in one direction; normally they come in pairs of inbound and outbound rules.
Here is a copy from the web link:
STATELESS Firewalls
Stateless firewalls watch network traffic and restrict or block packets based on source and destination addresses or other static values. They’re not ‘aware’ of traffic patterns or data flows. A stateless firewall uses simple rule-sets that do not account for the possibility that a packet might be received by the firewall ‘pretending’ to be something you asked for.
A stateless firewall filter, also known as an access control list (ACL), does not statefully inspect traffic. Instead, it evaluates packet contents statically and does not keep track of the state of network connections.
Purpose of Stateless Firewall Filters
The basic purpose of a stateless firewall filter is to enhance security through the use of packet filtering. Packet filtering enables you to inspect the components of incoming or outgoing packets and then perform the actions you specify on packets that match the criteria you specify. The typical use of a stateless firewall filter is to protect the Routing Engine processes and resources from malicious or untrusted packets.
STATEFUL Firewall
Stateful firewalls can watch traffic streams from end to end. They are aware of communication paths and can implement various IP Security (IPsec) functions such as tunnels and encryption. In technical terms, this means that stateful firewalls can tell what stage a TCP connection is in (open, open sent, synchronized, synchronization acknowledge or established). It can tell if the MTU has changed and whether packets have fragmented. etc.
Neither is really superior and there are good arguments for both types of firewalls. Stateless firewalls are typically faster and perform better under heavier traffic loads. Stateful firewalls are better at identifying unauthorized and forged communications.
Tip to Verify KubeDNS is Working From Host
Requirement:
Sometimes we need to verify KubeDNS is working from the host OS (i.e. the VM).
Solution:
- Find KubeDNS service IP address via command
kubectl run -i --tty busybox --image=busybox --restart=Never -- cat /etc/resolv.conf
- Install nc if necessary, e.g. yum install nc
- If the IP address is, say, 10.96.5.5, run a while loop to check that each KubeDNS pod is responding. You may get a timeout or a "can't resolve" error if one of the pods is not working:
while true;do nc -vz 10.96.5.5 53;sleep 3; done
- More debug details please refer K8S doc
Tip Example to Create Tablespace in the Same Location
Requirement:
We need to create a new tablespace. The default datafile would be created under the Oracle DB home if db_create_file_dest is not set. We don't want that to happen; we would like to use the existing datafile location for the new tablespace.
Solution:
The sample SQL we use is:
declare v_datafile VARCHAR2(100);
begin
select ((select regexp_substr(name,'^.*/\') from v$datafile where rownum = 1)||'livesqldata01.dbf')
into v_datafile from dual;
execute immediate 'create tablespace LIVESQL datafile '''||v_datafile||''' size 50M reuse autoextend on';
end;
/
Sunday, May 12, 2019
Tip to Get Output From A Command Running Inside Pod
Symptom:
We write a client-go program to run a simple command, e.g. pwd, inside a pod. It runs fine with no error, but we don't see the pwd output on the standard OS output. Partial sample code is like:
func ExecPodCmd(o *KubeOperations, Podname, Podnamespace string, SimpleCommand []string) error {
	SimpleCommand := []string{"pwd"}
	execReq := o.clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(Podname).
		Namespace(Podnamespace).
		SubResource("exec")
	execReq.VersionedParams(&corev1.PodExecOptions{
		Command: SimpleCommand,
		Stdin:   true,
		Stdout:  true,
		Stderr:  true,
	}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(o.restConfig, "POST", execReq.URL())
	if err != nil {
		return fmt.Errorf("error while creating Executor: %v", err)
	}
	err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
		Tty:    false,
	})
	if err != nil {
		return fmt.Errorf("error in Stream: %v", err)
	}
	return nil
}
Solution:
It turns out we need to run the simple command through a shell with -c to get proper OS output. Replace SimpleCommand := []string{"pwd"} with SimpleCommand := []string{"/bin/sh", "-c", "pwd"}.
Error: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr
Symptom:
We would like to run a simple command, e.g. pwd, in a pod via client-go. It errors on the part of the code below:
err = exec.Stream(remotecommand.StreamOptions{
	Stdin:  stdin,
	Stdout: stdout,
	Stderr: stderr,
	Tty:    false,
})
Solution:
We can simply use os.Stdin, os.Stdout and os.Stderr to send the command output from the pod to our standard OS output. The correct code is like:
err = exec.Stream(remotecommand.StreamOptions{
	Stdin:  os.Stdin,
	Stdout: os.Stdout,
	Stderr: os.Stderr,
	Tty:    false,
})
Friday, May 10, 2019
Error: cannot use "k8s.io/api/core/v1".ConfigMapVolumeSource literal (type "k8s.io/api/core/v1".ConfigMapVolumeSource) as type *"k8s.io/api/core/v1".ConfigMapVolumeSource in field value
Symptom:
Related code:
o.pgreplicamaster.Spec.Template.Spec.Volumes = []corev1.Volume{
	{Name: "pgreplica-config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: o.UserSpecifiedCM},
			},
		},
	},
}
Compile Error
cannot use "k8s.io/api/core/v1".ConfigMapVolumeSource literal (type "k8s.io/api/core/v1".ConfigMapVolumeSource) as type *"k8s.io/api/core/v1".ConfigMapVolumeSource in field value
Solution:
Refer to the VolumeSource k8s doc. ConfigMap is supposed to be a pointer *ConfigMapVolumeSource, not a ConfigMapVolumeSource value, so we need to add &. Meanwhile, we can't mix keyed and unkeyed fields in a composite literal; refer to the stackoverflow link.
Correct code is:
o.pgreplicamaster.Spec.Template.Spec.Volumes = []corev1.Volume{
	{Name: "pgreplica-config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: o.UserSpecifiedCM},
			},
		},
	},
}
Error: spec.volumes[1].configMap.name: Required value, spec.containers[0].volumeMounts[1].name: Not found
Symptom:
We use the partial yaml below to create a statefulset that mounts a volume for a configmap. It is successful:
apiVersion: apps/v1
kind: StatefulSet
......
spec:
  volumes:
  - name: pgreplica-config
    configMap:
      name: pgconfigmap
........
When we try to use client-go to do the same thing.
Related codes
o.pgreplicamaster.Spec.Template.Spec.Volumes = []corev1.Volume{
	{Name: "pgreplica-config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				Items: corev1.KeyToPath{{Key: "name", Path: pgconfigmap}},
			},
		},
	},
}
We hit error
Error: spec.volumes[1].configMap.name: Required value, spec.containers[0].volumeMounts[1].name: Not found
Solution
Refer to the k8s doc of ConfigMapVolumeSource. There is a LocalObjectReference field which we should use instead of Items.
Update the code as below to make it work:
o.pgreplicamaster.Spec.Template.Spec.Volumes = []corev1.Volume{
	{Name: "pgreplica-config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "pgconfigmap"},
			},
		},
	},
}
Example of Updating ConfigMap name in Client-go
Option 1:
o.pgreplicamaster.Spec.Template.Spec.Volumes[0].VolumeSource.ConfigMap.LocalObjectReference = corev1.LocalObjectReference{Name: o.UserSpecifiedCM}
is doing the same thing as
Option 2:
o.pgreplicamaster.Spec.Template.Spec.Volumes = []corev1.Volume{{
	Name: "pgreplica-config",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: o.UserSpecifiedCM},
		},
	},
}}
fmt.Printf("%#v\n",o.pgreplicamaster.Spec.Template.Spec.Volumes[0].VolumeSource.ConfigMap.LocalObjectReference)
Thursday, May 09, 2019
Go Exercise: How To Use yaml Unmarshal to Convert yaml into k8s client-go Data Structs
Please see the code on the github link.
Go Exercise: How To Decode yaml into k8s client-go Data Structs
Please see the code on the github link.
Some Packages Match Between Ubuntu and Centos
Symptom:
We have a requirement to convert docker images based on Ubuntu to CentOS. There are quite a few Ubuntu packages we need to match to CentOS equivalents.
After testing, below is what we have so far.
Solution:
Ubuntu ---> Centos
apt-get ---> yum
python-dev ---> python-devel
apt-get install build-essential ---> yum groupinstall 'Development Tools'
libfreetype6-dev ---> freetype-devel
libpng-dev ---> libpng-devel
libpq-dev ---> postgresql-devel
apache2 ---> httpd
libapache2-mod-wsgi ---> mod_wsgi
Monday, May 06, 2019
SSH via Proxy Socat Connect Tips
Use SOCAT in Linux
ssh -oIdentityFile=VM-PrivateKey.txt -o ServerAliveInterval=5 -o ProxyCommand='socat - "proxy:<proxy server>:%h:%p,proxyport=80"' opc@<hostname or ip address>
Use CONNECT in Git Bash
Saturday, May 04, 2019
Error: net/http: TLS handshake timeout via kubectl
Symptom:
When we try to use kubectl logs <pod>, kubectl exec -it <pod> /bin/bash, etc., we get the error below:
... net/http: TLS handshake timeout.
While TLS certificates are valid and kubectl get nodes, kubectl cluster-info are working fine
Solution:
Use the -v=8 flag to show the details of the kubectl REST API calls. We found this HTTP 500 error when kubectl contacts the API master server:
GET https://Your-Master-node:6443/api/v1/namespaces/default/pods/test-deployment-6669d6df59-vdnk5/log
I0424 04:47:05.882800 11526 round_trippers.go:408] Response Status: 500 Internal Server Error in 10100 milliseconds
..
I0424 05:28:52.001101 21195 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Get https://10.0.64.2:10250/containerLogs/default/test-deployment-6669d6df59-vdnk5/django: net/http: TLS handshake timeout","code":500}
10.0.64.2 is the private ip of the Node and 10250 is the listening port of kubelet
It turns out the TLS error is on the kubelet side of the node, even though the TLS certificates are valid.
kubectl get nodes and kubectl cluster-info are fine because the apiserver doesn't need to contact the kubelet for them, while kubectl logs needs the apiserver to contact the kubelet.
It could potentially be a bug; we upgraded k8s on the worker node to fix it.
Similar github issue link
Go Exercise: Cobra Example
Please see code on github link
Friday, May 03, 2019
Error: cannot load k8s.io/client-go/pkg/api": cannot find module providing package k8s.io/client-go/pkg/api
Symptom:
When we use the client-go API, there is a line of code: api.Codecs.UniversalDeserializer()
We often import "k8s.io/client-go/pkg/api". It was working, then suddenly we hit this error:
Error: cannot load k8s.io/client-go/pkg/api": cannot find module providing package k8s.io/client-go/pkg/api
Solution:
The reason for this error is that the code has moved to a new location. Instead of import "k8s.io/client-go/pkg/api",
we should use
import "k8s.io/client-go/kubernetes/scheme"
Then change code to use scheme instead of api
scheme.Codecs.UniversalDeserializer()
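A minimal sketch of that call in context, decoding a YAML manifest into a typed object with the deserializer from the new import path (the manifest content is only an illustration):
```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	manifest := []byte(`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
`)
	// UniversalDeserializer recognizes JSON and YAML for all types registered
	// in the default client-go scheme.
	obj, gvk, err := scheme.Codecs.UniversalDeserializer().Decode(manifest, nil, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("decoded kind:", gvk.Kind)
	if dep, ok := obj.(*appsv1.Deployment); ok {
		fmt.Println("deployment name:", dep.Name)
	}
}
```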
Sunday, April 28, 2019
Error: request.go:598:31: not enough arguments in call to watch.NewStreamWatcher have (*versioned.Decoder) want (watch.Decoder, watch.Reporter)
Symptom:
When we build a Go program, we hit this error:
k8s.io/client-go/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher have (*versioned.Decoder) want (watch.Decoder, watch.Reporter)
It appears there are updates on the request.go which have new requirements
See the details in the changelog of apimachinery.
Solution:
We need to avoid using the latest master branch of client-go; instead we can use a stable version. Fortunately Go modules address this problem; see the github go-modules docs.
So here are the steps to fix it
- $ export GO111MODULE=on
- In your project location, run go mod init to create the go.mod file
- go build cmd/test.go --- go modules will fetch the related dependencies (this replaces dep ensure)
- You will still see the error; that's OK, we fix it in the next step
- Edit go.mod and pin client-go to a known-good version (see the sketch after this list)
- In this case we use: k8s.io/client-go v0.0.0-20190425172711-65184652c889
- go build cmd/test.go --- the error should be gone
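For reference, a sketch of what the pinned entry in go.mod might look like (the module name is a placeholder; the remaining requirements are filled in by go build / go mod tidy):
```
module myfirstcontroller

go 1.12

require k8s.io/client-go v0.0.0-20190425172711-65184652c889
```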
Saturday, April 27, 2019
Error: No route to host while ICMP is working
Symptom:
We try to access a service on port 1521 in a VM running Oracle Linux 7.6. Ping (ICMP) works fine, but we get an error when we test the port via nc or telnet:
Ncat: No route to host.
Solution:
There are quite a few reasons for that:
- check your subnet security list; make sure the ports are open
- use traceroute and dig to check route table is working as expected
- check whether firewalld is running on the target VM;
fix it via systemctl stop firewalld; systemctl disable firewalld
Friday, April 26, 2019
Error: cannot refer to unexported name in Golang
Symptom:
The packages are in the right place and imported, but when we define a variable from a type in another package (var myvar a.mytype), it errors out:
Error: cannot refer to unexported name ******
package a

type mytype struct {
	a int
}

package main

import "a"

func main() {
	var myvar a.mytype
}
Solution:
The reason mytype can't be referenced is that Go requires the first letter of an exported type or function to be uppercase. See more details in the stackoverflow link. The correct code is:
package a

type Mytype struct {
	a int
}

package main

import "a"

func main() {
	var myvar a.Mytype
}
VirtualBox Guest OS Network Connects To Host VPN
Requirement:
We have a Linux guest OS running in VirtualBox on Windows 10, and a VPN running on the Windows 10 host. We have issues connecting to the network from the guest OS after the VPN starts; after the VPN shuts down, it works fine.
Solution:
We need to enable natdnshostresolver1 for the VM:
- Set the guest OS to attach to NAT and set the adapter to "Paravirtualized network"
- Shutdown Guest OS
- Run VBoxManage.exe list vms
- Run VBoxManage.exe modifyvm <uuid here or name > --natdnshostresolver1 on
- Start Guest OS
- We can disconnect and reconnect host VPN on the fly, Guest OS can pick up connections.
Wednesday, April 17, 2019
Error: Package libseccomp was not found in the pkg-config search path in go build
Symptom:
When we test usage of "github.com/seccomp/libseccomp-golang", it always errors out as below:
package libseccomp was not found in the pkg-config search path.
Perhaps you should add the directory containing `libseccomp.pc'
Solution:
It turns out we need to manually compile the libseccomp library from source, as it is not included in the normal packages.
- download released C code from github
- tar zxvf <release tar file>
- ./configure
- make
- make install
Then we have this library included in our OS, the issue is gone
Monday, April 15, 2019
A few Useful Postgresql Tips
Export data:
pg_dump -U testuser -h <db host> -p 5432 <db name> > testdump.sql
Import data (plain sql file):
psql -U testuser <db name> < testdump.sql > testImport.log
Connect to remote DB:
psql -U testuser -h <db host> -p 5432 <db name>
Connect to remote DB with password:
psql "dbname=<db name> user=postgres password=*** host=<db host> port=5432"
Grant privileges:
drop database testdb;
create database testdb;
create role testuser;
alter role testuser createdb;
alter role testuser login;
alter role testuser createrole;
Grant a user to be a superuser:
ALTER USER testuser WITH SUPERUSER;
Check db connections in Postgresql (e.g. client_addr, client_hostname):
select * from pg_stat_activity where datname = 'testdb';
Check db parameters (e.g. max_connections):
show max_connections;
Create a DB with UTF8:
CREATE DATABASE "teststg" WITH OWNER "postgres"
ENCODING 'UTF8'
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
TEMPLATE template0;
psql: FATAL: Ident authentication failed for user “postgres”
Symptom:
psql connects to the postgresql server and gets the error below:
psql: FATAL: Ident authentication failed for user “postgres”
Solution:
By default, the authentication method in pg_hba.conf is "ident". We need to replace it with "md5" to use password authentication. After that, reload postgres.
Example to allow application connections and trust local connections:
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all 0.0.0.0/0 md5
Preserve Source IP via externalTrafficPolicy
Requirement:
We often need to save and check the original source IP of clients for audit or analysis. In K8S, source NAT is enabled by default for NodePort and LoadBalancer services.
Solution:
We can set externalTrafficPolicy: Local to preserve the client source IP. More details in the K8S source IP doc.
Squid proxy logs sample output:
externalTrafficPolicy not set (10.244.1.1 is the source-NATed IP):
```
1554177505.281 0 10.244.1.1 TCP_DENIED/403 4116 CONNECT 140.84.22.11:443 - HIER_NONE/- text/html
1554177510.401 0 10.244.1.1 TCP_DENIED/403 4116 CONNECT 140.84.22.11:443 - HIER_NONE/- text/html
```
externalTrafficPolicy: Local (132.30.131.49 is the client IP):
```
1554180756.818 0 132.30.131.49 TCP_DENIED/403 3995 CONNECT 140.84.22.11:443 - HIER_NONE/- text/html
1554180984.270 0 132.30.131.49 TCP_DENIED/403 4104 CONNECT 140.84.22.11:443 - HIER_NONE/- text/html
```
Use ConfigMap To Store Http.conf For Http Service Pod
Requirement:
Lots of applications have an http service, e.g. Apache or nginx, as the frontend, and we often deploy pods of Apache or nginx for it. Take Apache httpd as an example: we often need to update httpd.conf for rewrites, redirects, etc. It is OK to build a new docker image to achieve that, but not efficient. A K8S ConfigMap can store text and binary files and can be mounted in the pod, so we can leverage that to update httpd.conf without rebuilding the docker image. We can use the same concept for the config files of other apps, e.g. nginx, ORDS, etc. A client-go sketch of the same ConfigMap step is shown after the steps below.
Solution:
- Prepare for Dockerfile and your customized httpd.conf file . Example can be found on github repo
- Once new docker image is built , we need to store httpd.conf into configmap via kubectl
kubectl create configmap httpdconfig --from-file=httpd.conf
- Prepare for deployment yaml to mount configmap in the pod. Partial yaml file is like
volumes:
- name: httpd-config
  configMap:
    name: httpdconfig
containers:
- name: httpd
  image: httpd-configmap:v3
  imagePullPolicy: IfNotPresent
  volumeMounts:
  - name: httpd-config
    mountPath: /mnt/k8s
  ports:
  - containerPort: 80
- Kubectl command of updating httpd.conf after we have new version of httpd.conf
kubectl create configmap httpdconfig --from-file=httpd.conf -o yaml --dry-run | kubectl replace -f -
- Bounce the http pod to let new pod read the new configmap
- It is the same concept and process for any other apps that have config files, e.g. ORDS, nginx, etc.
- ConfigMap supports binary files as well, e.g. wallets; see the other note.
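As mentioned above, a hedged client-go sketch of the same ConfigMap create-or-update step (names match the kubectl commands above; clientset construction and the pre-1.17 client-go API are assumptions):
```go
import (
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOrUpdateHttpdConfigMap loads httpd.conf from disk and stores it as the
// "httpdconfig" ConfigMap, creating it if absent and replacing it otherwise.
func createOrUpdateHttpdConfigMap(clientset *kubernetes.Clientset, namespace string) error {
	data, err := ioutil.ReadFile("httpd.conf")
	if err != nil {
		return err
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "httpdconfig", Namespace: namespace},
		Data:       map[string]string{"httpd.conf": string(data)},
	}
	if _, err := clientset.CoreV1().ConfigMaps(namespace).Create(cm); err != nil {
		if apierrors.IsAlreadyExists(err) {
			_, err = clientset.CoreV1().ConfigMaps(namespace).Update(cm)
		}
		return err
	}
	return nil
}
```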
Thursday, April 11, 2019
Warning: 199 APEX "HTTP request but need HTTPS" on Apache Reverse Proxy
Symptom:
We have APEX and ORDS running on port 8888, TLS/SSL enabled on the load balancer, and a reverse proxy configuration for httpd and ORDS:
ProxyPass "/apex" "http://localhost:8888/apex" retry=60
ProxyPassReverse /apex http://localhost:8888/apex
ProxyPreserveHost On
When the APEX applications are not verifying HTTPS connections, all is fine. After the APEX applications start to verify HTTPS connections, they error out even though we have TLS on the load balancer:
Warning: 199 APEX "HTTP request but need HTTPS"
Solution:
It turns out the issue is with the type of load balancer listener we created. By default it is TCP-443, i.e. the transport layer, so it has no idea whether the traffic is https or http; the connections passed to the APEX application are just TCP connections on port 443, so the APEX application does not regard them as https. We need to change the load balancer listener to HTTP-443, which is the application layer; this way the APEX application can see it is https, and the issue is gone.
In OKE service yaml file , we can add below to inform OCI LB to use "HTTP"
service.beta.kubernetes.io/oci-load-balancer-backend-protocol: "HTTP"
Tips for Apache Reverse Proxy
- It is fine from HTTPS --> HTTP
- Need extra work for HTTP --> HTTPS . SSLProxyEngine --> ON Apache link stackoverflow link
- HTTPS --> HTTPS is similar as HTTP --> HTTPS
Monday, April 08, 2019
Error :no available volume zone in Kubernetes
Symptom:
When we create a deployment/statefulset/pod in OKE (Oracle Kubernetes Engine), somehow we hit the error below:
Warning FailedScheduling 3s (x7 over 3m) default-scheduler 0/3 nodes are available: 1 node(s) didn't match node selector, 2 node(s) had no available volume zone.
Solution:
One of the reasons is that we use OKE auto-provisioning for our block volume storage, which has the constraint that the block volume needs to be in the same AD (availability domain) as the VM. If the block volume is created in a different AD, the pod can't access it. To fix that, we just need to adjust the label so the pod is created in the same AD as the block volume.
Sunday, April 07, 2019
Go Implementation For smallest-window-in-a-string-containing-all-the-characters-of-another-string
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/smallest-window-in-a-string-containing-all-the-characters-of-another-string/0
Golang playground url : https://play.golang.org/p/ao9J2Y4veUt
Github code url
Go Implementation For generate-binary-string
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/generate-binary-string/0
Golang playground url : https://play.golang.org/p/VsJ8RIurpwm
Github code url
Go Implementation For longest-k-unique-characters-substring
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/longest-k-unique-characters-substring/0
Golang playground url :https://play.golang.org/p/fINrPuXtBKA
Github code url
Go Implementation For remove-b-and-ac-from-a-given-string
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/remove-b-and-ac-from-a-given-string/0
Golang playground url : https://play.golang.org/p/Y5msCtkHzkU
Github code url
Go Implementation For Knapsack with Duplicate Items
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/knapsack-with-duplicate-items/0
Golang playground url : https://play.golang.org/p/gRnJl-ssfVM
Github code url
Go Implementation For find-largest-word-in-dictionary
This note is to add Go implementation of below problem
https://practice.geeksforgeeks.org/problems/find-largest-word-in-dictionary/0
Golang playground url : https://play.golang.org/p/3yIbRbMnnrG
Github code url
Monday, April 01, 2019
Error : /usr/bin/postgresql-setup initdb no such file or directory
Symptom:
When we build the postgresql 9.5 docker image, we hit:
/usr/bin/postgresql-setup initdb no such file or directory
Solution:
It is because initdb is not in the default PATH; by default, yum installs initdb at /usr/pgsql-9.5/bin. To fix that we add the line below to the Dockerfile:
RUN ln -s /usr/pgsql-9.5/bin/initdb /usr/bin/initdb
The full Dockerfile details of Postgresql 9.5 is on github link
Failed to Link Error When Building Postgresql Docker Image
Symptom:
When we build docker images for Postgresql 9.2/9.5 on Oracle Linux 7, we hit the error below:
failed to link /usr/share/man/man1/clusterdb.1 -> /etc/alternatives/pgsql-clusterdbman: No such file or directory
failed to link /usr/share/man/man1/createdb.1 -> /etc/alternatives/pgsql-createdbman: No such file or directory
.......
Solution:
It is because the Oracle Linux base image does not have the directory /usr/share/man/man1/ (to save space in the image). Add the line below to the Dockerfile to work around it:
RUN mkdir -p /usr/share/man/man1
The full Dockerfile details of Postgresql 9.5 is on github link