Question: how are you mounting those paths into the containers?

Asked By
liggitt
Asked At
2017-11-23 02:58:53

Found 15 possible answers.

User Answered At Possible Answer
igorc 2017-11-23 02:59:07 ah you mean something wrong with the mount maybe?
liggitt 2017-11-23 02:59:23 does the hostPath volume path match up? just want to make sure the files you are looking at at that path are the same ones the apiserver inside the container sees at that path
igorc 2017-11-23 03:00:38 @liggitt this is from the manifest; all the certs are under /srv/kubernetes:
      volumeMounts:
      - mountPath: /etc/ssl
        name: etcssl
        readOnly: true
      - mountPath: /usr/share/ca-certificates
        name: cacertificates
        readOnly: true
      - mountPath: /srv/kubernetes
        name: srvkube
        readOnly: true
      - mountPath: /var/log/kube-apiserver.log
        name: logfile
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/ssl
        name: etcssl
      - hostPath:
          path: /usr/share/ca-certificates
        name: cacertificates
      - hostPath:
          path: /srv/kubernetes
        name: srvkube
      - hostPath:
          path: /var/log/kube-apiserver.log
          type: FileOrCreate
        name: logfile
good point! need to check the Ansible playbook then
      volumeMounts:
      - mountPath: /etc/ssl
        name: etcssl
        readOnly: true
      - mountPath: "{{ k8s_certs_dir }}"
        name: srvkube
liggitt 2017-11-23 03:03:56 ok
igorc 2017-11-23 03:04:29 hosts are debian jessie 8.8, just to mention
liggitt 2017-11-23 03:05:53 are you passing --tls-sni-cert-key flags? (that's the only other bit I could see affecting anything)
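For reference, `--tls-sni-cert-key` makes the apiserver serve an alternate certificate to clients whose TLS SNI matches the listed names, and the flag can be repeated once per cert/key pair. A minimal sketch of how it would appear in a static pod spec (the file paths and domain name here are hypothetical):

```yaml
# Hypothetical fragment of a kube-apiserver static pod spec.
# The part after the colon is an optional list of domain patterns;
# without it, names are derived from the certificate itself.
containers:
- name: kube-apiserver
  command:
  - /usr/local/bin/kube-apiserver
  - --tls-cert-file=/srv/kubernetes/server.crt
  - --tls-private-key-file=/srv/kubernetes/server.key
  - --tls-sni-cert-key=/srv/kubernetes/other.crt,/srv/kubernetes/other.key:api.example.com
```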
igorc 2017-11-23 03:06:43 no, I'm not. I spent 2 hours but could not figure it out by myself. thanks for looking into this, by the way
liggitt 2017-11-23 03:20:39 are you sure the apiserver containers are getting restarted? if you inspect the running containers, how long do they say they've been running? there's no caching of certs, so if it really is a new container, there's no place it could be stashing the old cert
igorc 2017-11-23 04:38:27 @liggitt sorry, got sidetracked here in the office. they show as running for 1h. for failover in case the first one is down, does this support pointing to multiple LDAP servers? @liggitt I was looking into https://github.com/go-ldap/ldap that would actually explain it @liggitt it is possible old pods were running though, since I could not connect due to the invalid cert and did not check in docker, so ... might be something related with my setup, let's see if someone else reports something similar. nothing stored in etcd maybe? the cert I was seeing was dated from the 19th of this month
liggitt 2017-11-23 05:25:22 It doesn't have connection pooling or failover. Could build those on top of that lib
igorc 2017-11-23 05:57:12 ok, thanks. I have to terminate them in docker. I think I see the issue with the cluster ... the pods I kill in k8s are not actually killed in docker, and they keep running while k8s shows them as new ones
kamal_gupta_ 2017-11-23 06:54:08 ya it is. @liggitt please help me, why is it not working? I did the same and it didn't work for me. here I made a typing mistake
sorenmat 2017-11-23 10:09:22 I have an issue where I would like to create a role that can do pretty much anything *but* view secrets, but I can’t seem to get that working :disappointed: Ok, got the roles to work. But I’m still able to access them through kubernetes-dashboard
But I’m still able not only to list secrets but also get the content of them.. Any ideas on what I could be doing wrong?
I’ve created something like this:
  - apiGroups:
    - ""
    resources:
    - secrets
    verbs:
    - list
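Worth noting: Kubernetes RBAC rules are purely additive (there is no deny rule), so an "everything but secrets" role has to enumerate the allowed resources and simply leave `secrets` out. A minimal sketch, with a hypothetical name and a deliberately short resource list:

```yaml
# Hypothetical Role sketch: grant read access by listing resources
# explicitly and omitting "secrets" -- RBAC cannot subtract permissions.
apiVersion: rbac.authorization.k8s.io/v1   # use v1beta1 on pre-1.8 clusters
kind: Role
metadata:
  name: almost-everything    # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]   # note: no "secrets"
  verbs: ["get", "list", "watch"]
```

Note also that kubernetes-dashboard commonly talks to the API using its own service account, so what it can show reflects that service account's bindings rather than the logged-in user's.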
munnerz 2017-11-23 13:21:11 hey - I’m having some issues with api aggregation and garbage collection. I think they are related to RBAC. Right now, if I kubectl delete ns something, the namespace hangs in the Terminating state (until I uninstall my APIService resource). Worth noting that the APIService does *not* reference a pod within the namespace being deleted. I’ve noticed this in my custom apiserver’s logs, however, almost immediately after the DELETE request is made by the garbage collector:
I1123 12:10:11.588] I1123 12:05:11.524439       1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1 ","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/apis/navigator.jetstack.io/v1alpha1","verb":"get"},"user":"system:serviceaccount:kube-system:namespace-controller","group":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"]},"status":{"allowed":false}}
would the recommendation here be to grant `system:authenticated` and `system:unauthenticated` permission to GET `/apis/navigator.jetstack.io/v1alpha1`? or should I scope that group more tightly
This looks like the namespace controller attempting to discover resources within navigator.jetstack.io/v1alpha1, presumably in order to determine whether there are any left in that particular namespace? I am not too certain
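If granting discovery access is the route taken, a minimal sketch of what that could look like (resource names are hypothetical; `nonResourceURLs` rules are only valid in a ClusterRole):

```yaml
# Hypothetical ClusterRole granting GET on the aggregated API's
# discovery path, bound here to all authenticated users.
apiVersion: rbac.authorization.k8s.io/v1   # use v1beta1 on pre-1.8 clusters
kind: ClusterRole
metadata:
  name: navigator-discovery
rules:
- nonResourceURLs: ["/apis/navigator.jetstack.io/v1alpha1"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: navigator-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: navigator-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```

A tighter alternative would be to bind only the `system:serviceaccount:kube-system:namespace-controller` subject seen in the SubjectAccessReview, rather than the whole `system:authenticated` group.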
argonqq 2017-11-23 15:22:44 Hi! I have a new issue with permissions. When I try to deploy https://raw.githubusercontent.com/elastic/beats/6.0/deploy/kubernetes/filebeat-kubernetes.yaml I received the following error:

configmap "filebeat-config" created
configmap "filebeat-prospectors" created
daemonset "filebeat" created
clusterrolebinding "filebeat" created
serviceaccount "filebeat" created
Error from server (Forbidden): error when creating "filebeat-kubernetes.yaml": clusterroles.rbac.authorization.k8s.io "filebeat" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&{kube admin [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

then I tried to give my api account some new permissions to interact with the cluster:

kubectl create clusterrolebinding kube-cluster-admin-binding --clusterrole=cluster-admin --user=kube

Now I receive the following:

Error from server (Forbidden): error when creating "filebeat-kubernetes.yaml": clusterroles.rbac.authorization.k8s.io "filebeat" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&{kube admin [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

Maybe someone has got an idea :sweat_smile:
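The "attempt to grant extra privileges" error comes from RBAC's privilege-escalation prevention: a user may only create a role containing permissions they already hold. The clusterrolebinding above is meant to grant those permissions; its YAML equivalent would look roughly like this (assuming the client really authenticates as user `kube`):

```yaml
# YAML equivalent of:
#   kubectl create clusterrolebinding kube-cluster-admin-binding \
#     --clusterrole=cluster-admin --user=kube
apiVersion: rbac.authorization.k8s.io/v1   # use v1beta1 on pre-1.8 clusters
kind: ClusterRoleBinding
metadata:
  name: kube-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube
```

Since the second error still shows ownerrules=[], the binding's subject may not exactly match the username the apiserver sees (user=&{kube admin ...} in the message), which would be worth checking.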
