Question: exactly, so those should route fine?

Asked By
Asked At
2017-09-20 12:19:38

Found 15 possible answers.

User Answered At Possible Answer
roffe 2017-09-20 12:20:03 Anyone familiar with routing that can answer that? @andor the controller manager is using the cluster CIDR as well, so it might need to be restarted after you change it. I still think you will have service disruptions until all components using the old cluster CIDR have been restarted and kube-proxy has rebuilt the iptables rules.
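The restart roffe describes might look like this; a sketch only, since unit names and the kube-proxy deployment method (systemd unit vs. DaemonSet) vary by install, and the `k8s-app=kube-proxy` label is the kubeadm convention, not universal:

```shell
# Assuming systemd-managed control-plane components; unit names vary by installer
systemctl restart kube-controller-manager kube-proxy

# Or, if kube-proxy runs as a DaemonSet (kubeadm-style label assumed):
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```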
andor 2017-09-20 12:24:54 I’m not passing it to the controller manager, only the API server
roffe 2017-09-20 12:26:19 ok, I do, since I had problems with the controller manager not recognizing my cluster CIDR, which led to some weird problems. I have to check if I pass it to the apiserver as well. I also pass it to kube-proxy.
andor 2017-09-20 12:27:23 I don’t even see a flag for the service CIDR on kube-proxy
roffe 2017-09-20 12:27:39 service-cluster-ip-range was it for the apiserver. @roffe uploaded a file: Untitled and commented: @andor the field is clusterCIDR; I use the config file for kube-proxy
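To make the two settings being confused here concrete: `--service-cluster-ip-range` on the apiserver covers Service ClusterIPs, while `clusterCIDR` in the kube-proxy config covers Pod IPs. A sketch, with example CIDR values and the `kubeproxy.config.k8s.io/v1alpha1` API version assumed (older releases used a `componentconfig` group):

```shell
# kube-apiserver flag: the *Service* IP range (ClusterIPs), not the Pod range
kube-apiserver --service-cluster-ip-range=10.96.0.0/12

# kube-proxy config file: clusterCIDR is the *Pod* range, which kube-proxy
# uses to decide which traffic needs masquerading
cat <<'EOF' > kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: 10.244.0.0/16
EOF
```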
andor 2017-09-20 12:28:52 that’s the CIDR of pods, not services
roffe 2017-09-20 12:29:10 yeah it is, sorry, damn. I had to go over the deploy scripts and manuals again; it’s been several months since I set that part up. Hmm, now I’m getting uncertain.
erhudy 2017-09-20 18:41:37 Has anyone ever witnessed a scenario where cross-host container traffic stops working on a particular host and tcpdumping cni0 shows absolutely no traffic? From within the broken host it’s possible to ping pods by their container IPs, but not from any other host in the cluster. We’ve been looking at this problem for days now, and the only way we’ve found to resolve it is completely rebuilding the host. Rebooting doesn’t fix it, and neither does kubeadm reset plus deleting the node from the API server and then re-adding it.
dcbw 2017-09-20 18:58:47 erhudy: if you tcpdump the veths going into the bridge, do you see traffic from containers?
erhudy 2017-09-20 19:02:25 If I exec into a container, ping out of it to the gateway IP, and tcpdump on the veth, I do see traffic, yes. Whatever eldritch magicks bind flannel.1 and cni0 together seem to have fallen asunder. If I try to ping that same container from another host, I see the incoming ping requests on flannel.1, and I see that traffic on cni0 as well, but it never makes it to the container network.
dcbw 2017-09-20 19:06:48 erhudy: that is typically iptables rules and routing table rules
erhudy 2017-09-20 19:07:40 We’ve been looking over all the iptables rules in every table and haven’t found anything, but I’ll keep looking. The routing tables look correct and there are no policy-based routing rules.
dcbw 2017-09-20 19:08:01 If you run iptables-save and then pastebin that, I can take a quick look and see if anything looks odd.
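The tracing flow dcbw and erhudy describe can be sketched as the following commands, run on the broken host while pinging a pod there from a healthy host; the veth interface name is a host-specific placeholder:

```shell
# Follow the packet path hop by hop on the broken host:
tcpdump -ni flannel.1 icmp      # does decapsulated overlay traffic arrive?
tcpdump -ni cni0 icmp           # does it make it onto the bridge?
tcpdump -ni vethXXXXXXXX icmp   # does it reach the pod's veth? (name varies)

# iptables-save dumps every table (filter, nat, mangle, raw), not just filter/nat:
iptables-save > /tmp/iptables-dump.txt
```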
erhudy 2017-09-20 19:09:25 any tables to care about besides filter/nat?
cburdick 2017-09-20 19:32:57 Tried asking this in users, but I’ll try here: does anyone know how the cluster-cidr on the controller manager relates to the pod-cidr on the kubelet? The kubelet manual says that if it’s in cluster mode, that flag isn’t used.
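The relationship cburdick asks about: with node CIDR allocation enabled, the controller manager carves a per-node podCIDR out of its `--cluster-cidr`, and that allocation (stored on the Node object) is what takes effect; the kubelet's own `--pod-cidr` only applies in standalone mode. A sketch with example values and a placeholder node name:

```shell
# kube-controller-manager: allocates each node a podCIDR out of --cluster-cidr
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16

# Inspect what a node was actually assigned (<node-name> is a placeholder):
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'
```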
