Question: seh: well, kinda hard to have a different backend pick up any state from the other backend that is now dead, given that it's long-lived. seh: are you looking for something like, when the backend dies, have something close the connection so the client makes a new one to a different backend?

Asked By
dcbw
Asked At
2017-08-31 16:29:58

Found 15 possible answers.

User Answered At Possible Answer
seh 2017-08-31 17:01:22 Yes, that’s what I was expecting would happen, but in this case, the backend isn’t _dead_ per se; perhaps it failed its readiness probe, or is no longer selected by the _Service_. Sending requests over a long-lived connection through a _Service_ node port to a server that gets taken out of the _Endpoints_ backend set, the server process is still running, and keeps receiving HTTP requests on the open connection, even though it’s no longer part of the backend set. The documentation at https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods alludes to a different expected behavior (step 7): > (simultaneous with 3), Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
More interesting findings follow:
• If the server is on the _same host_ as the node receiving the packets to its node port, the connection is *not* terminated.
• If the server is on a _different host_ from the node receiving the packets to its node port, the connection is terminated, or at least the requests don’t make it to the server.
This is what I hear from the crew testing this internally; I have not witnessed it with my own eyes yet. But then it turned out that that’s not a consistent result, and sometimes the traffic stops in about a second or two, regardless of the relationship between the node receiving the traffic on the node port and the node hosting the destination server. We observed the _Endpoints_ object change, but then had packets continue to flow for about 15 more seconds, but not always. How strange.
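A minimal Go sketch for reproducing the test seh describes: it holds one keep-alive HTTP connection open through a node port and sends a request every second, so you can watch whether requests keep flowing after the pod drops out of the Endpoints object. The node IP and node port below are hypothetical placeholders, not values from this discussion.

// keepalive_probe.go: send periodic requests over a single long-lived
// HTTP connection to a Service node port. If requests keep succeeding
// after the backend is removed from Endpoints, the established TCP
// connection is still being routed to the old pod.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Keep-alives plus a single idle connection slot means the
	// transport reuses one TCP connection for every request below.
	client := &http.Client{
		Transport: &http.Transport{
			MaxIdleConnsPerHost: 1,
			IdleConnTimeout:     5 * time.Minute,
		},
		Timeout: 2 * time.Second,
	}

	const url = "http://192.0.2.10:30080/" // hypothetical node IP and node port

	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s request failed: %v\n", time.Now().Format(time.RFC3339), err)
		} else {
			io.Copy(io.Discard, resp.Body) // drain the body so the connection returns to the pool
			resp.Body.Close()
			fmt.Printf("%s got %s\n", time.Now().Format(time.RFC3339), resp.Status)
		}
		time.Sleep(time.Second)
	}
}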
claytonc 2017-08-31 18:18:45 @seh no it won’t (or at least, it can’t be the default), because otherwise servers can’t do graceful shutdown. We don’t want to close the connection when backends drop out of endpoints. @seh what network provider?
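A sketch of the graceful-shutdown pattern claytonc is referring to: Kubernetes removes the pod from Endpoints and sends SIGTERM, and because kube-proxy leaves established TCP connections intact, the server can finish in-flight requests before exiting. The handler and timeout below are illustrative assumptions, not from the discussion.

// graceful.go: drain in-flight requests on SIGTERM instead of dying
// abruptly. http.Server.Shutdown stops accepting new connections and
// waits for active requests to finish (available since Go 1.8).
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	// Wait for the SIGTERM the kubelet sends during pod termination.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Drain existing connections, bounded by the termination grace period.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}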
dcbw 2017-08-31 18:32:57 @claytonc looks like that only happens for TCP though, which makes sense. UDP endpoints get their conntrack entries removed when the endpoint goes away, I think.
mauilion 2017-08-31 18:54:24 This is the problem that circuit breaker code fixes. It's tricky to solve lower in the stack.
seh 2017-08-31 18:56:32 Hey, @claytonc , sorry for the delay. I was stuck on a phone call. We are using Calico in SoftLayer (via the IBM Bluemix Container Service).
mauilion 2017-08-31 18:56:54 https://lyft.github.io/envoy/docs/configuration/cluster_manager/cluster_circuit_breakers.html
claytonc 2017-08-31 19:50:48 @dcbw ah yeah, would make sense
harshals 2017-09-01 00:35:58 I have a k8s cluster on digitalocean where I'm trying to create an ingress controller but getting following error:
Server Version: version.Info {Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Client Version: version.Info {Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
$ kubectl version Ingress Controller : https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
F0901 00:34:22.144113       5 launch.go:127] ✖ It seems the cluster it is running with Authorization enabled (like RBAC) and there is no permissions for the ingress controller. Please check the configuration
I0901 00:34:22.141780       5 launch.go:291] Running in Kubernetes Cluster version v1.7 (v1.7.5) - git (clean) commit 17d7182a7ccbb167074be7a87f0a68bd00d58d97 - platform linux/amd64
I0901 00:34:22.123154       5 launch.go:278] Creating API client for  
I0901 00:34:22.122807       5 launch.go:112] Watching for ingress class: nginx
I0901 00:34:22.122743 5 launch.go:109] &{NGINX 0.9.0-beta.12 git-cda42f9 https://github.com/kubernetes/ingress }
aledbf 2017-09-01 00:38:51 @harshals if your cluster has RBAC enabled you need to use this example https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx
harshals 2017-09-01 00:41:40 @aledbf thanks, I will try this out. [Replying to aledbf, August 31st, 2017 5:38 PM: @harshals if your cluster has RBAC enabled you need to use this example https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx] Worked perfect.
squeed 2017-09-01 11:07:49 @christx2 , you can find @bboreham 's slides at https://www.slideshare.net/weaveworks/introduction-to-the-container-network-interface-cni
radhikapc 2017-09-01 15:27:47 Happy KubeFriday :tada: Peeps please do not forget to add your release notes and check off respective SIG checkbox. Happy CodeFreeeeezzzzzzz :snowflake: :heart: #releaseTeam
jmesser81 2017-09-01 16:36:13 @all As you may know, we have a CNI network plugin for Windows now, which we would like to merge under the https://github.com/containernetworking/plugins path. We've tried reaching out to #sig-node for guidance on the best way to do this but have not received a response yet. Any suggestions?
dcbw 2017-09-01 16:46:33 jmesser81: hi! Do you mean you have a CNI plugin that you'd like to get merged into the CNI project itself? The CNI project is a separate one from Kubernetes, but some of the CNI maintainers (myself, @bboreham , @squeed ) also hang out here.
chancez 2017-09-01 20:33:31 @aledbf for what purpose does nginx-ingress need to be able to list nodes?

Related Questions