Question: Is your pod stateful?

Asked By
devinwatson
Asked At
2017-10-09 17:09:49

Found 14 possible answers.

User Answered At Possible Answer
amiko 2017-10-09 17:09:52 No. Could it be the way that I wrap everything in /bin/bash -c 'exec command1 command2 ...'? It makes me think there's something I'm missing in Kubernetes. It also runs for about 16 hours. It seems like it's available in the events for as long as the pod runs, but sometimes that pod won't exist anymore. Is there a way to get the termination message? I'm thinking Python isn't finishing the logging. It's just a Job -> Pod.
mauilion 2017-10-09 17:17:48 That would give the log failure that you see, but only if the pod were evicted or killed; if it were killed you'd see it there. Or on the Job? Maybe get events for that pod? bash would die before Python did and reap the children hard.
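
A minimal sketch of pulling those termination details with kubectl, assuming a pod named my-job-pod (hypothetical); by default the termination message is whatever the container wrote to /dev/termination-log:

    # Events for the pod are listed at the bottom of describe output (pod name is hypothetical)
    kubectl describe pod my-job-pod

    # Last terminated container state: exit code, reason, and termination message
    kubectl get pod my-job-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

Note that events are only kept for a limited TTL (one hour by default on the API server), which fits the observation that they eventually disappear; once the pod object itself is deleted, its status is gone as well.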
andrefcruz 2017-10-09 17:22:07 @amiko I realize that. So do I have to mount that binary from the host? Is there a list of these binaries so that I know what to mount? kubelet seems to need it too...
amiko 2017-10-09 17:25:33 Get into the container and search for it: find / -name modprobe. What env are you on? AWS? Google? Custom?
andrefcruz 2017-10-09 17:39:28 @amiko What do you mean? The binary is not in the container; that's why it seems I need to mount it from the host.
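
A minimal sketch of mounting a host binary into a container with a hostPath volume, which is the usual answer when the binary exists on the host but not in the image; the path /sbin/modprobe and all names here are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: needs-modprobe          # hypothetical name
    spec:
      containers:
      - name: main
        image: example/image:latest # hypothetical image
        volumeMounts:
        - name: host-modprobe
          mountPath: /sbin/modprobe
          readOnly: true
      volumes:
      - name: host-modprobe
        hostPath:
          path: /sbin/modprobe      # assumed location on the CoreOS host

Note that modprobe on its own is rarely enough: actually loading kernel modules also needs the host's /lib/modules mounted in, and typically a privileged securityContext.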
amiko 2017-10-09 17:39:42 What are you running? Does anyone have a better way of running a pod and terminating it when it's completed, other than using Jobs? It seems like this is not 100% working. How do you run your Kubernetes?
andrefcruz 2017-10-09 17:53:31 @amiko I'm running Kubernetes on CoreOS VMs in Azure.
jpweber 2017-10-09 18:02:32 @amiko You could have the task exit so the container dies, and have the pod set to not restart. Not the cleanest, but it can work.
amiko 2017-10-09 18:02:53 I need the pod to be able to restart: if it crashed, it can continue from the last checkpoint.
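
A minimal sketch of a Job that lets the pod finish when the task exits cleanly but restarts the container after a crash, so the process can pick up from its last checkpoint; the names and entrypoint are hypothetical:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: checkpointed-task       # hypothetical name
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # restart on crash; the Job completes when the container exits 0
          containers:
          - name: worker
            image: example/task:latest     # hypothetical image
            command: ["python", "run.py"]  # assumed entrypoint; resumes from its own checkpoint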
jpweber 2017-10-09 18:04:50 You could go down the route of having your job, or something else, annotate the pod when it's complete. Then have some cron job that looks for pods with that annotation and terminates them.
amiko 2017-10-09 18:05:05 how?
estaples 2017-10-09 18:05:12 Hey room, I'm having an issue relating to K8s, AWS security groups, and port 6443. If I change an inbound rule on my K8s security group FROM allowing all inbound traffic on the kubectl port, 6443, TO intra-security group communication and my ip only, then what happens is I’m no longer able to hit my ingress controller on port 30080 anymore and my ELB health checks start failing. I’m wondering if anyone here might have an idea as to what’s going on? The logs are keeping quiet on this issue.
amiko 2017-10-09 18:06:05 You mean run it in a replication controller and have another job that sets the annotation on that pod? So basically a controller.
jpweber 2017-10-09 18:06:40 @amiko Yeah. Or have your code that is running in the pod set the annotation on itself when the job is complete.
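
A minimal sketch of that annotate-then-reap pattern, assuming a hypothetical annotation key example.com/job-complete and jq available wherever the cleanup runs; annotations cannot be used as label selectors, so the reaper filters client-side:

    # When the work finishes, mark the pod ($POD_NAME can come from the downward API)
    kubectl annotate pod "$POD_NAME" example.com/job-complete=true

    # Cleanup pass (run periodically, e.g. from a cron job): delete annotated pods
    kubectl get pods -o json \
      | jq -r '.items[]
               | select(.metadata.annotations["example.com/job-complete"] == "true")
               | .metadata.name' \
      | xargs -r kubectl delete pod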
