Question: I assume those are all host paths?

Asked By
oftaylor
Asked At
2018-01-18 21:51:25

Found 15 possible answers.

User Answered At Possible Answer
clewis 2018-01-18 21:51:53 correct I edited the manifest again, it’s trying again. Now I have one running. 3 are up. Whew, pod counts are changing. I have log entries for horizontal.go again. Ok, I got everything restarted with -v=4. Hopefully I’ll have more info if it happens again. Thanks for the help @oftaylor and @directxman12
purduemike 2018-01-18 23:42:19 Is there some development going on for buffering the number of nodes in your cluster? For example, I want to make sure I always have 20% capacity available on my cluster. The current scale-up process is much too slow (create instance; add to ELB; schedule pods on new instances).
maciekpytel 2018-01-19 09:17:22 @bwallace I don't see any way to make CA use the particular strategy you want out of the box; if you want to do some coding you can just write a custom expander - they're pretty easy to write (for example, the least-waste implementation: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/waste/waste.go ). Also watch out for the --balance-similar-node-groups flag: if instances in your instance group are identical (same instance type and same set of labels) and this flag is set to true, CA will try to keep their sizes identical as much as possible, which is not what you want in this case. @mike298 There were some attempts to do that, but no success so far. Turns out it's pretty hard to define what '20% capacity' actually means given how complex Kubernetes scheduling is. Is it just 20% extra resources (CPU, mem)? That will not help you if it's too fragmented, or if your pods use pod affinity or hostports or ... And even if we decide it's just 20% of CPU and mem - which node group should actually have this extra capacity? How do we make that decision given the different sets of taints and labels on different nodes? And what about GPUs and such? Pod priority and preemption ( https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ ) may help you - create a bunch of low-priority placeholder pods and CA will keep space for them; as soon as something higher priority shows up, the placeholders will get evicted to make space for it, and in the background CA will scale up to make space for the placeholders again.
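To make the placeholder-pod idea above concrete, here is a minimal sketch of the pattern: a negative-priority PriorityClass plus a deployment of pause pods that only reserve headroom. All names, the replica count, and the resource requests are illustrative, and the PriorityClass apiVersion depends on your Kubernetes version (the feature was still alpha/beta around 1.8-1.10).

```yaml
# Illustrative headroom reservation: CA keeps room for these pods; real workloads
# with a higher priority preempt them, and CA then scales up to recreate the headroom.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: placeholder
value: -10               # lower than any real workload
globalDefault: false
description: "Placeholder pods that only reserve headroom."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-placeholder
spec:
  replicas: 5            # tune to roughly the headroom you want
  selector:
    matchLabels:
      app: capacity-placeholder
  template:
    metadata:
      labels:
        app: capacity-placeholder
    spec:
      priorityClassName: placeholder
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # does nothing, just holds the reservation
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
```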
bwallace 2018-01-19 17:58:32 Great tips! Thank you… I have a semi-solution that seems to be working, but I was definitely thinking about the custom expander route. I may end up going that way…
kyleg 2018-01-19 20:43:32 If I use preemptible nodes on GKE, does the autoscaler have any concept of that? Like, if I have an on-demand pool and I lose my preemptible pool, so the workload switches to on-demand, once the preemptible pool comes back will the autoscaler know to move the workloads back to preemptible and scale down the on-demand pool? Or, if a node comes up that satisfies preferredDuringSchedulingIgnoredDuringExecution, will it know to move the workload over to it? That would give me the result I want.
banjara 2018-01-22 07:36:46 Hello friends, I would really appreciate it if someone could help me. I am setting up a Kubernetes cluster on EC2 and I have the following requirements: 1. Support multiple tenants with predefined upper limits (resource utilization). 2. Scale the cluster up and down based on current usage. I know point 1 can be achieved with resource quotas ( https://kubernetes.io/docs/concepts/policy/resource-quotas/ ); the resource-quotas page also talks about scaling up the cluster by writing a custom controller - can I simply use that for scaling down as well? Kubernetes provides cluster-autoscaler ( https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler ) for autoscaling the cluster - does it work when I have multiple namespaces? How does scale-down impact the resource quota of a namespace?
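As a side note on point 1 in the question above, a per-namespace ResourceQuota is a plain namespaced object. A minimal sketch, with made-up tenant names and limits:

```yaml
# Illustrative per-tenant quota; one ResourceQuota per tenant namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```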
maciekpytel 2018-01-22 10:33:21 @kyleg CA has a notion of preemptible nodes in the sense that, all things being equal, it will scale up a preemptible nodepool rather than an on-demand one. However, currently CA only adds nodes if there is a pod that can't schedule and deletes nodes if they're underutilized. Also (mostly for performance reasons) CA only cares about scheduler predicates, not priorities - so it respects requiredDuringSchedulingIgnoredDuringExecution, but completely ignores preferredDuringSchedulingIgnoredDuringExecution. In general it can't optimize the cluster in the sense of migrating already-scheduled pods to more efficient node shapes ( https://github.com/kubernetes/autoscaler/issues/389 ).
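To illustrate the distinction above, here is a sketch of the two nodeAffinity variants in a pod spec targeting GKE preemptible nodes (the label key is the one GKE sets on preemptible nodes; both rules use the same key here purely to show the two fields side by side). CA simulates the required block because it is a scheduler predicate, while the preferred block is a scheduler priority and plays no part in its decisions.

```yaml
# Hypothetical pod-spec fragment.
affinity:
  nodeAffinity:
    # Hard requirement - a scheduler predicate, so CA takes it into account:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-preemptible
          operator: In
          values: ["true"]
    # Soft preference - a scheduler priority, so CA ignores it entirely:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: cloud.google.com/gke-preemptible
          operator: In
          values: ["true"]
```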
kerkhove.tom 2018-01-22 10:45:45 Hi, I'm a Kubernetes rookie but I'm looking for some guidance on where to start with autoscaling. Any recommendations for introductions or how to start? I've read https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale which is straightforward, but I'm a bit confused about the custom metric part. I'd like to use the Horizontal Pod Autoscaler with a custom metric (based on Azure Monitor). I found this but I'm not sure in what version it is available and where to start: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md For example, is that API exposed externally, or should I consume it as part of a container (which I doubt)?
maciekpytel 2018-01-22 11:56:52 @kerkhove.tom The custom metrics API is consumed by HPA; you need to provide it. This is done by running a "custom metrics adapter" that translates the API into queries against the underlying metrics system. AFAIK the only currently existing adapters are for Stackdriver and Prometheus.
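For context, here is a rough sketch of what the HPA side looks like once an adapter serves the metric. The metric name, target value, and deployment name are made up, and the exact apiVersion depends on the cluster (autoscaling/v2beta1 at the time of this thread).

```yaml
# Hypothetical HPA scaling a deployment on a custom per-pod metric
# exposed through a custom metrics adapter.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: queue_messages_per_pod   # whatever metric your adapter exposes
      targetAverageValue: "30"
```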
kerkhove.tom 2018-01-22 12:02:38 That's what I found indeed, but can you write your own? That way I could write an "Azure Monitor" adapter. Also, can I scale replicasets via the REST API instead? I only find ways to do that via a replication controller, which is the old way to do it.
mhausenblas 2018-01-22 12:11:56 @maciekpytel @directxman12 I’ll have to send regrets for our today’s meeting, got a conflict
maciekpytel 2018-01-22 12:14:33 @kerkhove.tom you can scale replicasets and deployments via REST using the /scale subresource (you can try it out with kubectl scale ). @mhausenblas no problem, I know @mwielgus won't be able to join either (I'm still planning to attend though).
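For reference, the /scale subresource mentioned above takes and returns a small Scale object, roughly like the sketch below. The resource name is made up, and the exact group/version of the endpoint depends on the cluster version (older clusters served replicasets under extensions/v1beta1 rather than apps/v1).

```yaml
# Body you PUT/PATCH to e.g.
#   /apis/apps/v1/namespaces/default/replicasets/my-rs/scale
# (kubectl scale replicaset my-rs --replicas=5 does the same thing)
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: my-rs
  namespace: default
spec:
  replicas: 5
```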
mhausenblas 2018-01-22 12:16:54 kk thanks
kerkhove.tom 2018-01-22 12:21:16 @maciekpytel Thanks for the tip, I just found it as well. Do you know if writing a custom adapter is recommended/easy to do?
domhauton 2018-01-22 12:37:25 Hey! Is there a recommended way to stop a drained node from being deleted in 0.6.X (we’re running k8s 1.7.X)? (Use case: we want to drain nodes for debugging and the CA keeps deleting them. I see you added cluster-autoscaler.kubernetes.io/scale-down-disabled in 1.0. Was there a workaround before?)
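For reference, the 1.0+ annotation mentioned in the question goes on the Node object; on clusters where it is supported it looks like this (the node name is illustrative):

```yaml
# Equivalent of:
#   kubectl annotate node <node-name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```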

Related Questions