Question: What would be the best approach for a persistent storage for multiple docker hosts so that dockers can be cross scheduled?

Asked By
dinodaniel
Asked At
2017-08-31 08:53:09

Found 15 possible answers.

User Answered At Possible Answer
jeremyolliver 2017-08-31 10:26:03 Do you mean storage that’s not specific to the host, so you can run the container on whichever host is up, or do you mean co-ordination?
haani-niyaz 2017-08-31 10:32:37 If there is a directory such as /var/app/data with stateful information about a container, and in that same directory there is a /var/app/data/config sub-directory that needs to be shared across a container cluster, how would you create the managed volumes for data and config?
dinodaniel 2017-08-31 10:57:56 @jeremyolliver for storage that's not specific to the host, I guess we could bring in an NFS mount. How is it with co-ordination?
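One way the NFS idea mentioned above can be wired up is with a named volume backed by the built-in `local` driver in NFS mode, so any host that can reach the NFS server mounts the same data. This is a minimal sketch; the server address, export path, and image name are placeholders, not from the thread:

```shell
# Create a named volume backed by an NFS export (hypothetical host/export).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nfs.example.com,rw \
  --opt device=:/exports/app-data \
  app-data

# Run this on each docker host; a container on any of them sees the same data.
docker run -d -v app-data:/var/app/data my-image
```

Note the volume must be created on each host (the definition is host-local even though the data is shared), which is one reason co-ordination systems handle this more gracefully.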
while1malloc0 2017-08-31 13:27:40 Not sure if this is better suited for #aws , but I'm curious if anyone here is using ECS in production, and if so what their testing story is around it. Kubernetes has minikube, which is pretty attractive for the early stages of 'running containers in production' migration, and ECS doesn't seem to have a similar facility out of the box so I was curious if people were using something homegrown/OSS.
pwelling 2017-08-31 17:57:01 @while1malloc0 I'd recommend using all the tools in the AWS family if you're going with ECS; it'll make your life a whole lot easier. And it's certainly a better option than managing Kubernetes outside GCP, imho. Why complicate your life when you can get the same benefits for a lower human cost? I have experience using ECS in prod. With ECS, I tied it to a build/deploy pipeline to automatically promote images all the way up to staging/qa (using something like docker-fastpath), and pushed to prod manually using Jenkins and CloudFormation. If it comes down to it and you need to migrate or resize or whatever the case might be, there are ways of ensuring zero downtime in ECS by leveraging container draining strategies in conjunction with ALB, SNS and more CloudFormation templates.
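The manual prod push described above could look roughly like the following, assuming (hypothetically) a CloudFormation stack whose ECS task definition takes the image tag as a stack parameter; the stack and parameter names are illustrative, not from the thread:

```shell
# Promote a tested image tag to prod by updating only the stack parameter.
# CloudFormation then rolls the ECS service to the new task definition.
aws cloudformation update-stack \
  --stack-name my-app-prod \
  --use-previous-template \
  --parameters ParameterKey=ImageTag,ParameterValue=v1.2.3

# Wait for the rollout to settle before calling the deploy done.
aws cloudformation wait stack-update-complete --stack-name my-app-prod
```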
while1malloc0 2017-08-31 18:00:27 Yeah, that's kind of the theme of a lot of ongoing discussions at $employer. None of us really wants the ops burden of managing a k8s cluster on AWS, and we're not moving off of AWS any time soon. Thanks for the insight.
pwelling 2017-08-31 18:02:16 np
jeremyolliver 2017-08-31 23:57:28 There are a few options for this, e.g. using cloud-specific volume storage like EFS on AWS, and co-ordination systems like kubernetes have built-in support to make managing that easier. There's also improving ALB integration with kubernetes, e.g. https://github.com/coreos/alb-ingress-controller However, I should point out, especially if you're just starting with docker, that most docker use-cases are for stateless applications and don't require storing data permanently like this. If this is your first app running under docker, I'd strongly suggest trying to run a stateless container first, before tackling one with data you need to persist (e.g. a web app, not a db)
alwaysonnet 2017-09-01 00:53:55 Any idea how to resolve this issue? ERROR: stat /var/lib/docker/overlay/5c2d4b7d456bd3ff59cddacf5d31ad5e537f6c1cfbda6b708e11bd01314c8139: no such file or directory
wwsean08 2017-09-01 05:05:13 I know it's a bit late, but there's an actual missing file. Did you mess with the files under /var/lib/docker/*? Also, what are you trying to do when you get this error?
alwaysonnet 2017-09-01 12:56:14 I stopped docker, removed /var/lib/docker and then started it back up. Doing docker-compose pull will then have to re-download everything. It's time-consuming, but it fixed the docker issues.
mattm 2017-09-01 13:50:59 not sure if this is the right place to ask, but I was wondering if anyone else has run into an issue I am having. Basically I have a proxy server running as a container. It's on the same network as some other services. The services don't expose their port and everything goes through the proxy server. The proxy is setup to resolve the ip and port on the docker network. I can get to the services just fine from my host machine or anything on the host network. So far so good. The issue lies when i want to communicate from one docker service to another. I want to be able to use the proxy internally as well, using the dns record that's used for the host network. For instance, nginx, jenkins, sonarqube. I want jenkins to hit sonarqube with sonarqube.mynetwork.com instead of linking them together. The problem is that, sonarqube.mynetwork.com resolves to the external ip in the host network and obviously not the internal one. Is there a way to get around this without exposing the ports and setting the host network:port in the proxy?
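One possible answer to the question above is a network-scoped alias: give the proxy container an alias on the shared user-defined network that matches the external DNS name, so containers on that network resolve it to the proxy's internal IP while the host network keeps resolving it externally. A minimal sketch with hypothetical names (the network, alias, and images are assumptions, not from the thread):

```shell
# Create the shared user-defined network (if it doesn't already exist).
docker network create mynet

# Attach the proxy with aliases matching the public hostnames. Docker's
# embedded DNS then resolves these names to the proxy's internal IP for
# any container on mynet.
docker run -d --network mynet \
  --network-alias sonarqube.mynetwork.com \
  --network-alias jenkins.mynetwork.com \
  -p 80:80 nginx

# Backend services join the same network without exposing ports.
docker run -d --network mynet --name sonarqube sonarqube
docker run -d --network mynet --name jenkins jenkins
```

With this, jenkins can hit http://sonarqube.mynetwork.com and the request goes through the proxy over the internal network rather than out via the host.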
fabio.sales 2017-09-01 14:51:03 I had the same issue, but I couldn't remove /var/lib/docker as it said it was in use (I had stopped the docker daemon), so I restarted the host and it works, no need to reinstall. I think it's a bug with overlay, as I never got it with devicemapper.
tyagiapurv 2017-09-02 19:09:06 Hey all, I have one query. Suppose I have two separate components: one is a store, i.e. MySQL, Mongo (and other database services), and the other is an application server, i.e. PHP and other web services. I want to deploy my store globally so that the server can access the store easily on the same network. Any suggestions?
wwsean08 2017-09-03 05:03:09 Not quite sure I understand your question @tyagiapurv , sounds like you suggested the answer yourself of putting them on the same network
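For the multi-host case implied in the question, "same network" can be sketched with a swarm-mode overlay network spanning all hosts, so the app tier reaches the data tier by service name wherever containers are scheduled. Service and network names here are illustrative assumptions:

```shell
# On a swarm-mode cluster: create an attachable overlay network that spans
# every docker host in the swarm.
docker network create --driver overlay --attachable app-net

# The store and the app server join the same overlay network.
docker service create --name store --network app-net mysql:5.7
docker service create --name web --network app-net my-php-app

# Inside the web containers, the database is reachable as "store:3306",
# regardless of which host the store task is scheduled on.
```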

Related Questions