I'm trying to run a Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running by using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately, the official Docker image stores all data on the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands: you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line param to map a volume. The Docker image does have an entry point configured, if that helps.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
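If it helps, the same setting can also be applied from the Azure CLI (the resource group and app name below are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true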
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support (see the sketch after this list).
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, better reliability and scaling, and other benefits.
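As a rough illustration of the container-instances option, an Azure Files share can be mounted into the Ghost content path along these lines (the resource group, storage account, key, and share name are placeholders):
az container create --resource-group <resource-group> --name ghost-aci --image ghost:1-alpine --ports 2368 --azure-file-volume-account-name <storage-account> --azure-file-volume-account-key <storage-key> --azure-file-volume-share-name ghostcontent --azure-file-volume-mount-path /var/lib/ghost/content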
You have to use a shared volume that maps the content of the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interact with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts that map a directory inside the container's file system tree to a directory on the outside. From your text I understand that you want to store the content of the inside /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting persistent storage (or bring-your-own-storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image that adds the SSH capability here: https://hub.docker.com/r/elnably/ghost-on-azure. The Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browsing to the site (to start the container), and going to the SSH page at .scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
Related
I have to deploy my application software, which is a Linux-based package (.bin) file, on a VM instance. As per the system requirements, it needs a minimum of 8 vCPUs and 32 GB RAM.
Now, I was wondering if it is possible to deploy this software over multiple containers that load-share the CPU and RAM in a Kubernetes cluster, rather than installing the software on a single VM instance.
Is it possible?
Yes, it's possible to achieve that.
You can start using Docker Compose to build your custom Docker images and bring your applications up quickly.
First, I'll show you my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so each docker-compose.yml builds one app, and you only have to run docker-compose up -d.
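As a rough idea of the shape of such a file, a minimal docker-compose.yml for a single custom-built service might look like this (the service name, image tag, and port are placeholders, not taken from the repo):
version: "3"
services:
  app:
    build: .                          # build the image from the Dockerfile in this folder
    image: <user_docker>/<image_name> # tag it so it can be pushed later
    ports:
      - "8080:8080"                   # host:container port mapping
    restart: unless-stopped
Running docker-compose up -d in that folder builds the image if needed and starts the container in the background.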
If you need to create a custom image with Docker, use the following command: docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your Docker user
<image_name> = the image name that you choose
<path_of_files> = some local path; if you need to build from the current folder, use . (dot)
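For example, with hypothetical names, building from the current folder would look like this:
docker build -t myuser/myapp .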
So, after that, you can upload this image to Docker Hub using the following commands.
You must log in with your credentials
docker login
You can check your images using the following command
docker images
Upload the image to the Docker Hub registry
docker push <user_docker>/<image_name>
Once the image is uploaded, you can use it in different projects; make sure to keep the image lightweight and useful.
Second, I'll show a similar repo, but this one has a k8s configuration in a folder called k8s. This configuration was made for Google Cloud, but I think you can analyze it and learn how to get started on your new project.
The Nginx service was replaced by an Ingress service (ingress-service.yml), and an HTTPS certificate was added via the certificate.yml and issuer.yml files.
If you need to dockerize databases, make sure the database is lightweight. You need to create a persistent volume using a PersistentVolumeClaim (the database-persistent-volume-claim.yml file), or, if you store larger amounts of data in it, use a dedicated database server or a managed database service in the cloud.
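For illustration, a PersistentVolumeClaim like the one referenced above might look roughly like this (the name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce        # a single node mounts the volume read-write
  resources:
    requests:
      storage: 2Gi         # adjust to the size your database needs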
I hope this information will be useful to you.
There are two ways to achieve what you want to do. The first one is to write a Dockerfile and build the image; more information about how to write a Dockerfile can be found here. Apart from that, you can start a container from a base image, install all the software and packages, and export it as an image. Then you can upload it to a Docker image repository like Docker Registry or Amazon ECR.
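A quick sketch of that second approach, with placeholder names (docker commit snapshots a modified container as a new image):
docker run -it --name build-box ubuntu:20.04 bash   # install your software inside this container, then exit
docker commit build-box myuser/myapp:1.0            # snapshot the container as an image
docker push myuser/myapp:1.0                        # push it to your registry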
We have an on-premises software Docker image. Also, we have licensing for application security and code duplication.
But to add extra security, is it possible to do any of the below?
Can we lock the Docker image such that no one can copy or save the running container and start a new Docker container in another environment?
Or is it possible to change something in the Docker image at build time that prevents users from logging in inside the container?
The goal is to secure the Docker images as much as possible against duplication and to stop logins inside the running container to view the configuration.
No. Docker images are a well-known format with an open specification, essentially a set of tar files and some JSON metadata. Once someone has an image, they can do with it what they want. This includes running it with any options they'd like, copying it, and extending it with their own changes.
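To illustrate the point, anyone who can pull the image can unpack it with standard tools and read every layer (the image name is a placeholder):
docker save -o myimage.tar myimage:latest   # export the image as a tar archive
tar -tf myimage.tar                         # list the layer tarballs and JSON metadata inside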
I'm using the predefined build of Docker on Azure (Edge Channel), and one of its features is logging. Checking with docker ps on the manager node, I saw there is an editions_logger container (docker4x/logger-azure), which catches all the container logs and writes them to an Azure storage account.
How do I use this container directly to get the logs of my containers?
My first approach was to find the right storage and share and download the logs directly from the Azure portal.
The second approach was to connect to the container directly using docker exec -ti editions_logger cat /logmnt/xxx.log
Running docker service logs xxx just throws "only supported with experimental daemon".
All approaches (not the third one, though) seem quite overcomplicated. Is there a better way?
I checked both approaches on our cluster, but we found a fairly easy way to check the logs for now. The Azure OMS approach is really good and I can recommend it, but the setup is too heavy for us at the moment. The Logstash approach is also good.
Luckily, the tail command supports wildcards, so we can view our logs nicely.
docker exec -ti editions_logger bash
cd /logmnt
tail -f service_name*
Thank you very much for the different approaches! I'm looking forward to the new Swarm features (there is already the docker service logs command, so in the future it should be even easier to check the logs).
Another way: we can use volumes to store container logs on the host, then use Logstash to collect the logs from those volumes.
On the host machine, create a fixed directory D, mount each container's logs to a sub-directory of D, and then mount D into the Logstash container. This way, the Logstash container can collect the logs of all the other containers.
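A minimal sketch of that layout, assuming a hypothetical host directory /var/log/docker-logs and placeholder image names (the Logstash container would additionally need a pipeline config that reads /logs/**/*.log):
docker run -d --name web1 -v /var/log/docker-logs/web1:/var/log/app myimage
docker run -d --name logstash -v /var/log/docker-logs:/logs:ro docker.elastic.co/logstash/logstash:7.17.0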
I have one Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker and Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: one is the web port (i.e. 9080) and the other is the database connection port (i.e. 1521). There are also a few things like the logs directory for each container mounted on the host system.
Simple Docker example
docker run -it -d --name web1 -h "My Hostname" -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3 replicas, which host port will it bind, and which mount directory will it use for the newly created containers?
Can anyone help me understand how scaling and replication will work in this type of scenario?
I have also gone through --publish and --name global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the StatefulSet feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
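As a rough sketch of what that looks like (the image name, labels, and sizes are placeholders; the volumeClaimTemplates section gives each replica its own volume):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jboss
spec:
  serviceName: jboss
  replicas: 3
  selector:
    matchLabels:
      app: jboss
  template:
    metadata:
      labels:
        app: jboss
    spec:
      containers:
        - name: jboss
          image: MYIMAGE                # placeholder
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: logs
              mountPath: /opt/web1/jboss/server/log
  volumeClaimTemplates:                 # one PersistentVolumeClaim per replica
    - metadata:
        name: logs
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi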
I would suggest building your stack into a docker-compose v3 file, which can be run on a swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes its service available on the specified port (a minimal sketch follows the list below).
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
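A minimal compose v3 sketch of that setup (the image name and host path come from your example; the deploy section only applies when deploying with docker stack deploy on a swarm):
version: "3"
services:
  jboss:
    image: MYIMAGE
    expose:
      - "9080"                 # reachable only inside the compose/overlay network, not on the host
      - "1521"
    volumes:
      - /home/web1/log:/opt/web1/jboss/server/log/
    deploy:
      replicas: 3
networks:
  default:
    driver: overlay            # needed in swarm mode so containers on different nodes can reach each other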
You would need a load balancer to access those services. I use Jwilder/Nginx if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In swarm mode you have to use the overlay network driver and create the network yourself; otherwise it will only be accessible from the local host.
Regarding logging, you should redirect your log files to stdout and catch them with a logging driver (fluentd, syslog, graylog2), for example as sketched below.
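A hedged sketch of attaching a logging driver at service-creation time (the fluentd address is a placeholder):
docker service create --name jboss_service --log-driver fluentd --log-opt fluentd-address=192.168.0.10:24224 MYIMAGE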
For persistent storage you should have a look at Flocker! However, databases might not support those storage implementations: e.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read. :)
https://docs.docker.com/
I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so customers can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and installed NodeJS, MongoDB, and other dependencies on it. But I would like to configure my NodeJS-based application and MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is: my target users need not know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files are saved in the /var/log directory for the different services, and we can see the status of all the different services at once, rather than the user having to look into each Docker service and possibly troubleshoot issues with Docker itself.
But at the same time I find it convenient to build the application the Docker way.
VMware vApps are usually made of multiple VMs running together to provide a service. They may have start-up dependencies, etc.
Now, using Docker, you can have those VMs as containers running on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy asks us to use microservices: in short, in your case, putting each service in a separate container. Then write a docker-compose file to bring the containers up and add it to the VM's start-up. After that you can make an OVF of your Docker host VM and ship it.
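A minimal sketch for your case, assuming the Node app lives in ./app and listens on port 3000 (names, ports, and paths are placeholders to adjust to your layout):
version: "3"
services:
  app:
    build: ./app               # your NodeJS application image
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    restart: always            # the restart policy brings the container back up when the VM boots
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db    # keep database files on a named volume
    restart: always
volumes:
  mongo-data:
With restart: always and the Docker daemon enabled at boot, the containers come back automatically after a VM restart, which replaces the init.d scripts you would otherwise write.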
A better way, in my opinion, is to create the Docker images, put them in your repository, and let the customers pull them. Then provide a docker-compose file for them.