Is it possible to set up the JHipster console on Docker Cloud? My application is deployed on Heroku.
If there is no such option, please advise where I can set up Docker in the cloud.
Regards!
Yes, Docker Cloud is an option, although I've never tried it. If you have simple needs and don't need container orchestration across multiple hosts, I would recommend creating a simple VM with Docker on your favorite cloud provider (using docker-machine, for example) and then deploying the console there with docker-compose. It's really easy to do:
1) SSH into your server
2) Install docker and docker-compose
3) Get the docker-compose file from https://github.com/jhipster/jhipster-console/blob/master/bootstrap/docker-compose.yml
4) Run docker-compose up -d
The console will be available on port 5601.
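A minimal sketch of those four steps on a fresh Ubuntu VM (the package names and the raw-file URL derived from the repository link above are assumptions; adapt to your distro):

# 1) SSH into the server
ssh user@your-server-ip
# 2) install docker and docker-compose (Ubuntu package names assumed)
sudo apt-get update && sudo apt-get install -y docker.io docker-compose
# 3) fetch the docker-compose file for the console
curl -O https://raw.githubusercontent.com/jhipster/jhipster-console/master/bootstrap/docker-compose.yml
# 4) start the console in the background
sudo docker-compose up -d
# once the containers are up, browse to http://<server-ip>:5601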
Refer to the docs at https://jhipster.github.io/monitoring/
More advanced setups are possible, but this is the easiest way to go. Also note that it is perfectly possible to run the JHipster Console without Docker, but it requires some work: set up an ELK stack yourself using a simple Logstash configuration and scripts to preload the dashboards.
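For illustration only, a hypothetical bare-bones Logstash pipeline for such a standalone setup might look like this (the TCP port, JSON codec, and Elasticsearch host are assumptions, not taken from the JHipster docs):

# write a minimal Logstash pipeline that accepts JSON log events over TCP
# and forwards them to a local Elasticsearch instance
cat > logstash.conf <<'EOF'
input {
  tcp { port => 5000 codec => json }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
EOF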
OK, locally everything is fine. So how (step by step) do I push the JHipster Console to Docker Cloud and connect it with my application on Heroku?
I have two Linux PCs. MongoDB is on the first PC, whose IP is 192.168.1.33, and a Java application on the other Linux machine connects to the MongoDB on 192.168.1.33.
What I want to do is prepare everything and turn both Linux systems into Docker images, so that in the production environment I can simply restore the images I prepared and everything works, with no complex deployment steps.
But the problem is that the IP of MongoDB will change, and the IP 192.168.1.33 is written into the configuration file of my Java application; it will not change automatically. Is there an automated way?
Basics
We create a Dockerfile with minimal installation steps.
We create a Docker image from the Dockerfile of step 1.
We create a container from the step-2 image and expose the important ports as required.
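A minimal sketch of steps 2 and 3, assuming the Dockerfile from step 1 is in the current directory (the image name and port are illustrative):

# step 2: build an image from the Dockerfile
sudo docker build -t my_new_mongodb .
# step 3: run a container from that image, exposing the important port
sudo docker run -p 27017:27017 -d my_new_mongodb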
For your problem:
1. creating-a-docker-image-with-mongodb: this article will help you dockerize MongoDB.
but the problem is, the IP of mongodb will change, and the IP 192.168.1.33 is written in my configuration file of my java application, it will not change automatically, is there a automated way?
If you expose the MongoDB port on the Docker host, you can use the same docker-host-IP:<exposed-port>.
Ref from the article: sudo docker run -p 27017:27017 -i -t my_new_mongodb
Example: 192.168.1.33 is your Docker host, where the MongoDB container is running with exposed port 27017. You can add 192.168.1.33:27017 to your Java app.
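To avoid hardcoding the address, one option (my suggestion, not something from the article) is to pass it to the application container as environment variables and have your configuration read them:

# hypothetical: the Java app reads MONGO_HOST/MONGO_PORT instead of a fixed IP
sudo docker run -e MONGO_HOST=192.168.1.33 -e MONGO_PORT=27017 my_java_app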
What I want to do is, prepare everything and make both Linux systems into docker images
You cannot convert your VMs directly into Docker images. Instead, you can follow the steps written in Basics and dockerize both the DB and the application layer.
2. dockerize-your-java-application: refer to this link and dockerize your application based on your requirements.
Steps 1 and 2 will help you build Docker images which you can deploy to multiple servers.
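For example, a sketch of publishing the images so that every server can pull them (the registry account name is hypothetical):

# tag and push the image to a registry
sudo docker tag my_new_mongodb myuser/my_new_mongodb:1.0
sudo docker push myuser/my_new_mongodb:1.0
# then, on each target server:
sudo docker pull myuser/my_new_mongodb:1.0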
I have a Windows service that I want to run in a Docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same Docker container locally, as a Windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The docker restart flag actually only deals with starting containers once Docker itself is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to create a Windows service that runs Docker.
A good explanation for this part of the problem can be found here (no good solution has been found yet without paying for it; the Docker team has so far ignored requests to make this work without third-party tools):
How to start Docker daemon (windows service) at startup without the need to log-in?
You can use the flag --restart=unless-stopped with the docker run command, and the Docker container will start automatically even if the server was shut down.
Further reading on the restart policy and flag is available here.
But conditions apply: Docker itself must run on startup, which is its default setting.
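A minimal sketch of the flag in use (the image name is illustrative):

# the container comes back up whenever the Docker daemon starts,
# unless it was explicitly stopped beforehand
docker run -d --restart=unless-stopped my_service_image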
I had to perform these steps to deploy my Node.js/Angular site to AWS via DockerCloud:
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create a node cluster on my DockerCloud account
Write a Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, and see my site live
Now suppose a small change requires a pull from my project repo.
BUT we have already deployed our Docker containers, as you know.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking:
SSH into a VM that has Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image (see https://docs.docker.com/engine/reference/commandline/service_update/#options).
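For example, a sketch assuming the stack runs in swarm mode (the service and image names are hypothetical):

# roll the updated image out to the running service
docker service update --image myuser/mysite:latest mystack_web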
I have no experience with AWS, but I think you can build and update automatically.
If you want to treat a Docker container as a VM, you totally can; however, I would strongly caution against this. Anything in a container is ephemeral: if you make changes to files in it and the container goes down, it will not come back up with those changes.
That said, if you have access to the server, you can exec into the container and execute whatever commands you want. This is usually helpful for dev, but applicable to any container.
The following command will start an interactive bash session inside your desired container; see the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.
I have managed to easily deploy a Rails app to Azure, on the Docker container App Service, but logging it is a pain, since the only way to access the logs is through FTP.
Has anyone figured out a good way to run the docker run command inside Azure so that it accepts arbitrary params?
In this case it's simply trying to log to a remote service. If anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in anything that you would normally pass after the container name, as in docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py
Can someone help with how to deploy wso2esb in a Docker container with Kubernetes?
Currently I'm running only one node/master on a local machine with Ubuntu Server 14.04 LTS.
If I run it with this:
sudo docker run --name esb isim/wso2esb
it instantly triggers the service inside the container,
but if I run it with this:
kubectl run esb1 --image=isim/wso2esb
the container just runs, without triggering the service inside the container.
BTW, I'm using isim/wso2esb from Docker Hub.
I hope someone can help me.
From the comments above, it looks like you were connecting to the wrong IP address, which you discovered by running kubectl logs esb1.
In general, you can follow the Kubernetes Debugging FAQ when you see an issue like this to see if it is a common problem that has already been documented.
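A short sketch of that debugging flow (the pod name is taken from the question):

kubectl logs esb1           # inspect the container output for the address the service bound to
kubectl get pods -o wide    # check the pod IP that was actually assigned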