How to access a Jupyter notebook from Docker on an Azure server?

I am using an Azure instance called NC6 that has a GPU on it. I want to use a TensorFlow Docker image that can use this GPU and that also spins up a Jupyter notebook.
I use this command:
nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
When I run this command from within the instance I get
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=4c495089418941ad470cfe33b002bd6fad67970f84354e29
But when I access port 8888 from outside the instance, there is nothing there. How can I expose this port so that I can access the notebook from outside the instance?

Although I have never used Azure instances, I think you have to do the following:
Open port 8888 of your Azure instance to the outside world (on Azure this is done with a network security group rule; see the CLI example below).
Find the public IP of your instance.
Navigate to http://IP_HERE:8888/?token=4c495089418941ad470cfe33b002bd6fad67970f84354e29
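For the first step, if the instance was created with the Azure CLI, something along these lines should open the port (the resource group and VM names here are placeholders, not taken from the question):
az vm open-port --resource-group myResourceGroup --name myNC6vm --port 8888
Alternatively, add an inbound rule for port 8888 to the instance's network security group in the Azure portal.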
Hope this helps!

Related

How can I find my files in GCP AI Notebook?

I have been working on a GCP AI Notebook for the past couple of weeks when I got a '524 error'. I followed the troubleshooting instructions here. I connected to the notebook instance via SSH and restarted the Jupyter service. I am now able to open JupyterLab but I can't find any of my work! Here is the JupyterLab screenshot. I searched for the files using the Terminal in JupyterLab as well as the Cloud Shell, but found nothing. It looks as if my instance had been wiped clean.
Please help, I have lost all the code I have been working on for the past couple of weeks.
Based on the Terminal output, it seems you are using a container-based instance.
This means that you have a base OS and a Docker container running the JupyterLab service on top. I would be interested to know which Docker image you are running. Is it a Deep Learning Container?
By default (if using Deep Learning Containers), files are stored in /home/jupyter and this folder is mapped to the local disk, so you can check whether there is anything inside it. Do you have anything there?
You can SSH into the Jupyter instance and verify which container you are running and which parameters were passed:
sudo docker ps --no-trunc
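To check whether anything survived on the host disk, you can also list the container's bind mounts and look inside the mapped host directory (CONTAINER_ID below is a placeholder for the ID shown by docker ps):
sudo docker inspect --format '{{ json .Mounts }}' CONTAINER_ID
ls -la /home/jupyter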

Host "localhost"overwrites "localhost"in docker

I'm trying to run a Vue dashboard in a Docker container and a service on my local machine. Both are bound to "localhost:80". When running both the dashboard and the service in Docker, everything works fine, but when running the dashboard in Docker and the second service on the host machine, the host machine somehow overrides access to the Docker localhost. So the expected behaviour is:
- http://localhost -> should load the dashboard
- http://localhost/graphql -> should load the GraphQL API from the second service
These work when both are running in Docker, but not when one is in Docker and the second one is running on the host.
Any idea how to solve the issue? The reason I need the second service running on the host is so that I can debug and code more quickly, instead of building an image and updating the stack after each code change.
Thanks, Zoli.
localhost inside Docker refers to the Docker container itself. You can't access the actual host from inside Docker that way.
However, your actual host has an IP address on the Docker network. You can access the actual host using that IP. You can get it by running ifconfig and looking for a Docker interface. On my machine the actual host has the IP 172.17.0.1.
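As a quick sketch of how you might look it up and use it (docker0 is the default bridge interface; the IP may differ on your machine):
ip addr show docker0          # or: ifconfig docker0
# inside the dashboard container, reach the host service through that IP
# (assuming the host service listens on 0.0.0.0 rather than only 127.0.0.1):
curl http://172.17.0.1/graphql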
Problem solved: on the host I had to change the port and it started to work. When both services are running in Docker, port 80 can be assigned to the two containers and is resolved correctly, but when one runs in Docker and the other on the host, it does not work. That is my explanation; maybe someone can give a better one, but the problem is solved now. Thanks.

Persisting content across docker restart within an Azure Web App

I'm trying to run a Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands: you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line parameter to map a volume. The Docker image does have an entry point configured, if that helps.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings and the /home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
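For example, with the Azure CLI (the resource group and app names are placeholders):
az webapp config appsettings set --resource-group myResourceGroup --name my-ghost-app --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true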
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then store data there. See these docs for more details.
You could switch to Azure Container Instances, as they provide volume support (see the sketch after this list).
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, better reliability and scaling, and other benefits.
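For the container instances option, a rough sketch with the Azure CLI, assuming an existing Azure Files share named ghostcontent (all names and the storage key are placeholders; check the current az container create docs for the exact flags):
az container create --resource-group myResourceGroup --name my-ghost --image ghost:1-alpine --ports 2368 \
  --azure-file-volume-account-name mystorageaccount --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name ghostcontent --azure-file-volume-mount-path /var/lib/ghost/content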
You have to use a shared volume that maps the content of the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts that map a directory inside the container's file system tree to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. The -v flag takes the host path first and the container path second, so you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bring-your-own-storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page at .scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.

How to scale an application with multiple exposed ports and multiple mounted volumes using Docker Swarm?

I have a Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application deployment to Docker and docker-swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: a web port (9080) and a database connection port (1521). There are also a few directories, such as the log directory, that need to be mounted from the host system for each container.
Simple Docker example
docker run -it -d --name web1 -h "My Hostname" -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3, which host port will each newly created container bind to, and which host directory will it mount?
Can anyone help me understand how scaling and replication will work in this type of scenario?
I have also gone through --publish and --mode global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the Statefulset feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
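As a rough sketch of the scaling side (the StatefulSet name jboss-web is hypothetical):
kubectl scale statefulset jboss-web --replicas=3
kubectl get pvc    # a StatefulSet with volumeClaimTemplates creates one PersistentVolumeClaim per replica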
I would suggest describing your stack in a docker-compose v3 file, which can be deployed onto a Swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes its service available over the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder's nginx-proxy if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In Swarm mode you have to create the network with the overlay driver, otherwise it will only be accessible from the local host itself.
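A minimal sketch of that, reusing the service from the question (the network name is a placeholder):
docker network create --driver overlay jboss_net
docker service create --name jboss_service --network jboss_net --replicas 3 MYIMAGE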
Regarding logging, you should redirect your logs to stdout and catch them with a logging driver (fluentd, syslog, graylog2).
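For example, per service (the fluentd address is a placeholder):
docker service create --name jboss_service --log-driver fluentd --log-opt fluentd-address=fluentd.example.com:24224 MYIMAGE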
For persistent storage you should have a look at Flocker! However, databases might not support those storage implementations; e.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read. :)
https://docs.docker.com/

Is it possible to launch a new Docker container from within a running Docker container using Docker Compose?

I have a Node.js application running inside a Docker Container.
I need to launch a new container from my Node.js application (via code; e.g. child_process.spawn()) with the sole purpose of running a Python script. I also need to pass one argument (a database record ID) to this Python script. So the command is:
python main.py 56fb661b7e51f80736d48113
Note that I do not want this container to run inside the current container but rather to be a separate container.
I understand an orchestration framework such as Swarm or Kubernetes would be better suited for this task, but it has been requested that I use Docker Compose locally on my machine in my development environment, and then we will use Kubernetes in production.
Is it possible to launch a new Docker container (just a container, not a whole new machine/VM) from within a running Docker container using Docker Compose, and if so, how might I go about doing so?
I haven't done it myself, but from what I gather, if you have the Docker client installed in your child container and you make the Docker socket of the host available in the child, you are able to interact with it, i.e.
--volume=/var/run/docker.sock:/tmp/docker.sock
You'll need to configure your child's Docker client to point to that socket (presumably the DOCKER_HOST environment variable should work?), but that's the basic idea. Docker commands run against that socket take effect on the host.
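A sketch of the idea, with hypothetical image names (it assumes the docker client binary is installed in the parent image):
# start the parent container with the host's Docker socket mounted
docker run -d --volume=/var/run/docker.sock:/tmp/docker.sock my-node-app
# inside the parent container, point the client at that socket and launch the sibling container
export DOCKER_HOST=unix:///tmp/docker.sock
docker run --rm my-python-image python main.py 56fb661b7e51f80736d48113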
https://github.com/gliderlabs/registrator uses this method, which might give you some pointers.
Obviously, this method of using Docker creates a number of issues, but if it's best for your situation then go for it.
