Alpine Linux Docker: set hostname

I'm using lwieske/java-8:server-jre-8u121-slim with Alpine Linux.
I'd like to set the hostname from a text file so that it is seen globally (by all shells):
/ # env
HOSTNAME=2fa4a43a975c
/ # cat /etc/afile
something
/ # hostname -F /etc/afile
hostname: sethostname: Operation not permitted
Everything is running as a service in a swarm, and I want every node to have a unique hostname based on the container ID.

You can provide the --hostname flag to docker run as well:
docker run -d --net mynet --ip 162.18.1.1 --hostname mynodename

As a workaround, you can use docker-compose to assign hostnames to multiple containers.
Here is the example docker-compose.yml:
version: '3'
services:
  ubuntu01:
    image: ubuntu
    hostname: ubuntu01
  ubuntu02:
    image: ubuntu
    hostname: ubuntu02
  ubuntu03:
    image: ubuntu
    hostname: ubuntu03
  ubuntu04:
    image: ubuntu
    hostname: ubuntu04
To make it dynamic, you can generate docker-compose.yml from a script and then run docker-compose up.
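For instance, a small shell script could emit the file; this is a hypothetical generator, not part of the original answer:
#!/bin/sh
# write a compose file with four ubuntu services, each with a unique hostname
{
  echo "version: '3'"
  echo "services:"
  for i in 01 02 03 04; do
    echo "  ubuntu$i:"
    echo "    image: ubuntu"
    echo "    hostname: ubuntu$i"
  done
} > docker-compose.yml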

docker service create has a --hostname parameter that allows you to specify the hostname. On a more personal note: if you connect to one of your services, any other service on the same network will be pingable and accessible via its service name, with the added benefit that you can run multiple replicas without worrying about what those will be named.
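For the swarm setup in the question, --hostname also accepts Go templates (per the docker service create documentation), so each task can get a unique hostname; the service and image names below are placeholders:
docker service create --name mynodename --hostname 'mynodename-{{.Task.ID}}' myimage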

Better late than never; I found this question while trying to do the same thing myself.
The answer is to give the Docker container the SYS_ADMIN capability, and hostname -F will then set the hostname properly.
In docker-compose:
cap_add:
  - SYS_ADMIN
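The same capability can be granted to a plain docker run with the standard --cap-add flag; a minimal sketch, using the stock alpine image for illustration:
docker run --cap-add SYS_ADMIN --rm -it alpine sh -c 'echo something > /etc/afile && hostname -F /etc/afile && hostname'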

How to expose a port bound to localhost inside the container with docker-compose?

I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml, but it did not work:
expose:
  - "8888"
ports:
  - "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
  myservice:
    expose:
      - "8888"
    ports:
      - "8888:8888"
    build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After inputting TEST in the console, the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.
Typically the answer is no, and in almost every situation you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that adds technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port; Linux doesn't let you connect.
There are a few workarounds:
You can hack up the iptables rules to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules, so you risk conflicting with that automation or having your rules broken the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx or socat would work (see the sketch just after this list).
You can get things in the same network namespace.
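As an illustration of the proxy option above, a single socat process inside the container can forward an externally reachable port to the loopback-bound service; a sketch using the question's ports (socat must be installed in the image):
socat TCP-LISTEN:8889,fork TCP:127.0.0.1:8888
One of the answers below builds a full Dockerfile around exactly this approach.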
The last option, sharing a network namespace, has two ways to implement it. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and it is also how pods work in Kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace.
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with
docker run --net host ...
In compose, this would look like:
version: '3'
services:
  myservice:
    network_mode: "host"
    build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode since you would be trying to publish the port between the same network namespaces.
Side note: expose is not needed for any of this. It is only there for documentation and some automated tooling; otherwise it does not impact container-to-container networking, nor the ability to publish a specific port.
According to BMitch's accepted answer, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container for port redirection before running the service. However, this seems to require the iptables modules to be loaded explicitly on the host (according to this), which somewhat breaks portability.
My way uses socat to forward *:8889 to 127.0.0.1:8888.
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
  laax-pdfjs:
    ports:
      # Switch back to 8888 on the host
      - "8888:8889"
    build: .
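To verify the workaround, you can mirror the test from the question:
docker-compose up --build -d
nc -v localhost 8888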
Check your Docker Compose file version and configure it accordingly. Compose files that do not declare a version are considered "version 1"; in those files, all the services are declared at the root of the document.
Here is how I set up my ports:
version: "3"
services:
myservice:
image: myimage:latest
ports:
- "80:80"
We can help you further if you can share the rest of your docker-compose.yaml.

How to stabilize the port used by docker-compose?

I have a Node.js application that I want to run with docker-compose. Inside the container it listens on port 4321, set by an environment variable.
This port is also exposed by my Dockerfile, and I specify it like so in my docker-compose.yml:
version: '3.4'
services:
  previewcrawler:
    image: previewcrawler
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: development
    ports:
      - 4321:4321
      - 9229:9229
    command: ['node', '--inspect=0.0.0.0:9229', 'dist/index.js']
I run the app with a VSCode task, which executes this:
docker run -dt -P --name "previewcrawler-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -p "9229:9229" "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .
When I open the application in my browser, it has some crazy port like 49171, which also changes every time I start my container.
How can I make this port stable, so that it is 4321 every time, as specified in my docker-compose.yml?
docker run -P (with a capital P) tells Docker to pick a host port for anything the Dockerfile EXPOSEs. You have no control over which host port or interfaces the port uses.
docker run -p 4321:4321 (with a lowercase p) lets you explicitly pick which ports get published, and on which host port. It is exactly equivalent to the Compose ports: option.
This is further detailed in the Docker run reference.
(That link is more specifically to a section entitled "expose incoming ports". However, "expose" as a verb means almost nothing in modern Docker. Functionally, it does only two things: if you use docker run -P then all exposed ports get published; and if you don't have a -p or -P option at all, the port will be listed in the docker ps output anyways. Exposed ports aren't automatically published, and there's not really any reason to use the docker run --expose or Compose expose: options.)
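Concretely, the docker run command from the question could drop -P in favor of explicit mappings; a sketch reusing the question's own values (the VSCode label is omitted):
docker run -dt --name "previewcrawler-dev" \
  -e "DEBUG=*" -e "NODE_ENV=development" \
  -p 4321:4321 -p 9229:9229 \
  "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .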
Apparently I started my app with the wrong command. I now use
docker-compose -f "docker-compose.debug.yml" up -d --build
which works great. The port is also correct then.

Docker Compose Set Linux Docker Host IP Address in Container's /etc/hosts

When using Docker Compose, how can we determine and set the IP address of the Docker host in the /etc/hosts file of the Docker container when the host is running Linux? I would like a hostname like DOCKER_HOST in the container to refer to the Docker host.
Docker for Linux does not yet support host.docker.internal, unlike Docker for Windows and Docker for Mac, so we need an alternative to this.
--network host cannot be used here because for my purpose localhost in the container must still refer to the container itself.
Docker Compose supports using extra_hosts to add a hostname-ip mapping in /etc/hosts, but I am unable to figure out how to automatically determine the host IP address to be used.
version: '3'
services:
  api:
    build: .
    ports:
      - "8080:8080"
    extra_hosts:
      - "DOCKER_HOST:X.X.X.X" # <=== How do we automatically set this IP address?
Is it possible to do something like the following, where we do not have to manually define the Docker host IP address when starting the containers?
DOCKER_HOST=`command_to_get_host_ip` docker-compose up -d
Or can we set a static IP for the Docker host in docker-compose.yml?
You can try:
DOCKER_HOST=$(python -c 'import socket;print(socket.gethostbyname(socket.gethostname()))') docker-compose up -d
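A variant that reads the address of the default bridge directly; this assumes the standard docker0 interface and GNU grep, so treat it as a sketch:
DOCKER_HOST=$(ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+') docker-compose up -d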

Do I need to create multi host network in docker?

For the following docker-compose file, which builds the image with dynamic build arguments:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
a bridge network (project1_default) is created:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
....
f1b7ca7c6dfe project1_default bridge local
....
after running the command below:
$ docker-compose -p project1 up -d
After launching and connecting to the master Jenkins, we configure and create a slave Jenkins on a separate EC2 host using the master's EC2 plugin.
But the above network (project1_default) is a single-host network: packets can only travel within a single host.
So, since we need the master Jenkins to launch slave Jenkins instances on separate EC2 hosts using the EC2 plugin:
Do I need to create a multi-host (swarm-scoped) network instead of the bridge network (project1_default)?
If yes, how do I create a multi-host network?
Should all three run in containers? And the containers are running on separate EC2 instances, correct?
You could expose the necessary ports on the underlying host IP. This exposes the Jenkins containers to your network, and you will be able to interact with them just as if Jenkins were installed directly on the EC2 instance (and not in a container).
Here is an example:
docker run -p 80:8080 ubuntu bash
This publishes port 8080 in the container on port 80 of your host machine. Start with your master, then the slaves, and add your slaves just as you would in a non-container setup, using the EC2 instance's IP and the port that you exposed for the slaves.
You can find more information regarding port publishing here
https://docs.docker.com/engine/reference/commandline/run/
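For completeness: if you do end up needing a true multi-host network rather than published ports, the usual route is a swarm-scoped overlay network. A minimal sketch using standard Docker commands (the network name is a placeholder):
# on the manager node
docker swarm init
# run the printed `docker swarm join ...` command on each other host,
# then create an attachable overlay network for the containers
docker network create -d overlay --attachable jenkins-net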

How to connect nodeJS docker container to mongoDB

I'm having problems connecting a Node.js application, which is running as a Docker container, to MongoDB. Let me explain what I have done so far:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a3732cc1d90 mongo:3.4 "docker-entrypoint..." 3 weeks ago Up 3 weeks 27017/tcp mongo_live
As you can see, there is already a mongo docker container running.
Now I'm running my Node.js application's docker container (which is built from MeteorJS):
$ docker run -it 0b422defbd59 /bin/bash
In this docker container I want to run the application by running:
$ node main.js
Now I'm getting the error
Error: MONGO_URL must be set in environment
I already tried to set MONGO_URL by setting:
ENV MONGO_URL mongodb://mongo_live:27017/
But this doesn't work:
MongoError: failed to connect to server [mongo_live:27017] on first connect
So my question is how to connect to a DB which is, as far as I understand, 'outside' the running container. Alternatively, how do I set up a new DB in this container?
There are a couple of ways to do it.
Run your app in the same network namespace as your mongodb container:
docker run --net container:mongo_live your_app_docker_image
# mongodb is then reachable on localhost from inside your app:
# MONGO_URL=mongodb://localhost:27017/
You can also link the two containers:
docker run --link mongo_live:mongo_live your_app_image ...
# now mongodb is accessible via the hostname mongo_live
Or use the mongodb container's IP address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' mongo_live
# this prints the container's ip
$ docker run -it 0b422defbd59 /bin/bash
# then set MONGO_URL=mongodb://[ip from previous command]:27017/
You can bind your mongodb port to the host and use the host's hostname in your app.
You can use a user-defined docker network and run both apps in the same network (see the sketch after this list).
You could pass --add-host mongo_live:<ip of mongo container> to docker run for your application and then use mongo_live in the mongodb url.
You can also use docker compose to make your life easier ;)
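A sketch of the shared-network option, reusing the container name from the question (the network name is a placeholder):
docker network create mongonet
docker network connect mongonet mongo_live
docker run --net mongonet -e MONGO_URL=mongodb://mongo_live:27017/ your_app_image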
...
When you run containers, each container gets its own independent network namespace, so one container can't connect to another point-to-point by default.
There are 3 ways to connect containers:
Have a little fuss with low-level docker network magic.
Connect containers through localhost. Each container must expose its ports on localhost (as your mongo_live does), and you need to add an entry like 127.0.0.1 mongo_live to the hosts file. (This is the simplest way.)
Use docker-compose. It is a convenient tool for running many containers together. (This is the right way.)
Adding mongodb to the application container is not the Docker way.
Please use the snippet below for your docker-compose.yml file, replacing the comments with your actual values. It should solve your problem.
version: '2'
services:
  db:
    build: # image for mongoDB
    ports:
      - "27017:27017" # whatever port you are using
    environment:
      # you can specify mongodb username and so on here
    volumes:
      - # load default config for mongodb from here
      - "db-data-store:/data/db" # path depends on which image you use
    networks:
      - network
  nodejs:
    build: # image for node js
    expose:
      - # mention port for nodejs
    volumes:
      - # mount project code on container
    networks:
      - network
    depends_on:
      - db
networks:
  network:
    driver: bridge
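For illustration, a concrete minimal variant of the snippet above, assuming the official mongo image and a Node.js app built from the current directory; on the shared network the service name db doubles as the MongoDB hostname:
version: '2'
services:
  db:
    image: mongo:3.4
    volumes:
      - "db-data-store:/data/db"
    networks:
      - network
  nodejs:
    build: .
    environment:
      MONGO_URL: mongodb://db:27017/
    networks:
      - network
    depends_on:
      - db
volumes:
  db-data-store:
networks:
  network:
    driver: bridge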
Best of Luck
