imgproxy on non-default port - linux

I've installed imgproxy (https://docs.imgproxy.net/installation) via Docker on a CentOS server.
Using
docker run -p 2096:2096 -it darthsim/imgproxy
to start it, the server still starts on 8080:
INFO [2021-09-01T10:13:25Z] Starting server at :8080
What is the correct way of starting imgproxy on a non-default port (in my case 2096)?

In the docker run command, the -p flag describes a mapping from a host port to a container port. imgproxy listens on 8080 inside the container, so to map your host's port 2096 to the container's 8080, use:
docker run -p 2096:8080 -it darthsim/imgproxy

The --publish flag's format is [host-port]:[container-port]. To keep the container's port 8080 but expose it on the host's 2096, you want:
docker run \
--interactive --tty --rm \
--publish=2096:8080 \
darthsim/imgproxy
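To verify the mapping actually took effect, a quick TCP reachability check from the host can help. This is a generic sketch (the port_open helper is hypothetical, not part of imgproxy or Docker):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `docker run -p 2096:8080 -it darthsim/imgproxy`, the server
# should answer on the host's mapped port:
# port_open("127.0.0.1", 2096)
```

The same check works for any of the mappings discussed below; it only tells you that something accepted the TCP connection, not that the right service answered.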

Related

getting error when using ssh to connect to a docker container

I have a container that has OpenSSH installed and can be connected to via the command
ssh 172.17.0.2.
Now I want to take a port (say 32769) on the host side and map the container's port 22 to it, so that ssh 127.0.0.1 -p 32769 works on localhost. Instead I get the error: ssh_exchange_identification: read: Connection reset by peer. The port mapping shows up normally in the Docker engine: 0.0.0.0:32769->22/tcp.
Can somebody help me with that? Much appreciated!
Check that the SSH daemon is running in your container first (through a docker exec or docker attach session):
service ssh status
# or
service sshd status
Make sure you have the right IP address
sudo docker inspect -f "{{ .NetworkSettings.IPAddress }}" Container_Name
Use the right SSH URL:
ssh root@172.17.0.2
See more in "How to SSH into a Running Docker Container and Run Commands" from Sofija Simic.
Using docker run -p 32769:22 is a good idea in your case.
The OP mentions in the discussion an issue with docker proxy:
The docker-proxy was not getting eth0 of the container as -container-ip.
Here is what I've got
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32785 \
-container-ip 172.21.1.2 -container-port 22

Deploying in azure with --net doesn't bind

I am trying to deploy a docker container that has to use other external services so I added the --net flag while running the container.
docker run -it --net=host -p 8090:8000 --env-file=.env -v /etc/logs:/logs --restart=always -d --name=rank rank
But when I check my docker's containers with docker ps -a it doesn't show that the port binding is correctly done. Nevertheless, if I remove the --net flag it runs but it can't communicate to Redis docker.
When using --net=host, the port mapping has no effect. You are running the container on your host's network, so whichever port the program inside your container binds to is bound directly on the host.
You cannot change the port from outside; the only way is to make the port inside the container configurable via an environment variable or a config file.
Docker doesn't throw an error when both options are used; it simply ignores the port mapping when --net=host is set. That is why your container runs but doesn't show any port mapping.
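Since the mapping is ignored under --net=host, the usual workaround is to make the in-container port configurable. A minimal sketch in Python, assuming your app reads a PORT environment variable (the variable name here is an arbitrary choice, not something Docker sets):

```python
import os

# With --net=host the app binds directly on the host, so pass the
# desired port in via the environment, e.g.:
#   docker run --net=host -e PORT=8090 ... rank
port = int(os.environ.get("PORT", "8000"))  # fall back to 8000 if unset
print(f"binding to 0.0.0.0:{port}")
```

Your server code then listens on this port instead of a hard-coded one.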

nginx/apache redirection for output port on docker container on vps

I'm a Linux noob at administering Docker containers with Apache or Nginx on a VPS.
I use a classic OVH VPS (4 GB RAM, 25 GB SSD) with a pre-installed image of Ubuntu 15.04 + Docker.
Installing a Docker container is really easy, and in my case I installed the sharelatex image without any problem.
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 5000:80 \
--name=sharelatex \
sharelatex/sharelatex
The site is accessible at the VPS's IP, http://51.255.47.40:5000, and works without any problem.
I already have a subdomain (tools.sebastienreycoyrehourcq.fr) configured to point to the VPS's IP (51.255.47.40, routed to External in the webfaction panel), but it is not working and I don't understand why.
I installed an Apache server on 51.255.47.40, but I suppose the best option is probably to install a Docker image of Nginx or Apache? Can you advise me on this point? And after that, how can I redirect port 5000 of the Docker image to the classic port 80 of Apache or Nginx linked to my subdomain?
Previous answers probably cover most of the issues, especially if there were redirection problems with your domain name.
To be fully portable and use all of Docker's possibilities, my recommendation would be to use the official Nginx Docker image, make it the only container accessible from the outside (by publishing its ports), and use --link to manage connectivity between your Nginx container and your other containers.
I have done this in a similar situation and it works pretty well. Below is a tentative translation of what I did to your situation.
You start your sharelatex container without publishing any external port:
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
--name=sharelatex \
sharelatex/sharelatex
You prepare an Nginx conf file for your shareLatex server, placed in $HOME/nginx/conf, that will look like:
upstream sharelatex {
    # this refers to the name you pass as --link to the nginx container
    server sharelatex;
}
server {
    listen 80;
    server_name tools.sebastienreycoyrehourcq.fr;
    location ^~ / {
        proxy_pass http://sharelatex/;
    }
}
You then start your Nginx Docker container with the appropriate volume and container links:
docker run -d --link sharelatex:sharelatex --name NginxMain -v $HOME/nginx/conf:/etc/nginx/sites-available -p 80:80 kekev76/nginx
PS: this was done with our own kekev76/nginx image, which is public on GitHub and Docker, but you can adapt the principle to the official Nginx image.
Running nginx-proxy (https://github.com/jwilder/nginx-proxy) and then running sharelatex with VIRTUAL_HOST set to tools.sebastienreycoyrehourcq.fr should be enough to get this working.
e.g.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
followed by
docker run -d \
-e VIRTUAL_HOST=tools.sebastienreycoyrehourcq.fr \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 5000:80 \
--name=sharelatex \
sharelatex/sharelatex
The subdomain tools.sebastienreycoyrehourcq.fr is not configured properly. It does not resolve to any IP address, which is why it does not work.
After you configure your subdomain, you can run the sharelatex container on port 80 with this command:
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 80:80 \
--name=sharelatex \
sharelatex/sharelatex
This way you can access the app at http://tools.sebastienreycoyrehourcq.fr

How can I forward localhost port on my container to localhost on my host?

I have a daemon on my host running on some port (e.g. 8008), and my code normally interacts with the daemon by contacting localhost:8008.
I've now containerized my code but not yet the daemon.
How can I forward localhost:8008 in my container to localhost:8008 on the host running the container (and therefore the daemon as well)?
The following is netstat -tlnp on my host. I'd like the container to forward localhost:2009 to localhost:2009 on the host:
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.1:2009   0.0.0.0:*        LISTEN  22547/ssh
tcp6       0      0 :::22            :::*             LISTEN  -
tcp6       0      0 ::1:2009         :::*             LISTEN  22547/ssh
So the way you need to think about this is that Docker containers have their own network stack (unless you explicitly tell them to share the host's stack with --net=host). This means a container's ports must be published to the host for outside traffic to reach them (documentation). You can publish a port explicitly (with -p xxxx:yyyy in your docker run command) or implicitly (using EXPOSE in your Dockerfile, or --expose on the command line, combined with -P), as described here. Note that EXPOSE on its own is essentially documentation: -p 8008:8008 publishes the port whether or not the Dockerfile contains EXPOSE 8008.
So to get tcp/8008 on the host linked with tcp/8008 in the container, use -p 8008:8008 explicitly, or put EXPOSE 8008 in your Dockerfile (and docker build your image) and use -P to publish all exposed ports. An example docker run command might look like:
docker run -it -p 8008:8008 myContainer
It's handy to remember that in the -p 8008:8008 option the order is -p HOST_PORT:CONTAINER_PORT. Also, don't forget that you won't be able to SSH into your container from another machine on the internet unless that port is also unblocked in iptables on the host. I always forget that and waste half an hour before remembering to iptables -A INPUT ... for that specific TCP port on the host machine. But you should be able to SSH from your host into the container without the iptables rule, since local connections use loopback. Good luck!
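One container-side gotcha worth adding (a general Docker networking point, not specific to the answer above): a service that binds only to 127.0.0.1 inside the container is unreachable through any -p mapping; it must bind to 0.0.0.0. A minimal sketch:

```python
import socket

# Inside a container, bind to all interfaces (0.0.0.0); a service bound
# only to 127.0.0.1 cannot be reached through Docker's -p port mapping.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 0))  # port 0 here just grabs a free port for the demo
srv.listen(1)
port = srv.getsockname()[1]
```

In a real container you would bind the fixed port you publish with -p (e.g. 8008) rather than letting the OS pick one.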
TL;DR: You can use the special hostname host.docker.internal instead of localhost anywhere inside the container that you want to access localhost on the host. Note that:
macOS and Windows versions of Docker Desktop have this feature enabled by default.
Linux hosts (using Docker v 20.10 and above - since December 14th 2020) require you to add --add-host=host.docker.internal:host-gateway to your Docker command to enable the feature.
Docker Compose on Linux requires you to add the following lines to the container definition:
extra_hosts:
  - "host.docker.internal:host-gateway"
Full answer: Is the host running macOS or Windows? Buried in the documentation for Docker Desktop is the fact that there is no docker0 bridge on macOS or on Windows. Apparently that's the cause of this. In both cases the workaround (given right after, in a subsection titled "Use cases and workarounds") is to use the special hostname host.docker.internal in place of localhost anywhere inside the container that you want to access localhost on the host.
If the host is Linux, there are some Linux-only techniques for achieving this. However, host.docker.internal is also usable with a Linux host, but it has to be enabled first. See the Linux part of the TL;DR above for instructions.
By this method, in OP's case host.docker.internal:8008 should be used instead of localhost:8008. Note that this is a code or configuration change of the application code running inside the container. There is no need to mention the port in the container configuration. Do not try to use -p or --expose in the docker run commandline. Not only is it not necessary, but your container will fail to start if the host application you want the container to connect to is already listening on that port.
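In application code, the switch described above might look like this sketch (the /.dockerenv file is a common but not guaranteed heuristic for detecting a container; daemon_host is a hypothetical helper, not a Docker API):

```python
import os

def daemon_host() -> str:
    """Pick the hostname for reaching a service on the Docker host."""
    # /.dockerenv exists inside most Docker containers (a heuristic,
    # not an official API); outside a container, localhost is fine.
    if os.path.exists("/.dockerenv"):
        return "host.docker.internal"
    return "localhost"

# e.g. connect to f"{daemon_host()}:8008" instead of "localhost:8008"
```

Hard-coding host.docker.internal in the containerized code is simpler if the code only ever runs in a container.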
After checking the answers and doing some investigation, I believe there are two ways of doing this, and both only work in a Linux environment.
The first is in this post: How to access host port from docker container.
The second is to set --network=host when you docker run or docker container create. In this case, your container will use the same network interface as your host.
However, neither way works on a Mac, so I think it is not possible to forward from the container to the host in a Mac environment. Correct me if I am wrong.
I'm not sure you can do that just with Docker's settings.
If my understanding is correct, exposing a port is not what you're looking for.
Instead, establishing SSH port forwarding from the container to the host might be the answer.
You can easily use -p 127.0.0.1:8008:8008 and forward the container's port 8008 to localhost's port 8008. An example docker command would be:
docker run -it -p 127.0.0.1:8008:8008 <image name>
If you're doing this on your local machine, you can simply specify the network type as host when starting your container (--network host), which makes your host machine share its network with the container.
eg:
Start your container:
docker run -it --rm --network host <container>
On your host machine, run:
python3 -m http.server 8822
Now from your container run:
curl 127.0.0.1:8822
If all went well you'll see traffic on your host terminal.
Serving HTTP on 0.0.0.0 port 8822 (http://0.0.0.0:8822/) ...
127.0.0.1 - - [24/Jan/2023 22:37:01] "GET / HTTP/1.1" 200 -
docker run -d --name <NAME OF YOUR CONTAINER> -p 8008:8008 <YOUR IMAGE>
That will publish port 8008 in your container to 8008 on your host.

How do I know mapped port of host from docker container?

I have a Docker container running where I have mapped port 8090 of the host to port 8080 of the container (running a Tomcat server). Is there any way to get the mapped port information from inside the container?
i.e. is there any way for the container to know about the 8090:8080 mapping?
When you link containers, Docker sets environment variables that one container can use to tell how to communicate with another. You can manually do something similar to let your container know about the host's mapping:
export HOST_8080=8090
docker run -p $HOST_8080:8080 -e "HOST_8080=$HOST_8080" --name my_docker_name my_docker_image /bin/bash -c export
Explanation:
export HOST_8080=8090 defines an environment variable on your host (so you won't have to write "8090" twice).
-p $HOST_8080:8080 maps port 8090 on the host to 8080 in the container.
-e "HOST_8080=$HOST_8080" defines an environment variable inside the container, called HOST_8080, with the value 8090.
/bin/bash -c export just prints the environment variables so you can see this actually working. Replace it with your image's CMD.
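Inside the container, the application can then read the mapping back. A minimal sketch in Python, assuming the HOST_8080 variable from the command above:

```python
import os

# The -e "HOST_8080=$HOST_8080" flag above makes the host-side port
# visible inside the container as an environment variable.
host_port = int(os.environ.get("HOST_8080", "8080"))  # fall back to 8080
print(f"container port 8080 is published on host port {host_port}")
```

The container still cannot discover the mapping on its own; it only knows what you explicitly pass in.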
