How to access a host Node.js app from a Docker Nuxt app?

I am trying to get my Nuxt app working on the production servers. On my local machine, the generated Docker image runs well and can reach the Node.js app running on localhost; the axios setting baseURL: 'http://127.0.0.1:6008/' works fine there. On the production servers I set up the Nuxt app with Docker the same way I tested it locally, yet the dockerized Nuxt app cannot reach the Node.js app on the host server. I can see this must be some kind of network setting issue.
In a Vue.js app, I usually set up a ProxyPass rule in the Apache web conf to match incoming backend queries and rewrite them to the localhost address:
ProxyPass /app/query http://localhost:6008/query
The nuxt.config file's axios setting looks like this:
axios: {
  baseURL: 'http://127.0.0.1:6008/',
  browserBaseURL: ''
},
Does Docker need additional settings, or should I configure Apache for this communication between my Docker container and a Node.js app running on the host under Apache/pm2?

localhost or 127.0.0.1 from within the container will not resolve to the server's localhost. Instead you need to specify the name of the Node.js service (if you are using docker-compose) or the Node.js container name (if you are just using docker run).
You can also try giving the IP of the server where Docker is running instead of 127.0.0.1.
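If the Node.js app must stay on the host (running under Apache/pm2) rather than moving into a container, a minimal sketch using the host.docker.internal approach (explained in an answer further down) could look like this; the service and image names (nuxt-app, my-nuxt-app) are placeholders:
docker-compose.yml
version: '3'
services:
  nuxt-app:
    image: my-nuxt-app                        # placeholder image name
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the host's gateway IP (Docker 20.10+)
With that in place, the axios baseURL would be http://host.docker.internal:6008/ instead of http://127.0.0.1:6008/.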

Related

Unable to communicate between docker and external service

I have a Docker container running on localhost:3000.
Also, I have a node app running on localhost:8081.
Now if I want to make a POST or GET request from localhost:3000 to localhost:8081, it's not working at all.
If I run the service as a binary (not in Docker) on localhost:3000, the same API requests work.
How do I communicate between them when using Docker?
Each container runs on its own bridge network, where localhost means the container itself.
To access the host, you can add the option --add-host=host.docker.internal:host-gateway to the docker run command. Then you can access the host using the host name host.docker.internal. For example, if you have a REST service on the host, your URL would look something like http://host.docker.internal:8081/my/service/endpoint.
The name host.docker.internal is the conventional name for the host, but you can use any name you like.
You need Docker version 20.10 or later for this to work.
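As a concrete sketch, assuming a frontend image named my-frontend, the node app listening on 8081 on the host, and curl available inside the image:
$ docker run -d --name frontend --add-host=host.docker.internal:host-gateway -p 3000:3000 my-frontend
$ # verify the host's node app is reachable from inside the container
$ docker exec frontend curl http://host.docker.internal:8081/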

Deploying a dynamic Next.js + Node.js application inside Docker using the Caddy server

I am currently developing a simple portfolio app, and my app structure is like this:
Next.js/client,
Node.js/server,
MongoDB/db
Next.js is hosted locally on port 3001 and the Node.js app on port 5000. Whenever Next.js needs to fetch an API, it calls the Node.js application. Everything is configured inside Docker. I am very new to deploying Next.js applications and have recently used the Caddy server, which has automatic HTTPS.
I am able to deploy the Next.js application statically using the commands:
next build
next export
The statically exported index.html file inside the Next.js application's out directory is pointed to the Caddy server on ports 80 and 443. A statically exported app doesn't support API routes, which I only recently came to know. I tried the next build and next start commands to generate a dynamic production build inside the .next directory. The main problem is: how do I point to my dynamically generated Next.js application in the Caddy configuration inside the Docker container? My present Caddy configuration looks like:
www.example.com:443 {
    tls xyz@email.com
    root * /srv
    route {
        reverse_proxy /api* api-server:5000
        try_files {path} {path}/ /index.html
        file_server
    }
}
I am looking for hints, especially related to the proxy server.
Thank you in advance.
I'm assuming the API URL in your frontend looks like http://localhost:5000 (based on your YouTube comment here), which won't work if you're accessing your dockerized app from a remote computer (in this case your computer, since I'm assuming your app is hosted). Try changing it to https://www.example.com:5000 and rebuild your image.
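For the dynamic build (next build followed by next start), one option is to let Caddy proxy everything that is not an /api call to the Next.js container. A sketch, assuming the Next.js container is named next-app and listens on port 3001:
www.example.com:443 {
    tls xyz@email.com
    route {
        reverse_proxy /api* api-server:5000
        reverse_proxy next-app:3001   # assumed container name for the next start process
    }
}
Since next start serves the pages itself, the root, try_files, and file_server directives from the static setup are no longer needed.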

Docker request to own server

I have a Docker instance running Apache on port 80 and Node.js + Express running on port 3000. I need to make an AJAX request from the Apache-served website to the Node server running on port 3000.
I don't know what the appropriate URL to use is. I tried localhost, but that resolved to the localhost of the client browsing the webpage (the end user) instead of the localhost of the Docker image.
Thanks in advance for your help!
First you should split your containers: it is good practice in Docker to have one container per process.
Then you will need some tool for orchestration of these containers. You can start with docker-compose, which is IMO the simplest one.
It will launch all your containers and manage their network settings for you by default.
So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: apache
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node in your network. So from inside your node application, you will see apache as the apache host.
Just launch it with docker-compose up
make an AJAX request from the [...] website to the node server
The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this.
If you can get away with only sending links without hostnames, like <img src="/assets/foo.png">, that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, in exactly the same way you would if you were running the two services directly on the host without Docker.
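One way to keep hostname-less URLs working for API calls is to have Apache forward them to the Node container, along the lines of the ProxyPass rule in the first question. A sketch, assuming a compose service named node with Express listening on port 3000:
# in the Apache vhost config: the browser calls /api/..., Apache relays it to the container
ProxyPass /api http://node:3000
ProxyPassReverse /api http://node:3000
The browser then only ever talks to Apache, and Docker's internal DNS resolves node on the compose network.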

External access to a Node.js app within a Docker container

I have a Node app running within a Docker container, hosted on Elastic Beanstalk (single instance). The container has port 3000 exposed to access the app, and I can curl 172.17.0.32:3000/test from the host, which returns the expected response.
The problem I have is accessing this port externally using the Elastic Beanstalk URL, i.e.
http://XXXXXX-env.elasticbeanstalk.com:3000/test
This times out. Can anyone recommend how to gain access to this port externally?
Thanks
Check this for reference
http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk
See what your docker ps command returns.
The IP you have shared looks like the private IP address of the Docker service used for the internal network. You have to enable a bridge between your host and the Docker container by supplying -p 3000:3000 to the run command, and finally enable the app in your Elastic console.
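A minimal sketch of that bridge, assuming an image named my-node-app:
$ docker run -d -p 3000:3000 my-node-app
$ # the app is now published on the host's port 3000
$ curl http://localhost:3000/test
On Elastic Beanstalk the mapping is typically driven by the EXPOSE line in the Dockerfile and the environment's proxy/load balancer settings rather than a manual docker run, so treat this as the underlying idea rather than the exact deployment step.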

Send request from one docker container to another

I'm trying to move some existing servers to be housed within Docker containers. I have two, an app server and an API server, both developed with Node.js. I have them both working within an Ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain: the app server uses the app domain and the API server uses the api domain, clever I know. Locally I added both domains to my hosts file, pointing to the IP assigned to the Ubuntu VM.
The only issue I'm having is that there is a request sent from the app server that needs to be routed to the api server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM, but the request fails.
Is there a simple way to keep that request from going out to resolve the api domain, and have it point to the api container instead?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: run a Redis container with Redis bound to localhost, then run the redis-cli command and connect to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
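A minimal sketch of the network-driver approach for the app/api case above, assuming images named myapi and myapp:
$ docker network create mynet
$ docker run -d --name api --network mynet myapi
$ docker run -d --name app --network mynet myapp
$ # on a user-defined bridge, Docker's embedded DNS resolves "api" from inside "app"
Unlike --link, this keeps working when the api container is restarted, and it does not inject any environment variables.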
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
