Cannot access Node.js app deployed in a container on Bluemix - node.js

I created an image via the cf ic command line, exposed port 4000, installed nodejs and npm via apt-get, and deployed a Node.js app on the image that listens on port 4000.
I then created a container from this image, assigned a public IP to it, and ran it.
But I found that I cannot access the Node.js app at http://public_ip:4000.
When I log in to the container with cf ic exec -it container_id bash, I find that the Node.js app is running and I can reach it with curl http://localhost:4000/.
The error message is: net::ERR_CONNECTION_TIMED_OUT
Q: How can I access my Node.js app from outside the container?

Port 4000 is not open in the IBM Containers firewall.
Only a limited set of ports is exposed, so I suggest you try a different port, like 3000 or 5000. The complete list is not published for security reasons.
Alternatively, you can create a Container Group with a single container. In that case you can define a route (domain) for your container that will automatically forward all requests internally to your container's port (like 4000); see the sketch after the link below.
You can find more details about creating container groups in the Bluemix documentation:
https://console.ng.bluemix.net/docs/containers/container_creating_ov.html#container_group_ov
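A minimal sketch of the container-group approach, assuming the cf ic CLI of that era; the group name, hostname, domain, and image path are all placeholders:
# create a group of one container, publishing the app's port (4000)
cf ic group create --name myapp-group --desired 1 -p 4000 registry.ng.bluemix.net/mynamespace/myimage
# map a public route to the group; requests to it are forwarded to port 4000
cf ic route map --hostname myapp --domain mybluemix.net myapp-group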

Related

Unable to communicate between docker and external service

I have a Docker container running on localhost:3000.
I also have a Node app running on localhost:8081.
Now if I make a POST or GET request from localhost:3000 to localhost:8081, it doesn't work at all.
But if I run the service as a binary (not in a Docker container) on localhost:3000, the same API requests work.
How do I communicate between the two when using Docker?
Each container runs with its own network stack, where localhost means the container itself.
To access the host, you can add the option --add-host=host.docker.internal:host-gateway to the docker run command. Then you can access the host using the hostname host.docker.internal. E.g., if you have a REST service on the host, your URL would look something like http://host.docker.internal:8081/my/service/endpoint.
The name host.docker.internal is the common name to use for the host, but you can use any name you like.
You need Docker version 20.10 or later for this to work.
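A minimal sketch, assuming a service listening on the host's port 8081 and a hypothetical image name my-image:
# map the special hostname to the host's gateway IP (needs Docker 20.10+)
docker run --rm -it --add-host=host.docker.internal:host-gateway my-image
# inside the container, the host's service is then reachable as:
curl http://host.docker.internal:8081/my/service/endpoint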

Azure Container App fails to start with Julia image

I am trying to run Julia code in an Azure Container App, but it fails for some reason.
The Dockerfile is pretty simple:
FROM julia:1.7-bullseye
COPY . /
EXPOSE 8080
RUN julia --project='/' -e 'using Pkg; Pkg.build()'
ENTRYPOINT julia --project='/' main.jl "0.0.0.0" "8080"
"0.0.0.0" and "8080" there is the host and port on which HTTP is served:
HTTP.serve(GENERAL_ROUTER, host, port_restful_api)
The app is supposed to run a simple HTTP server on port 8080; GET / is the health check, which returns a simple "Up".
When I run that same image locally with docker run -p 8080:8080 my-test:latest, it works fine, and hitting localhost:8080 gives me the expected response.
But when I run it as a Container App, the revision fails to be provisioned (and no further errors are provided). In the ingress settings I allowed traffic from anywhere, and the target port is 8080...
What am I missing?
UPDATE
At the same time, in Log Analytics I can see my console output, which should indicate that the app is running fine, e.g.:
[ Info: API handler service: Running on thread 1
In Julia those messages are produced with
@info "API Server: Running on thread $(Threads.threadid())"
But in the logs they appear on the stderr stream. Could it be that ACA treats any output to stderr as a startup error?
UPDATE 2:
The very same code works without any issues or changes on Azure Container Instances and in Azure Kubernetes Service...
So for now I have stopped investigating and decided to use ACI.
You should verify whether any process inside the Julia image needs root privileges, as ACA does not support running privileged containers, per the docs:
Azure Container Apps can't run privileged containers. If your program attempts to run a process that requires root access, the application inside the container experiences a runtime error.
Did you specify 8080 as your target port when configuring the HTTP ingress of your Container App? https://learn.microsoft.com/en-us/azure/container-apps/ingress?tabs=bash
The platform will probe your app on the specified port because of the default health probes. If it receives no response, it will restart your app: https://learn.microsoft.com/en-us/azure/container-apps/health-probes?tabs=arm-template
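If you use the Azure CLI, the ingress target port can be inspected and set roughly like this (the app and resource-group names are placeholders):
# show the current ingress configuration
az containerapp ingress show -n my-julia-app -g my-rg
# enable external ingress targeting the container's port
az containerapp ingress enable -n my-julia-app -g my-rg --type external --target-port 8080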

Service inside container not receiving requests

I am running a container with:
docker run --env-file .env <container_id> -p 0.0.0.0:8080:8080
The service inside the container is a python flask service, built with the library connexion, and listening on 0.0.0.0:8080.
When I run docker ps, the PORTS column shows 8080/tcp, and I am not able to reach the service, e.g. from Swagger.
How can I reach the service successfully? Does 8080/tcp in the PORTS column mean that my container is not bound to any port on the host machine?
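One thing to check: docker run treats everything after the image name as the command to run inside the container, so the -p option above is likely never seen by Docker at all, and a bare 8080/tcp in PORTS does mean the port is exposed but not published to the host. A sketch with the options moved in front of the image (keeping the question's placeholder):
# options must come before the image; docker run actually takes an image name here
docker run --env-file .env -p 0.0.0.0:8080:8080 <container_id>
# docker ps should then show 0.0.0.0:8080->8080/tcp, i.e. the port is published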

External access to Node.JS app, within Docker container

I have a Node app running within a Docker container, hosted on Elastic Beanstalk (single instance). The container has port 3000 exposed to access the app inside it, and I can curl 172.17.0.32:3000/test from the host, which returns the expected response.
The problem I have is accessing this port externally using the Elastic Beanstalk URL, i.e.
http://XXXXXX-env.elasticbeanstalk.com:3000/test
This will time out. Can anyone recommend how to gain access to this port externally?
Thanks
Check this for reference
http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk
See what your docker ps command returns.
The IP you have shared looks like the private IP address of the Docker bridge network, used for internal traffic. You have to publish the container's port to the host by supplying -p 3000:3000 to the docker run command, and finally expose the app in your Elastic Beanstalk console.
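A minimal sketch, assuming a hypothetical image name my-node-app:
# publish container port 3000 on the host
docker run -d -p 3000:3000 my-node-app
# the app is then reachable via the host's address, not just the container's private IP
curl http://localhost:3000/test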

Send request from one docker container to another

I'm trying to move some existing servers to be housed within Docker containers. I have two, an app server and an API server, both developed with Node.js. I have them both working within an Ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain. The app server uses the app domain and the API server uses the api domain, clever I know. Locally I added both domains to my hosts file, pointing to the IP assigned to the Ubuntu VM.
The only issue I'm having is that a request sent from the app server needs to be routed to the API server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM, but the request fails.
Is there a simple way to keep that request from going out to resolve the api domain, and instead have it point to the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
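Inside the "app" container you could then reach the linked api through those injected variables, for example:
# uses the variables Docker injected for the link; the root path is just an example
curl "http://${API_PORT_80_TCP_ADDR}:${API_PORT_80_TCP_PORT}/"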
There are some disadvantages to the link option:
This only works for containers running on the same host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: running a Redis container with Redis binding to localhost, then running the redis-cli command and connecting to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
See the docs for details:
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
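For the original app/api question, the usual network-driver replacement for --link is a user-defined bridge network: containers on the same user-defined network can resolve each other by container name via Docker's embedded DNS. A minimal sketch with placeholder image names:
# create a user-defined bridge network
docker network create mynet
# run both containers on it; their names become DNS names on that network
docker run -d --name api --network mynet my-api-image
docker run -d --name app --network mynet my-app-image
# inside the app container, the api is then reachable by name, e.g.:
curl http://api:80/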