How to use the Rancher container name inside the application code running within the container - Node.js

I have a Node.js application running inside a Docker container orchestrated by Rancher, and I'm using AppDynamics to monitor the service. For that we need to add a few configuration parameters in the main JS file. One of those parameters has to be unique for each of the containers running for this service, so I would like to pass the Rancher container name as that configuration value.
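For illustration, here is a rough sketch of two ways to obtain a per-container identifier from inside the container and feed it to the monitoring configuration. The rancher-metadata URL assumes the Rancher 1.x metadata service is reachable on the managed network (an assumption about your Rancher setup); the fallback relies on Docker defaulting the hostname to the container ID.

```javascript
const os = require('os');
const http = require('http');

// Fallback: inside Docker the hostname defaults to the (short) container ID,
// which is already unique per running container.
let containerName = os.hostname();

// Assumption: Rancher 1.x exposes a metadata service at "rancher-metadata"
// that returns the container name as plain text.
http.get('http://rancher-metadata/latest/self/container/name', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    if (res.statusCode === 200 && body) containerName = body.trim();
    console.log('unique node name:', containerName);
    // ...pass containerName into the AppDynamics agent configuration here...
  });
}).on('error', () => {
  console.log('metadata service not reachable, using hostname:', containerName);
});
```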

Related

NodeJs + Puppeteer on Azure App Services fails to run

I've written a simple Node.js (Express) server that uses Puppeteer to generate PDF files from JSON data passed to it. While locally everything works like a charm, I'm struggling to run this server on Azure App Services.
I've created a Resource Group, and within it an App Services instance (running on Linux) that is connected to my repo on Azure DevOps (via the Deployment Center).
My server has two endpoints:
/ - returns a JSON - { status: "ok" }. I'm using this to validate the running server instance.
/generate-pdf - uses the Puppeteer to generate and return a PDF file.
After successfully starting the App Service instance I'm able to access the "/" route and get a valid response but upon accessing the "/generate-pdf" route the result is "502 - Bad Gateway".
Does my instance require some additional configuration that I haven't done?
Is App Services unable to run Puppeteer? Perhaps there is a different Azure service I need to use?
Is there a way to automate the process via the Azure DevOps pipeline or release?
Any questions/thoughts/FAQs are more than welcomed. Thanks =)
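For context, a stripped-down sketch of what such a server typically looks like (this is illustrative, not the asker's actual implementation):

```javascript
const express = require('express');
const puppeteer = require('puppeteer');

const app = express();
app.use(express.json());

// "/" - health check used to validate the running instance.
app.get('/', (req, res) => res.json({ status: 'ok' }));

// "/generate-pdf" - renders the posted JSON and returns it as a PDF.
app.post('/generate-pdf', async (req, res) => {
  try {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setContent(`<pre>${JSON.stringify(req.body, null, 2)}</pre>`);
    const pdf = await page.pdf({ format: 'A4' });
    await browser.close();
    res.contentType('application/pdf').send(pdf);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(process.env.PORT || 3000);
```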
I'm answering my own question: as was mentioned here https://stackoverflow.com..., Azure App Services does not allow the use of GDI (which is required by Chrome), regardless of whether you're using a Linux or Windows based system. The solution was to put the Node.js application into a Docker container and manually install Chrome. Once you have a container, just upload it to Azure App Services and voilà!
By default App Services exposes ports 80 and 443, so if your application listens on a different port, be sure to specify it via the WEBSITES_PORT environment variable.
In my case, I had to upload the Docker image to Docker Hub, but you can also set up a pipeline to automate the process.
I built the Docker image on my M1 Pro, which led to some architecture issues when the container was uploaded to Azure. Be sure to add the --platform linux/amd64 flag to the image-building step if you're building for Linux.
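When Chrome is installed manually inside the image, the Puppeteer launch call usually has to point at that binary and run without the sandbox (a common requirement when the container runs as root). A hedged sketch; the executable path and the CHROME_PATH variable are assumptions that depend on how your Dockerfile installs Chrome:

```javascript
const puppeteer = require('puppeteer');

async function launchBrowser() {
  return puppeteer.launch({
    // Path of the manually installed Chrome/Chromium inside the image
    // (assumption: adjust to wherever your Dockerfile installs it).
    executablePath: process.env.CHROME_PATH || '/usr/bin/chromium-browser',
    // Running as root inside a container usually requires disabling the sandbox.
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
}

module.exports = { launchBrowser };
```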

How to link 2 Docker containers, one with backend code and the other with a React build, both running on Node.js

I have 2 Docker containers running: one with Node, which is my backend code, and another with a React build served by Node. How do I link the containers so that my React API calls are fulfilled by my Node backend container?
I'm running the containers on ECS and routing with an ALB; both containers are in different target groups but in the same cluster.
Actual Result:-
Both containers are running individually, but there is no response to the API calls made from React.
Expected Result:-
API requests should be fulfilled using the other node container.
There is no way to link two different task definitions; you should run both containers in one task definition. You can only link containers within a single task definition. If there are two different task definitions with two different target groups, you need to use service discovery or an internal load balancer instead.
If you configure both containers in the same task definition, you can refer to and access the other container using its container name along with the port. But I would not suggest Docker linking on AWS; I don't think such an approach is good for production.
One of the biggest disadvantages is that if you want to scale one container in the service, all the containers in the task definition will be scaled together.
You can link them this way: go to the task definition -> Container -> Networking and add the name of the second container there.
You can explore service discovery and load balancers further via the links here and here.
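For the case where both containers sit in one task definition and are linked, server-side code (not the React code running in the browser) can reach the other container by its container name and port. A minimal sketch, assuming the backend container is named "backend", listens on 8080, and exposes a hypothetical /api/health route:

```javascript
const http = require('http');

// Assumption: the backend container is linked as "backend" in the same
// task definition and listens on port 8080.
http.get('http://backend:8080/api/health', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log('backend responded:', body));
}).on('error', (err) => console.error('backend unreachable:', err.message));
```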

Docker compose, heroku, hostname links and production deployment

I currently have a simple app consisting of a few microservices (database, front-end Node app, user services, etc.), each with its own Dockerfile, and a docker-compose.yml file to bring them all up in a local deployment environment. Everything works fine with docker-compose up.
For production, I was looking at Heroku (though I'm open to other PaaS options), which does not support Docker Compose. Not especially nice, but I could live with it for now.
The thing is that with Docker Compose on a local deployment, the different services are linked via their hostnames automatically (if the Mongo database service is called "mydatabase", I can use mongodb://mydatabase/whatever from within my other services).
So, the question is, what happens with those links on Heroku? What are the best practices to have the different services linked consistently between development and production in this case?
Thanks!
Docker Compose creates a Docker virtual network which allows you to connect the containers using the container name as a hostname. Heroku doesn't directly support docker-compose, as Docker Compose is really intended for local development on your own machine and not for production.
For production, Docker has Docker Swarm, which is very similar to Docker Compose but is intended for production environments. You can use the same docker-compose file (called a stack file in Swarm) to deploy on Swarm.
In Docker Swarm, you can connect the containers using the same service name, just as you would in docker-compose.
Heroku supports Docker Swarm via the DockerHero add-on, which you can use to have your Docker containers connected and running on Heroku.
In case anyone else comes across this in their current searches for solutions, Heroku offers an approach using a file similar to docker-compose.yml, called heroku.yml. You simply put it in the root of your project and structure it to call your Dockerfiles: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml
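Whichever platform ends up serving production, the usual way to keep the hostname linking consistent is to make the connection string configurable and default to the Compose service name locally. A small sketch; MONGO_URL is an illustrative variable name, not something Heroku provides automatically:

```javascript
// Locally, "mydatabase" resolves over the docker-compose network;
// in production the full URL comes from a config var / environment variable.
const mongoose = require('mongoose');

const mongoUrl = process.env.MONGO_URL || 'mongodb://mydatabase/whatever';
mongoose.connect(mongoUrl);
```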

Configuration for application in docker container on cluster

How can I deploy an application in a Docker container on a cluster of machines and configure that application with settings like the database username and password and other application-specific settings, without putting the settings in the container as a config file and without placing the settings on the machine (because the machine is recyclable)? Environment variables are not an option either, because they are visible in logs and not really suited for passwords and private keys IMO.
The application is a Node.js application; when developing, I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application will run behind a load balancer dividing the load between the instances.
What you are looking for is Docker Swarm, which is responsible for running and managing containers on a cluster of machines. Docker Swarm has a very nice feature for securing configuration such as passwords, called Docker secrets.
You can create docker secrets for usernames and passwords, and those secrets will be shared among the containers in the cluster in an encrypted and secure way.
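Inside a container, a secret created with docker secret create shows up as a file under /run/secrets/. A minimal sketch of reading one from Node.js, assuming a secret named db_password has been attached to the service:

```javascript
const fs = require('fs');

// Reads a Docker secret mounted by Swarm at /run/secrets/<name>;
// falls back to an environment variable for local development.
function readSecret(name) {
  try {
    return fs.readFileSync(`/run/secrets/${name}`, 'utf8').trim();
  } catch (err) {
    return process.env[name.toUpperCase()];
  }
}

const dbPassword = readSecret('db_password');
```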

Azure web app for containers deploy with custom docker run command

Consider a Docker image containing a React UI and a Spring REST app. I'd like to deploy this to Azure Web App for Containers, where the URL for the instance hits the UI, which is served statically on port 5000, while the Spring app listens on 8080. The UI communicates with the Spring app via HTTP, so the requests made by the UI to the Spring app are issued from the user's machine (i.e. they can't reach the Spring app via localhost:8080). However, port 8080 is not mapped in the default run command. Another issue is that there is only one URL for the web app.
The default run command (found by logging in via FTP and examining the docker logs) is:
docker run -d -p <WEB_APP_PORT>:<UI_PORT> --name ... -e ... <IMG>
Can I run a custom docker run command to expose the UI_PORT and the SPRING_PORT and also set up one web app with two URLs?
If not, are there alternative solutions?
For context:
The final image is built by extending an image that contains only the Spring app (i.e. FROM openjdk:8-jdk-alpine) and installing Node and the UI.
An entrypoint.sh script starts both the UI and the Spring app.
The ports exposed in the image are 8080 and 5000.
A diagram of what I'm trying to achieve accompanied the original question.
No, you can't do what you want with "Azure Web App for Containers": that platform lets you run a single container image mapped to ONLY one URL, and you can ONLY expose web ports (80, 443) to the world, plus SSH (2222) to their internal "Kudu" service.
Being "purist", you are describing a "microservice stack", so you have to go with a full container orchestration, like "Azure Container Service" (AKS, using Kubernetes), or "Azure Service Fabric" (which looks it will be awesome when they reach their goals).
However, you could get it done by internally running a "mapping service", like an Nginx reverse proxy that sends "/" to the UI on localhost:5000 and "/api" to the Spring API on localhost:8080, or any of the techniques traditionally used for single-page-application "routing".
It's a decision between putting all your services inside a single container behind a single URL (a microservice stack in one container) or putting every process in its own container on a container orchestration platform (the former is cheaper in time and running cost, the latter is more "elegant" and flexible but requires more time to build out the management and is more expensive to run).
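Since the Nginx suggestion above is configuration rather than Node code, here is a rough Node.js equivalent of the same "mapping service" idea: a tiny reverse proxy listening on the single exposed port that forwards "/api" to the Spring app and everything else to the UI. The port numbers follow the question; this is a sketch, not a hardened proxy:

```javascript
const http = require('http');

const UI_PORT = 5000;                        // statically served React UI
const API_PORT = 8080;                       // Spring REST app
const PUBLIC_PORT = process.env.PORT || 80;  // the single port the web app exposes

http.createServer((req, res) => {
  // Route /api/* to the Spring app, everything else to the UI.
  const targetPort = req.url.startsWith('/api') ? API_PORT : UI_PORT;
  const upstreamReq = http.request(
    { host: 'localhost', port: targetPort, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstreamReq.on('error', () => {
    res.writeHead(502);
    res.end('Bad gateway');
  });
  req.pipe(upstreamReq);
}).listen(PUBLIC_PORT);
```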
