Docker Compose, Heroku, hostname links and production deployment - Node.js

I currently have a simple app consisting of a few microservices (database, front-end Node app, user services, etc.), each with its own Dockerfile, and a docker-compose.yml file to bring them all up in a local deployment environment. Everything works fine with docker-compose up.
For production I was looking at Heroku (though I'm open to other PaaS options), which does not support Docker Compose. Not especially nice, but I could live with it for now.
The thing is that with Docker Compose on a local deployment, the different services are linked via their hostnames automatically (if the MongoDB database service is called "mydatabase", I can use mongodb://mydatabase/whatever from within my other services).
So, the question is: what happens to those links on Heroku? What are the best practices for keeping the different services linked consistently between development and production in this case?
Thanks!

Docker Compose creates a Docker virtual network which allows you to connect the containers using the service name as a hostname. Heroku doesn't directly support docker-compose, as Docker Compose is really intended for local development on your own machine, not for production.
For production, Docker has Docker Swarm, which is very similar to Docker Compose but is intended for production environments. You can use the same docker-compose file (called a stack file in Swarm) to deploy on Swarm.
In Docker Swarm, you can connect your containers using the same service names, just as you would in Docker Compose.
Heroku supports Docker Swarm via the DockerHero add-on, which you can use to have your Docker containers connected and running on Heroku.
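For illustration, a minimal compose file of the kind described above might look like the sketch below (the service and image names are just examples based on the question). With Docker Swarm, the same file can be deployed via docker stack deploy, although the build: key is ignored there and a pre-built image has to be referenced instead.

```yaml
# docker-compose.yml sketch (hypothetical service and image names)
version: "3"
services:
  mydatabase:
    image: mongo
  web:
    build: .                     # the front-end Node app with its own Dockerfile
    environment:
      # other services reach Mongo simply by its service name on the Compose network
      MONGO_URL: mongodb://mydatabase/whatever
    ports:
      - "3000:3000"
    depends_on:
      - mydatabase
```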

In case anyone else comes across this while searching for solutions: Heroku offers an approach using a file similar to docker-compose.yml, called heroku.yml. You simply put it in the root of your project and structure it to reference your Dockerfiles: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml
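A minimal heroku.yml along those lines might look like the sketch below; the Dockerfile location and the start command are assumptions about the project layout, so check the linked article for the full format.

```yaml
# heroku.yml sketch - builds the web process image from a Dockerfile in the repo root
build:
  docker:
    web: Dockerfile
run:
  web: node server.js   # hypothetical start command
```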

Related

NodeJs + Puppeteer on Azure App Services fails to run

I've written a simple Node.js (Express.js) server that uses Puppeteer to generate PDF files from JSON data passed to it. While locally everything works like a charm, I'm struggling to run this server on Azure App Services.
I've created a resource group, and within it an App Service instance (running on Linux) that is connected to my repo in Azure DevOps (via the Deployment Center).
My server has two endpoints:
/ - returns JSON - { status: "ok" }. I'm using this to validate that the server instance is running.
/generate-pdf - uses Puppeteer to generate and return a PDF file.
After successfully starting the App Service instance, I'm able to access the "/" route and get a valid response, but accessing the "/generate-pdf" route results in "502 - Bad Gateway".
Does my instance require some additional configuration that I haven't done?
Can App Services not run Puppeteer? Perhaps there is a different service on Azure that I need to use?
Is there a way to automate the process via the Azure DevOps pipeline or release?
Any questions/thoughts/FAQs are more than welcome. Thanks =)
I'm answering my own question: as was mentioned here https://stackoverflow.com... Azure App Services does not allow the use of GDI (which is required by Chrome), regardless of whether you're on a Linux or Windows based system. The solution was to put the Node.js application into a Docker container and install Chrome manually. Once you have a container, just upload it to Azure App Services and voila!
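A rough sketch of such a Dockerfile is shown below. The base image, package list, and Puppeteer environment variables are assumptions that vary by Puppeteer version and distro, so treat it as a starting point rather than a tested recipe.

```dockerfile
# Dockerfile sketch: Node.js app with a system Chromium for Puppeteer
FROM node:18-slim

# Install Chromium and the shared libraries Chrome needs at runtime
RUN apt-get update \
    && apt-get install -y chromium fonts-liberation libnss3 libatk-bridge2.0-0 libgtk-3-0 \
    && rm -rf /var/lib/apt/lists/*

# Point Puppeteer at the system Chromium instead of downloading its own build
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```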
By default, App Services exposes ports 80 and 443, so if your application listens on a different port, be sure to specify it via the WEBSITES_PORT environment variable.
In my case, I had to upload the Docker image to Docker Hub, but you can also set up a pipeline to automate the process.
I built the Docker image on my M1 Pro, which led to some architecture issues when the container was uploaded to Azure. Be sure to add --platform linux/amd64 to the image-building step if you're building for Linux.
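As a sketch, the build-and-push step with the platform flag, plus setting the port, might look like this (the image name, resource group, and app name are placeholders, and the az flags assume a reasonably recent Azure CLI):

```bash
# Build for linux/amd64 even on an Apple Silicon machine, then push to Docker Hub
docker build --platform linux/amd64 -t <dockerhub-user>/pdf-service:latest .
docker push <dockerhub-user>/pdf-service:latest

# If the app listens on a port other than 80/443, set WEBSITES_PORT on the Web App
az webapp config appsettings set --resource-group <rg> --name <app-name> \
    --settings WEBSITES_PORT=3000
```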

Is it possible to deploy a devcontainer as a private ACI?

I've been attempting to deploy, and connect remotely to, an Azure Container Instance running in a private network in Azure (with a VPN set up).
I have no problem accessing the container using the ACI Docker context or directly using the exposed services (I have HTTP and VNC set up on the container).
However, the end goal is to use the container as a remote Visual Studio Code development container, with a Git repo mounted on the container.
I'm having trouble figuring out how to do this. From reading the docs, it almost seems as if setting up SSH would be the only way, but then it seems like I would have to set up my own Docker host instead of creating the container as an ACI.
Has anyone done this before? Is it possible?

Keycloak docker

How can I deploy the Keycloak Docker image to an Azure Container Instance?
The Keycloak Docker image provided by jboss/keycloak keeps restarting in an Azure Container Instance after deployment. I need help.
You don't need to push Keycloak to Azure Container Registry; you can use the jboss/keycloak Docker image directly. In my experience, restarts tend to happen because of a lack of resources or wrong configuration. Try to pick a bigger instance size.
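For reference, a hedged sketch of creating the container in ACI with a larger resource allocation might look like this (the resource group, DNS label, and credentials are placeholders; KEYCLOAK_USER and KEYCLOAK_PASSWORD are the admin variables the jboss/keycloak image expects):

```bash
# Give the container enough CPU/memory - Keycloak tends to restart when starved of resources
az container create \
  --resource-group <rg> \
  --name keycloak \
  --image jboss/keycloak \
  --cpu 2 --memory 4 \
  --ports 8080 \
  --dns-name-label <unique-label> \
  --environment-variables KEYCLOAK_USER=admin KEYCLOAK_PASSWORD=<password>
```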

How to deploy realm object server to cloud foundry

https://realm.io/docs/get-started/installation/developer-edition/
It seems like Realm Object Server is based on Node.js; however, I could not get it working.
So far I could deploy it to Cloud Foundry successfully, but it just doesn't work.
Realm Studio just hangs there.
Apps are deployed to Cloud Foundry with the cf push command, and they run inside a Linux container. The contents of the Linux container are put together by a buildpack. Since Realm is a Node app, you should look at the Node.js buildpack documentation: https://docs.cloudfoundry.org/buildpacks/node/node-tips.html
Apps running in a Cloud Foundry container have an ephemeral disk, so if your Realm application needs a persistent disk you should not run it in a container on Cloud Foundry, unless you have a service broker that can mount NFS volumes into the container.
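A minimal manifest.yml for such a cf push might look like the sketch below; the app name, memory size, and start command are assumptions about the project.

```yaml
# manifest.yml sketch for pushing a Node app with the Node.js buildpack
applications:
- name: realm-object-server
  memory: 1G
  buildpacks:
  - nodejs_buildpack
  command: npm start
```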

Can I deploy a docker container to Azure Webapp

I found a lot of resources online about using Docker with Azure virtual machines, but didn't find any on using Docker with Azure Web Apps. Is this possible?
Things are changing fast in the cloud. Since November 2016 it has been possible to run Docker containers on Linux Web Apps. You can read about it here: https://buildazure.com/2016/11/18/deploy-docker-containers-to-azure-web-apps-on-linux
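As a rough sketch, creating a Linux Web App that runs a container image can be done with the Azure CLI along these lines (the resource group, plan, app, and image names are placeholders, and the exact flag names may differ between CLI versions):

```bash
# Create a Linux App Service plan and a Web App that pulls its image from Docker Hub
az appservice plan create --resource-group <rg> --name <plan> --is-linux --sku B1
az webapp create --resource-group <rg> --plan <plan> --name <app-name> \
    --deployment-container-image-name <dockerhub-user>/<image>:latest
```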
No. Web Apps are a Platform-as-a-Service offering from Azure that uses bespoke packaging and a bespoke runtime; you can't just create a Web App, point it at a Docker image, and run it as a container.
If you want to run Docker on Azure, you can spin up a VM to use as a single host - you can use an Ubuntu image for Linux containers or a Windows Server 2016 image for Windows containers (currently in preview).
To create a Docker Swarm running on Azure you can use Azure Container Service or Docker for Azure.
No, because a Web App is a PaaS that Azure provides, which (to simplify it) lets you push a website together with its settings with ease.
But you can run a website in Docker on Azure.
Web Apps, running under App Service, is web-app-as-a-service. It has nothing to do with Docker; it has its own specific methods of code deployment built in, and it does not support Docker images.
That's not to say you cannot have Web Apps and Docker containers communicate with each other - you can certainly have, say, your database in a Docker container being called by your Web App.
