heroku deploy docker image with github - node.js

I have a Node.js Express app serving a site. I deployed it with Heroku, using buildpack/nodejs and GitHub. Every time I push to GitHub, Heroku detects the push and runs the npm start script.
The problem is that I need to switch to a Docker image containing the Node.js app. I did that and it works locally: I can run it with docker run -d -p 8000:8000 exporter.
I added the heroku.yml file in the root folder and pushed to GitHub. But Heroku still runs the npm script in package.json, ignoring the heroku.yml.
Is there a way to make Heroku build the container from the Dockerfile every time I push to GitHub?

For Heroku to understand your heroku.yml file you need a few things.
First off, make sure the Dockerfile is in the root directory.
Second, ensure the heroku.yml defines both the build and run phases for the Docker environment.
Finally, make sure you set your Heroku stack to container.
So, given that, we want to ensure the directory tree looks like this:
|-my_app
  |-app_contents
  |-Dockerfile
  |-heroku.yml
  |-etc...
And that the heroku.yml file looks something like this:
build:
  docker:
    web: Dockerfile
run:
  web: docker run -d -p 8000:8000 exporter
and finally run this in your heroku repo:
heroku stack:set container
Then just make sure you push your changes up.
If this doesn't help, I would recommend updating your post with the following:
The file tree
The Dockerfile
The heroku.yml file

Thanks to Taylor Cochran's answer I managed to solve the problem.
I first tried to follow this link: https://devcenter.heroku.com/articles/container-registry-and-runtime
It worked, but I had to do it from the CLI.
After that I removed the entire project and remade it. I followed Taylor Cochran's instructions and pushed from the Heroku CLI. Once I saw it worked, I added the GitHub deploy. Now every time I push to GitHub, the new Docker container is automatically built and deployed by Heroku.
NB: I changed web: docker run -d -p 8000:8000 exporter to web: npm start
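For reference, the final heroku.yml would then look something like this (a minimal sketch; the run entry is executed inside the container, so it should be the app's start command rather than a docker run invocation):
build:
  docker:
    web: Dockerfile
run:
  web: npm start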

Related

Problem deploying MERN app with Docker to GCP App Engine - should deploy take multiple hours?

I am inexperienced with DevOps, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy them together rather than separately. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing gcloud and running gcloud app deploy, things kicked off; however, the deploy has now been sitting in my terminal window for the last several hours.
Hours seems like a higher order of magnitude of time than what seems right for deploying an application, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload (see gcloud topic gcloudignore for the format).
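For illustration, a minimal .gcloudignore for a project like this might look as follows (the exact entries depend on your layout; node_modules is usually the bulk of the upload):
# .gcloudignore
.git
.gitignore
node_modules/
client/node_modules/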
You may need to change the file structure of your current project.
Additionally, it might be worth researching the Standard nodejs10 runtime further. It uploads and starts much faster than the Flexible alternative (the custom env is part of App Engine Flex). You could then deploy each part to a different service.
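If you switch to the standard nodejs10 runtime, the Dockerfile and entrypoint.sh are no longer used and the app.yaml shrinks to a single line (a sketch, assuming your app reads its port from the PORT environment variable as App Engine requires):
# app.yaml
runtime: nodejs10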

Why is my Docker container not running my Nodejs app?

End goal: To spin up a docker container running my expressjs application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise.
This a very basic, front-end Expressjs application.
It runs fine using npm start – no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It was created, since I can inspect it, but it exited immediately after the run command. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the Dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
I have already searched the Docker documentation and on here a lot.
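One likely cause, judging from the Dockerfile above: COPY src . copies the contents of src into /app, so the entry file ends up at /app/index.js, while CMD node src/index.js looks for /app/src/index.js; Node exits immediately when it can't find the file, taking the container with it. Running docker logs portfolio-container should confirm the error. A sketch of a corrected Dockerfile that keeps the src/ layout:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
# copy src as a directory so the path in CMD resolves
COPY src ./src
EXPOSE 3000
CMD node src/index.js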

How to publish .Net core docker application from Windows to Linux machine?

I created a .NET Core application with Linux Docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can run it (a console application) with the following command:
docker run MyApp
I have another Linux machine with Docker installed. How do I publish the .NET Core application to the Linux machine? I need to publish and run the dockerized application on the Linux machine.
The Linux machine has the following Docker packages installed.
$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64 17.05.0.ce-1.el7.centos @dockerrepo
docker-engine-selinux.noarch 17.05.0.ce-1.el7.centos @dockerrepo
There are many ways to do this; just search for any CI/CD tool.
The easiest way is to do it manually: connect to your Linux server, do a git pull of the code, and then run the same commands that you run locally.
Another option is to push your Docker image to a container registry, then do a pull on your Docker server, and you are ready to go.
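As a sketch of the registry route, with myuser/myapp as hypothetical names:
# on the development machine
docker tag myapp myuser/myapp:latest
docker login
docker push myuser/myapp:latest
# on the Linux server
docker pull myuser/myapp:latest
docker run -d myuser/myapp:latest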
Edit:
You should really take a look at some CI service. For example, in our environment we use GitLab: when we push to master, a .gitlab-ci.yml builds the project and then pushes the image:
image: docker:latest
services:
  - docker:dind
stages:
  - build
api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN git.lagersoft.com:4567
    - docker push ${IMAGE_NAME}
With this we only need to do a pull on our server to get the latest version.
It's worth noticing that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool like GitLab, Jenkins, CircleCI, AWS CodePipeline...). If you are just starting to learn, I would recommend starting manually and then integrating a CI tool.
Edit 2
About the Visual Studio tooling: I would not recommend using it for anything other than local development, since it only works on Windows and only in Visual Studio (Rider integrated it only very recently). So, to do the deploy in a Linux environment we use our own Dockerfile and docker-compose files. They are based on the defaults anyway; they look something like this:
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]
This Dockerfile copies all the related projects (I hate the copying part, but it is the same as Microsoft's default file), then builds and publishes the app. On the other hand, we have a docker-compose file to add some services (these files must be in the solution folder to access all the related projects):
version: '3.4'
services:
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis
  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll
With these parts, all that is left is to do a commit, then a pull on the server, and run docker-compose up to start our app (you could do it from the Dockerfile directly, but it is easier and more manageable with docker-compose).
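Concretely, the recurring server-side step is just something like this (a sketch, assuming the repo is already cloned on the server):
git pull
docker-compose build
docker-compose up -d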
Edit 3
To make the deployment on the server we use two tools.
First, the GitLab CI runs after the commit is done.
It makes the build specified in the Dockerfile and pushes the image to our GitLab container registry; it would be the same if it were the container registry of Amazon, Google, Azure, etc.
Then it makes a POST request to the server in production; this server is running a special tool on a separate port.
The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner).
The script receives the request, checks the login, and if it is valid, simply does the pull from our GitLab container registry and runs docker-compose up.
Notes
The tool is not perfect; we are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from a CI integration and do the deploys directly. Whatever solution you choose, I recommend you start looking at how Kubernetes can help you. Sadly it is one more layer to learn, but it is very promising: you will be able to publish to almost any cloud or bare metal painlessly, with fallbacks, scaling, and other features.
Also
If you do not want to or cannot use a container registry (I strongly recommend the registry way), you can use the same tool: in the .sh that it executes, just do a git pull and then a docker build or docker-compose.
The simplest scenario would be to write a script yourself that connects to the server via ssh, uploads the files as a zip, and then runs everything on the server; a sketch follows. Remember, Ubuntu is in the Microsoft Store and could run this script, but the other solutions are more independent and scalable, so make your choice!
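A minimal sketch of that manual script, with the host and paths as placeholders:
#!/bin/sh
# package the app, copy it to the server, then build and run it there
zip -r myapp.zip . -x "*.git*"
scp myapp.zip user@my-linux-server:/opt/
ssh user@my-linux-server "cd /opt && unzip -o myapp.zip -d myapp && cd myapp && docker build -t myapp . && docker run -d myapp"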

How to release app to heroku with docker?

I have a Node.js application and I would like to run it on Heroku.
I use the Docker CLI with these commands:
docker build -t registry.heroku.com/my-app/web .
docker login --username=_ --password=MYTOKEN registry.heroku.com
docker push registry.heroku.com/my-app/web
All of these commands run fine, but my app is not released on Heroku.
What is wrong? Why is my app not released on Heroku?
I cannot use the Heroku CLI.
The Heroku Container Runtime won't release images on docker push. That action is required, but it only uploads images to the Heroku platform.
You need to use the heroku container:release command, or the Heroku API, to release those new images on your app.
See the Heroku Documentation about releasing docker images.
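Concretely, with the CLI the missing step after the push would be:
heroku container:release web --app my-app
Since the CLI is not an option here, the same release can be done through the Platform API's docker-releases variant: inspect the pushed image's id and PATCH the app's formation with it (a sketch based on the documented flow; substitute your own app name and API token):
docker inspect registry.heroku.com/my-app/web --format={{.Id}}
curl -X PATCH https://api.heroku.com/apps/my-app/formation \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
  -H "Authorization: Bearer MYTOKEN" \
  -d '{"updates":[{"type":"web","docker_image":"<image-id-from-docker-inspect>"}]}'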

Access private git repos via npm install in a Docker container

I am setting up a Docker container that pulls private repos from GitHub as part of the build. At the moment I am using an access token that I pass from the command line (this will change once the build gets triggered via Jenkins).
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
# Dockerfile
# Env Vars
ARG GITHUB_API_TOKEN
ENV GITHUB_API_TOKEN=${GITHUB_API_TOKEN}
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
This works fine and seems to be a secure way of doing this? (Though I need to check that the GITHUB_API_TOKEN var is only available at build time.)
I am looking to find out how people deal with SSH keys or access tokens when running npm install with dependencies that pull from GitHub:
"devDependencies": {
"my-repo": "git#github.com:org/my-repo.git",
"electron": "^1.7.4"
}
At the moment I cannot pull this repo, as I get the error Please make sure you have the correct access rights because I have no SSH keys set up in this container.
Use the multi-stage build approach.
Your Dockerfile should look something like this:
FROM alpine/git as base_clone
ARG GITHUB_API_TOKEN
WORKDIR /opt
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
FROM <whatever>
COPY --from=base_clone /opt/my-repo /opt
...
...
...
Build:
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
The GitHub API token secret won't be present in the final image.
docker secrets is a thing, but it's only available to containers that are part of a docker swarm. It is meant for handling things like SSH keys. You could do as the documentation suggests and create a swarm of 1 to utilize this feature.
docker-compose also supports secrets, though I haven't used them with compose.
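As an aside, if your Docker version supports BuildKit, build secrets are another way to keep the token out of every image layer, no swarm required (a sketch, assuming the token is stored in a local token.txt):
# syntax=docker/dockerfile:1
FROM alpine/git
WORKDIR /opt
# the secret is mounted only for this RUN step and never written to a layer
RUN --mount=type=secret,id=github_token \
    git clone https://$(cat /run/secrets/github_token)@github.com/org/my-repo
Build it with:
DOCKER_BUILDKIT=1 docker build --secret id=github_token,src=token.txt -t my-container .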
