unauthorized: authentication required error while creating docker nodejs image

I am new to Docker and trying to build a Docker image for my Node.js project.
This is my Dockerfile:
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8443
CMD ["node", "index.js"]
Command used to build the image:
docker build -t test-project .
It fails with unauthorized: authentication required after a few minutes:
Step 1/7 : FROM node:10
10: Pulling from library/node
76b8ef87096f: Extracting [====================> ] 18.81MB/45.38MB
2e2bafe8a0f4: Download complete
b53ce1fd2746: Download complete
84a8c1bd5887: Downloading [==============================> ] 30.38MB/49.79MB
7a803dc0b40f: Downloading [========> ] 34.34MB/214.3MB
b800e94e7303: Downloading
0da9fbf60d48: Waiting
04dccde934cf: Waiting
73269890f6fd: Waiting
unauthorized: authentication required
I had already authenticated before starting the build using
docker login -u <username>
Please help with this authentication error while building the Docker image.

A couple of resolutions to try:
Try a docker logout. It's possible your authentication with Docker Hub has expired and the expired token is being sent along with the pull request, even though these images are public and no token is needed.
Check for updates and update Docker.
Log out and log in again. (If you are working on Windows, verify with docker login.)
Check the host clock. It should be set to the current time.
If Docker is behind a proxy, try commenting out the proxy configuration, then run sudo systemctl daemon-reload and sudo systemctl restart docker (a sketch of the usual proxy drop-in follows this answer).
After you have done all of the above, try docker pull hello-world and check whether that works.
On Windows you need to make sure that:
You've shared your drive.
You've added COMPOSE_CONVERT_WINDOWS_PATHS with a value of 1 to your system variables.
Try logging out and back in from your terminal (on Docker for Windows the issue sometimes persists anyway).
Check your network connection. It can be a VPN issue.
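For the proxy step above, the override usually lives in a systemd drop-in. A minimal sketch, assuming the conventional path and a placeholder proxy address:
# Inspect Docker's proxy settings (path and proxy URL are assumptions;
# adjust to your environment)
cat /etc/systemd/system/docker.service.d/http-proxy.conf
#   [Service]
#   Environment="HTTP_PROXY=http://proxy.example.com:3128"
#   Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# Comment out the Environment lines (or fix the address), then reload:
sudo systemctl daemon-reload
sudo systemctl restart docker
docker pull hello-world   # verify that pulls work again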

Related

Fly.io Launch Issue: "Error failed to fetch an image or build from source: error building: error during connect: Post..."

I am new to Docker and Fly.io, and I am trying to get a very basic Node.js backend hosted, but I am running into an error. You can see my repo here. Locally, I've added a Dockerfile to backend/ that looks like this:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 5000
CMD ["node", "app.js"]
Then, in WSL2, I ran docker build . and docker run -dp 5000:5000 [image ID]. The backend shows up at localhost:5000 and feeds data to the frontend correctly.
But when I run flyctl launch, it keeps giving me Error failed to fetch an image or build from source: error building: error during connect: Post "http://[a very long URL]": EOF.
Someone suggested that the auto-generated fly.toml defaulting to internal_port = 8080 was the issue, so I tried changing it to match Express and Docker with 5000, but got the same error.
Just in case: I have a bad Internet connection, and I don't know if that could be the problem (a timeout?).
Can someone help me?
I was able to fix this, and I'm posting it here for others who had the same issue.
I have read that you need a host of '0.0.0.0' in app.listen, so that was already there (it didn't solve my issue, but maybe it will solve someone else's). Then I:
destroyed the attempted Fly build
deleted the fly.toml
deleted the Docker container and image
changed my backend port to 8080 across the project (the auto-generated fly.toml sets internal_port = 8080 no matter what the Dockerfile says)
remade the Docker container and image
ran flyctl launch again
My guess is that just changing the 5000 to 8080 in a fly.toml left over from a failed build was not enough. It needed to be correct from the start.
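For reference, that sequence as commands; the app, container, and image names are placeholders, and flyctl subcommands can differ between versions:
# Destroy the failed Fly app and its generated config (names are hypothetical)
flyctl apps destroy my-backend
rm fly.toml
# Remove the local container and image from the earlier build
docker rm -f my-backend-container
docker rmi my-backend-image
# After switching the app to port 8080 (e.g. app.listen(8080, '0.0.0.0')),
# rebuild and relaunch
docker build -t my-backend-image .
flyctl launch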

How to build a Node-based dockerized Jenkins slave that can run a Node-based project through the Jenkins master

I have successfully enabled the Docker API, which I can connect to from Jenkins. Now I'm trying to create slave agents dynamically: I want 100 active Docker slave agents that can immediately pick jobs from the queue and execute them.
I am trying to create a Node-based Docker image that can act as the slave agent; the image file looks like this:
Image Context:
FROM node:15.12.0-alpine3.10
RUN mkdir -p /home/achu/nodeSlave
# print the npm version when the container starts
CMD ["npm", "--version"]
Output:
[ArrchanaMohan#devops-monitoring-achu ~]$ sudo docker build -t docker-slave-nodes:1.0 .
Sending build context to Docker daemon 7.368GB
The GB count keeps increasing, and I never see a build-success message. I'm very new to the Docker world, and I'm not sure whether I'm doing the right things.
Can someone please help me resolve this issue?
Thanks in advance.
You need to use a Jenkins slave image and install Node on it. Here is one of them:
FROM openshift/jenkins-slave-base-centos7:v3.11
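Separately, the ever-growing "Sending build context to Docker daemon" counter suggests the build is running from a directory full of unrelated files (here, a home directory): docker build . ships the entire current directory to the daemon before the first step runs. A sketch of the usual fix; the directory name is a placeholder:
# Build from a directory that contains only what the image needs
mkdir node-slave && cp Dockerfile node-slave/ && cd node-slave
sudo docker build -t docker-slave-nodes:1.0 .
# Or keep the current directory and exclude the noise with a .dockerignore
# ('*' ignores everything, '!Dockerfile' re-includes the Dockerfile):
printf '%s\n' '*' '!Dockerfile' > .dockerignore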

docker RUN/CMD is possibly not executed

I'm trying to build a Dockerfile in which I first download and install the Cloud SQL Proxy before running Node.js.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the image, I don't get any errors. Also, the file serviceaccount.json is included and is found.
When I run the image and check the logs, I see that the connection in my Node.js app is refused, so there must be a problem with the Cloud SQL Proxy. I also don't see any output from the Cloud SQL Proxy in the logs, only from the Node.js app. When I create a VM and install both packages separately, it works; I get output like "ready for connections".
So somehow my Dockerfile isn't correct, because the Cloud SQL Proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my Dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the sleep 5, it does not work, because the server is already running before the cloud_sql_proxy connection is established. With sleep 5 it works.
Is there any other/better way to wait until the first command has completely finished?
RUN commands are used to do things that change the image's file system, like installing packages. They are not meant to start a process that runs when you launch a container from the resulting image, which is what you are trying to do. A Dockerfile only builds a static container image; when you run that image, only the CMD instruction (node index.js) is executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of the CMD instruction, as in the sketch below.
See Run multiple services in a container
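To also answer the "better than sleep 5" question: instead of a fixed delay, the wrapper can poll the proxy's port until it accepts connections. A sketch, assuming the base image ships bash (the Debian-based node:13 does) and reusing the flags from the question:
#!/bin/bash
# wrapper.sh: start the proxy, wait for its port, then run the app
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 \
  -credential_file=serviceaccount.json &
# poll until something is listening on 5432 instead of sleeping blindly
until (echo > /dev/tcp/127.0.0.1/5432) 2>/dev/null; do
  sleep 0.5
done
exec node index.js   # replace the shell so node receives signals directly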
You should ideally have a separate container per process. I'm not sure exactly what cloud_sql_proxy does, but you can probably run it in its own container and run your node process in another, linked over a Docker network if required.
You can use docker-compose to manage, start, and stop these multiple containers with a single command. docker-compose also takes care of setting up the network between the containers automatically. You can also declare that your node app depends on the cloud_sql_proxy container, so that docker-compose starts cloud_sql_proxy first and then starts the node app.
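A rough sketch of that layout, written from the shell; the proxy image tag and flag syntax are assumptions that vary by version, and the app must then reach the proxy at host sql-proxy rather than localhost:
# Generate a minimal docker-compose.yml (image tag and flags are assumptions)
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
    command: /cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:0.0.0.0:5432 -credential_file=/config/serviceaccount.json
    volumes:
      - ./serviceaccount.json:/config/serviceaccount.json
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - sql-proxy
EOF
docker-compose up   # starts sql-proxy first, then the app
Note that depends_on only orders container startup; the app may still need its own retry logic until the proxy is actually ready.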

NodeJS in Docker doesn't see connection

I have a NodeJS/Vue app that I can run fine until I try to put it in a Docker container.
When I do npm run dev I get the output:
listmymeds#1.0.0 dev /Users/.../projects/myproject
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js
and then it builds many modules before giving me the message:
DONE Compiled successfully in 8119ms
I Your application is running here: http://localhost:8080
then I am able to connect via browser at localhost:8080
Here is my Dockerfile:
FROM node:9.11.2-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm run dev
EXPOSE 8080
I then create a Docker image with docker build -t myproject . and see the image listed via docker images.
I then run docker run -p 8080:8080 myproject and get a message that my application is running at localhost:8080.
However, when I use either a browser or Postman to GET localhost:8080, there is no response.
Also, when I run the container from the command line, it appears to lock up, so I have to close the terminal. Not sure if that is related or not, though...
UPDATE:
I tried following the Docker logs with
docker logs --follow <container_name>
and there is nothing there other than the last line saying that my application is running on localhost:8080.
This would seem to indicate that my HTTP requests never make it into the container, right?
I also tried the suggestion to
CMD node_modules/.bin/webpack-dev-server --host 0.0.0.0
but that failed to even start.
It occurred to me that perhaps there is a Docker network issue, perhaps left over from an earlier attempt at learning the Kong API. So I ran docker network ls and see:
NETWORK ID NAME DRIVER SCOPE
1f11e97987db bridge bridge local
73e3a7ce36eb host host local
423ab7feaa3c none null local
I have been unable to stop, disconnect, or remove any of these networks. I think the 'bridge' network might be one Kong created, but it won't let me remove it. There are no other containers running, and I have deleted all images other than the one I am using here.
Answer
It turns out that I had this in my config/index.js:
module.exports = {
  dev: {
    // Various Dev Server settings
    host: 'localhost', // changing this to '0.0.0.0' was the fix (see below)
    port: 8080,
    // ...
Per Joachim Schirrmacher's excellent help, I changed host from 'localhost' to '0.0.0.0', and that allowed the container to receive requests from the host.
With a plain vanilla express.js setup this works as expected, so it must have something to do with your Vue application.
Try the following steps to find the source of the problem:
Check if the container is started or if it exits immediately (docker ps).
If the container runs, check that the port mapping is set up correctly. It needs to be 0.0.0.0:8080->8080/tcp.
Check the logs of the container (docker logs <container_name>).
Connect to the container (docker exec -it <container_name> sh) and check that node_modules exists and contains all your dependencies.
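The first three checks as a quick sequence (the container name is a placeholder):
docker run -dit -p 8080:8080 --name myproject_dev myproject
docker ps --format '{{.Names}} {{.Ports}}'   # expect 0.0.0.0:8080->8080/tcp
docker logs --follow myproject_dev           # watch for the dev-server banner
curl -sv http://localhost:8080 >/dev/null    # does the request reach the container?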
EDIT
Seeing the latest change to your question, I recommend starting the container with the -dit options: docker run -dit -p 8080:8080 myproject. This makes it run in the background, so you don't need to hard-stop it by closing the terminal.
Make sure that only one container of your image runs by inspecting docker ps.
EDIT2
After discussing the problem in chat, we found that the Vue.js configuration restricted the host to 'localhost'. After changing it to '0.0.0.0', connections from the container's host system are accepted as well.
With Docker version 18.03 and above it is also possible to set the host to 'host.docker.internal' to prevent connections other than from the host system.

Docker - no such file or directory

I'm receiving an error from Docker when I run my Dockerfile. It says /var/lib/docker/aufs/layers/xxxx: no such file or directory when I run docker build .
I have tried numerous ways to remove containers and images, so I'm pretty much stuck on this one.
Any help is appreciated.
The Dockerfile is:
FROM node:6
RUN git clone https://github.com/preboot/angular2-webpack.git
WORKDIR angular2-webpack
RUN sed -i.bak 's/--port 8080/--host 0.0.0.0 --port 8080/' package.json
RUN npm i
CMD [ "npm", "run", "start" ]
The complete console output is:
Sending build context to Docker daemon 9.728 kB
Step 1 : FROM node:6
6: Pulling from library/node
6a5a5368e0c2: Already exists
7b9457ec39de: Already exists
ff18e19c2db4: Already exists
6a3d69edbe90: Already exists
0ce4b037e17f: Already exists
82252a100d5a: Already exists
Digest: sha256:db245bde5445eb122d8dc090ba98539a9ef7f56c0ea981ade643695af0d8eaf0
Status: Downloaded newer image for node:6
---> 9873603dc506
Step 2 : RUN git clone https://github.com/preboot/angular2-webpack.git
open /var/lib/docker/aufs/layers/9319fd93cb6d6718243ff2e65ce5d2aa6122a1bb9211aa9f8e88d85c298727e5: no such file or directory
Edit
The issue was resolved thanks to BMitch's recommendation:
rm -rf /var/lib/docker/*
Uninstall Docker completely
Reinstall Docker
With that sort of corruption, I'd give a full Docker wipe a try: rm -rf /var/lib/docker/*. Before doing that, back up any data (volumes), then shut down Docker; afterwards you'll need to pull or rebuild all your images. If there are still problems with aufs, try changing the filesystem driver, e.g. to dockerd -s overlay2 in your service startup.
It doesn't hurt to check for common issues first, like running out of disk space or an old version of the application.
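That wipe, spelled out; it is destructive (it deletes all images, containers, and volumes) and the service commands assume a systemd host:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/*     # removes every image, container, and volume
sudo systemctl start docker
docker pull node:6                # re-pull base images, then rebuild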
Try building the image again on a clean machine or with the --no-cache flag; this seems like a caching issue.
Also, in my company we clone the code onto the machine building the image and then copy the code into the container. In my opinion that's a better solution, but I think it's a matter of taste.
The data files used by Docker are corrupted. You can execute the following commands:
1- If they exist, delete the container and image
docker rm <CONTAINER ID>
docker rmi <IMAGE ID>
2- Stop the Docker service (Ubuntu)
service docker stop
3- Start the Docker service (Ubuntu)
service docker start
4- Check Docker service status (Ubuntu)
service docker status
docker system prune -af
worked for me
