Dockerfile ADD tar.gz does not extract on Ubuntu VM with Docker - linux

I have a Docker image that I want to build. When I run the build on my Windows and Mac Docker installations it works fine and builds correctly, but if I run the same Dockerfile build on an Ubuntu server VM with Docker, I get an error.
The critical part of my Dockerfile is:
[...]
# Dependencies
RUN apt-get update && apt-get install -y apt-utils curl git tar gzip
# Install Go
ENV GO_VERSION 1.8
WORKDIR /tmp
ADD https://storage.googleapis.com/golang/go$GO_VERSION.linux-amd64.tar.gz ./
RUN mv go /usr/local/
[...]
But on the Ubuntu server VM it fails at the RUN mv go /usr/local/ step, producing the following error:
Step 10/24 : RUN mv go /usr/local/
---> Running in 6b79a20769eb
mv: cannot stat ‘go’: No such file or directory
So I suppose it does not extract the downloaded tar.gz (the download itself works). Does anyone have any idea?

This is a known issue with 17.06, patched in 17.06.1. The documented behavior of ADD with a remote URL is to download the tgz but not unpack it. Automatically unpacking the tgz was an unexpected change in behavior in 17.06, which was reverted back to only downloading the tgz in 17.06.1.
Release notes for 17.06 (see the note at the top): https://github.com/docker/docker-ce/releases/tag/v17.06.0-ce
Release notes for 17.06.1: https://github.com/docker/docker-ce/releases/tag/v17.06.1-ce
Issue: https://github.com/moby/moby/issues/33849
PR of Fix: https://github.com/docker/docker-ce/pull/89
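Applied to the Dockerfile in the question, a minimal sketch of the fix (assuming an engine with the documented behavior, i.e. the tgz is downloaded but not unpacked) is to extract the archive explicitly instead of relying on ADD:
# ADD only downloads the archive; it is not auto-extracted
ADD https://storage.googleapis.com/golang/go$GO_VERSION.linux-amd64.tar.gz ./
# Extract it ourselves, then move the resulting go directory into place
RUN tar -xzf go$GO_VERSION.linux-amd64.tar.gz \
 && mv go /usr/local/ \
 && rm go$GO_VERSION.linux-amd64.tar.gz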
Edit: to minimize the number of layers in your image, I'd recommend doing the download, unpack, and cleanup as a single RUN command in your Dockerfile. E.g. here are two different Dockerfiles:
$ cat df.tgz-add
FROM busybox:latest
ENV GO_VERSION 1.8
WORKDIR /tmp
ADD https://storage.googleapis.com/golang/go$GO_VERSION.linux-amd64.tar.gz ./
RUN tar -xzf go$GO_VERSION.linux-amd64.tar.gz \
&& rm go$GO_VERSION.linux-amd64.tar.gz
CMD ls -l .
$ cat df.tgz-curl
FROM busybox:latest
ENV GO_VERSION 1.8
WORKDIR /tmp
RUN wget -O go$GO_VERSION.linux-amd64.tar.gz https://storage.googleapis.com/golang/go$GO_VERSION.linux-amd64.tar.gz \
&& tar -xzf go$GO_VERSION.linux-amd64.tar.gz \
&& rm go$GO_VERSION.linux-amd64.tar.gz
CMD ls -l .
The build output is truncated here...
$ docker build -t test-tgz-add -f df.tgz-add .
...
$ docker build -t test-tgz-curl -f df.tgz-curl .
...
They run identically:
$ docker run -it --rm test-tgz-add
total 4
drwxr-xr-x 11 root root 4096 Aug 31 20:27 go
$ docker run -it --rm test-tgz-curl
total 4
drwxr-xr-x 11 root root 4096 Aug 31 20:29 go
However, doing the download, unpack, and cleanup in a single RUN saves you the roughly 90MB archive in your layer history:
$ docker images | grep test-tgz
test-tgz-curl latest 2776133659af 30 seconds ago 269MB
test-tgz-add latest d625455998ff 2 minutes ago 359MB

Related

smbnetfs - How to resolve Input/Output error while writing file to Windows Server share

I am using smbnetfs within a Docker container (running on Ubuntu 22.04) to write files from my application to a mounted Windows Server share. Reading files from the share works properly, but writing files via smbnetfs gives me a headache. My Haskell application crashes with an Input/output error while writing files to the mounted share; only 0KB files without any content are written. Apart from the application, I have the same problem if I try to write files from the container's bash terminal or from Ubuntu 22.04 directly. So I assume the problem is related to neither Haskell nor Docker. Therefore, let's focus on creating files via bash within a Docker container in this SO question.
Within the container I've tried several different ways to write files, some successful and some not:
This works:
touch <mount-dir>/file.txt => a 0KB file is generated. Editing the file afterwards with nano works properly.
echo "demo content" > <mount-dir>/file.txt works as well.
Creating directories with mkdir -p <mount-dir>/path/to/file/ also works without any problems.
These steps do not work:
touch <mount-dir>/file.txt => a 0KB file is generated properly.
echo "demo-content" >> <mount-dir>/file.txt => Input/output error
(Hint: note the redirection operators; see the sketch after this list.)
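My reading of that hint (an inference on my part, not stated in the original post): the two redirection operators open the file with different flags. ">" truncates, while ">>" opens in append mode, and the open flags 0x8401 in the debug log of Update 1 below include O_APPEND, so the append-mode open appears to be what smbnetfs chokes on:
# ">" opens the file with O_WRONLY|O_CREAT|O_TRUNC -- works on the share
echo "demo content" > <mount-dir>/file.txt
# ">>" opens the file with O_WRONLY|O_CREAT|O_APPEND -- fails with Input/output error
echo "demo-content" >> <mount-dir>/file.txt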
Configuration
My configuration follows:
smbnetfs
smbnetfs.conf
...
show_$_shares "true"
...
include "smbnetfs.auth"
...
include "smbnetfs.host"
smbnetfs.auth
auth "<windows-server-fqdn>/<share>" "<domain>/<user>" "<password>"
smbnetfs.host
host <windows-server-fqdn> visible=true
Docker
Here is the Docker configuration.
Docker run arguments:
...
--device=/dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
...
Dockerfile:
FROM debian:bullseye-20220711-slim@sha256:f52f9aebdd310d504e0995601346735bb14da077c5d014e9f14017dadc915fe5
ARG DEBIAN_FRONTEND=noninteractive
# Prerequisites
RUN apt-get update && \
apt-get install -y --no-install-recommends \
fuse=2.9.9-5 \
locales=2.31-13+deb11u3 \
locales-all=2.31-13+deb11u3 \
libcurl4=7.74.0-1.3+deb11u1 \
libnuma1=2.0.12-1+b1 \
smbnetfs=0.6.3-1 \
tzdata=2021a-1+deb11u4 \
jq=1.6-2.1 && \
rm -rf /var/lib/apt/lists/*
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Copy runtime artifacts
WORKDIR /app
COPY --from=build /home/vscode/.local/bin/Genesis-exe .
COPY entrypoint.sh .
## Prepare smbnetfs configuration files and create runtime user
ARG MOUNT_DIR=/home/moduleuser/mnt
ARG SMB_CONFIG_DIR=/home/moduleuser/.smb
RUN useradd -ms /bin/bash moduleuser && mkdir ${SMB_CONFIG_DIR}
# Set file permissions so that smbnetfs.auth and smbnetfs.host can be created later
RUN chmod -R 700 ${SMB_CONFIG_DIR} && chown -R moduleuser ${SMB_CONFIG_DIR}
# Copy smbnetfs.conf and restrict file permissions
COPY smbnetfs.conf ${SMB_CONFIG_DIR}/smbnetfs.conf
RUN chmod 600 ${SMB_CONFIG_DIR}/smbnetfs.conf && chown moduleuser ${SMB_CONFIG_DIR}/smbnetfs.conf
# Create module user and create mount directory
USER moduleuser
RUN mkdir ${MOUNT_DIR}
ENTRYPOINT ["./entrypoint.sh"]
Hint: The problem is not related to Docker, because I have the same problem on Ubuntu 22.04 directly.
Updates:
Update 1:
If I start smbnetfs in debug mode and run the command echo "demo-content" >> <mount-dir>/file.txt the following log is written:
open flags: 0x8401 /<windows-server-fqdn>/share/sub-dir/file.txt
2022-07-25 07:36:32.393 srv(26)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:34.806 srv(27)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:37.229 srv(28)->smb_conn_srv_open: errno=6, No such device or address
unique: 12, error: -5 (Input/output error), outsize: 16
Update 2:
If I use a Linux-based SMB server, then I can write files properly with the command echo "demo-content" >> <mount-dir>/file.txt
SMB-Server's Dockerfile
FROM alpine:3.7@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b
RUN apk add --no-cache --update \
samba-common-tools=4.7.6-r3 \
samba-client=4.7.6-r3 \
samba-server=4.7.6-r3
RUN mkdir -p /Shared && \
chmod 777 /Shared
COPY ./conf/smb.conf /etc/samba/smb.conf
EXPOSE 445/tcp
CMD ["smbd", "--foreground", "--log-stdout", "--no-process-group"]
SMB-Server's smb.conf
[global]
map to guest = Bad User
log file = /var/log/samba/%m
log level = 2
[guest]
public = yes
path = /Shared/
read only = no
guest ok = yes
Update 3:
It also works:
if I create the file locally in the container and then move it to the <mount-dir>.
if I remove a file that I created earlier (rm <mount-dir>/file.txt).
if I rename a file that I created earlier (mv <mount-dir>/file.txt <mount-dir>/fileMv.txt).
Update 4:
Found identical problem description here.

Docker: why are the user and group different?

I created the following Dockerfile:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
ENV CUDA_PATH /usr/local/cuda
ENV CUDA_INCLUDE_PATH /usr/local/cuda/include
ENV CUDA_LIBRARY_PATH /usr/local/cuda/lib64
RUN apt update -yq
RUN apt install -yq curl wget unzip git vim cmake zlib1g-dev g++ gcc sudo build-essential libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev openssh-server
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /.cache/pip
RUN mkdir -p /.local/share
RUN mkdir -p /.local/lib
RUN mkdir -p /.local/bin
RUN chown -R docker:docker /.cache/pip
RUN chown -R docker:docker /.local
RUN chown -R docker:docker /.local/lib
RUN chown -R docker:docker /.local/bin
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
RUN ln -s /lib/x86_64-linux-gnu/libc.so.6 /lib64/libc.so.6
RUN ln -s /lib/x86_64-linux-gnu/libc.so.6 /lib/libc.so.6
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
USER docker
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh && \
bash Miniconda3-py37_4.10.3-Linux-x86_64.sh -b && rm Miniconda3-py37_4.10.3-Linux-x86_64.sh
ENV PATH /home/docker/.local/bin:$PATH
ENV PATH /home/docker/miniconda3/bin:$PATH
ENV which python3.7
RUN mkdir -p /home/docker/.local/
RUN chown -R docker:docker /home/docker/.local/
RUN chmod -R 777 /home/docker/.local/
RUN chmod -R 777 /.local/lib
RUN chmod -R 777 /.local/bin
RUN chmod -R 777 /.cache/pip/
RUN python3.7 -m pip install pip -U
RUN python3.7 -m pip install tensorflow-gpu==2.5.0 ray[rllib] gym[atari] torch==1.7.1 torchvision==0.8.2 scikit_learn==0.23.1 sacred==0.8.1 PyYAML==5.4.1 tensorboard_logger
# ENV PYTHONPATH "${PYTHONPATH}:/home/docker/.local/lib/python3.7/site-packages/"
RUN sudo ln -s $(which python3.7) /usr/bin/python
RUN ls $(python3.7 -c "import site; print(site.getsitepackages()[0])")
RUN python3.7 -m pip list
RUN python3.7 -m pip uninstall -y enum34
USER docker
RUN mkdir -p /home/docker/app
RUN chown -R docker:docker /home/docker/app
WORKDIR /home/docker/app
Then I built an image. After that, I ran a container from it:
NV_GPU=1 nvidia-docker run -i \
--name $name \
--user docker \
-v `pwd`:/home/docker/app \
-t MyImage:1.0 \
${@:2}
I used the docker user defined in the Dockerfile and mounted the current directory to the workdir. However, the docker user has no permission to create any files:
PermissionError: [Errno 13] Permission denied
And the files in /home/docker/app:
docker@109c5e6b269a:~/app$ ls -l
total 64
-rw-rw-r-- 1 1002 1003 11342 Oct 13 12:50 LICENSE
-rw-rw-r-- 1 1002 1003 4831 Oct 14 05:49 README.md
drwxrwxr-x 3 1002 1003 4096 Oct 14 08:12 docker
-rwxrw-r-- 1 1002 1003 225 Oct 14 08:36 run_train.sh
drwxrwxr-x 11 1002 1003 4096 Oct 14 03:46 src
drwxrwxr-x 4 1002 1003 4096 Oct 13 12:50 third-party
The listing shows the user and group are not docker. I tried to change the owner to docker, but that caused errors in my local file system.
How can I address this PermissionError issue?
Thank you.
You are mapping some directory (pwd) to a volume. The problem is that your local directory belongs to a user with UID=1002, but inside the container the user docker maps to a different UID (probably 1000).
One easy solution is to edit the Dockerfile to specify the UID when creating the user, so it matches your local directory.
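For example, a sketch based on the listing above (the host files are owned by UID 1002; --uid is a standard adduser option on Ubuntu):
# Create the docker user with the UID that owns the mounted files
RUN adduser --disabled-password --gecos '' --uid 1002 docker && \
    adduser docker sudo && \
    echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers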
If you want your image to be used by others, one good solution is to create an entry point script to modify the user's UID at container creation time, based on environment variable.
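A minimal sketch of such an entrypoint (LOCAL_UID is a hypothetical variable you would pass with docker run -e LOCAL_UID=$(id -u); the container must start as root so usermod can run, and sudo is already installed in this image):
#!/bin/sh
# entrypoint.sh (sketch): remap the docker user's UID at container start
if [ -n "$LOCAL_UID" ]; then
    usermod -u "$LOCAL_UID" docker
    chown -R docker:docker /home/docker
fi
# Drop privileges and run the requested command as the remapped user
exec sudo -u docker "$@"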

docker RUN mkdir does not work when the folder exists in a previous image

The only difference between them is that the "dev" folder already exists in the centos image.
Check the comments in this piece of code (from executing docker build). I'd appreciate it if anyone can explain why:
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
**below works well!**
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
**/bin/sh: line 0: cd: dev/script: No such file or directory**
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
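As an aside, if the intent of RUN cd was to change the directory for subsequent instructions, the Dockerfile instruction for that is WORKDIR (shown here with the working /oop variant from the question):
# WORKDIR persists across instructions and creates the directory if needed
WORKDIR /oop/script
ADD text.txt .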
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
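A sketch of that workaround (the paths are illustrative, and the final command is a placeholder for whatever your container actually runs):
# Build time: stage the file outside of /dev
COPY test.txt /docker/test.txt
# Run time: /dev has been populated by now, so create the subdirectory and copy the file in
CMD mkdir -p /dev/script && cp /docker/test.txt /dev/script/ && exec /usr/sbin/sshd -D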

Docker volume mapping not working

I'm working from the Dockerizing a Node.js web app example, trying to understand Docker from first principles. I've uploaded it to repl.it with server.js renamed to index.js (due to a bug/feature where repl.it forces the existence of index.js), here are the links:
Project: https://repl.it/repls/BurlyAncientTrust
Live demo: https://BurlyAncientTrust--five-nine.repl.co
Download: https://repl.it/repls/BurlyAncientTrust.zip
I've also put together some core operations that derive container(s) from images in a functional/declarative manner rather than using names (surprisingly there's no central source for these):
# commands to start, list, login and remove containers/images associated with current directory's image
# build and run docker image (if it was removed with "docker rmi -f <image>" then this restores IMAGE in "docker ps")
(image=<image_name> && docker build -t $image . && docker run --rm -p <host_port>:<container_port> -d $image)
# list container id for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image)
(image=<image_name> && docker ps -a | awk '{print $1,$2}' | grep -w $image | awk '{print $1}')
# run/exec bash inside container (similar to "vagrant ssh")
(image=<image_name> && docker exec -it $(docker ps -a -q -n=1 --filter=ancestor=$image) bash)
# remove containers for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f)
# remove containers and specified image
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
To build and run the example:
Download and unzip BurlyAncientTrust.zip
cd <path_to>/BurlyAncientTrust
Then:
(image=node-web-app && docker build -t $image . && docker run --rm -p 49160:8080 -d $image)
Visit:
http://localhost:49160/
You should see:
Hello world
The problem is that I can't get the -v option for volume mapping (directory sync) working:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
I see:
This site can’t be reached
And docker ps no longer shows the container. I'm on Mac OS X High Sierra so the "$(pwd)" portion may differ on other platforms. You can just substitute that with the absolute path of your current working directory. Here's the full output:
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Untagged: node-web-app:latest
Deleted: sha256:117288d6b7424798766b288518e741725f8a6cba657d51cd5f3157ff5cc9b784
Deleted: sha256:e2fb2f92c1fd4697c1d217957dc048583a14ebc4ebfc73ef36e54cddc0eefe06
Deleted: sha256:d274f86b6093a8e44afe1720040403e3fb5793f5fe6b9f0cf2c12c42ae6476aa
Deleted: sha256:9116e43368aba02f06eb1751e6912e4063326ce93ca1724fead8a8c1e1c6c56b
Deleted: sha256:902d4d1718530f6c7a50dd11331ee9ea85a04464557d699377115625da571b61
Deleted: sha256:261c92dc9ba95e2447e0250ea435717c855c6b184184fa050fc15fc78b1447f8
Deleted: sha256:559b16060e30ea3875772aae28a2c47508dfebda35529e87e7ff46f035669798
Deleted: sha256:4316607ec7e64e54ad59c3e46288a9fb03d9ec149b428a8f70862da3daeed4e5
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
Sending build context to Docker daemon 57.34kB
Step 1/7 : FROM node:carbon
---> baf6417c4cac
Step 2/7 : WORKDIR /usr/src/app
---> Using cache
---> 00b2b9912592
Step 3/7 : COPY package*.json ./
---> f39ed074815e
Step 4/7 : RUN npm install
---> Running in b1d9bf79d502
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN docker_web_app@1.0.0 No repository field.
npm WARN docker_web_app@1.0.0 No license field.
added 50 packages in 4.449s
Removing intermediate container b1d9bf79d502
---> cf2a5fce981c
Step 5/7 : COPY . .
---> 46d46102772b
Step 6/7 : EXPOSE 8080
---> Running in cd92fbacacf1
Removing intermediate container cd92fbacacf1
---> ac13f4eda9a2
Step 7/7 : CMD [ "npm", "start" ]
---> Running in b6cd6811b0ce
Removing intermediate container b6cd6811b0ce
---> 06f887984da8
Successfully built 06f887984da8
Successfully tagged node-web-app:latest
effc653267558c80fbcf017d4c10db3e46a7c944997c7e5a5fe5d8682c5c9dad
Docker file sharing:
$ pwd
/Users/zackmorris/Desktop/hello-system
I know that something as mission critical as volume mapping has to work.
UPDATE: I opened an issue for this, and it's looking like it may not be possible (it could be a bug/feature from the early history of Docker). The best answer so far: the Dockerfile runs npm install at build time, but at run time "$(pwd)" is mounted over /usr/src/app and replaces its contents, so the /usr/src/app/node_modules directory is emptied. Node.js then can't find the express module and crashes, which quits the container.
So I'm looking for an answer that works around this and makes this directory mapping possible in a general sense, without any weird gotchas like having to rearrange the contents of the image.
I dug further into the distinction between Docker buildtime and runtime, specifically regarding Docker Compose, and stumbled onto this:
https://blog.codeship.com/using-docker-compose-for-nodejs-development/
He was able to make it work by mapping node_modules as an additional volume in his docker-compose.yml (note that my path is /usr/src/app and his is /usr/app/ so don't copypaste this):
volumes:
- .:/usr/app/
- /usr/app/node_modules
I'm thinking this works because it makes node_modules an anonymous volume overlaid on top of the bind mount; Docker initializes it from the image's node_modules content rather than letting the bind mount hide it.
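For reference, a complete docker-compose.yml for this example might look roughly like this (an untested sketch using this project's path and port mapping):
version: "3"
services:
  web:
    build: .
    ports:
      - "49160:8080"
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules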
I tried it as a raw Docker command -v /usr/src/app/node_modules and it worked! Here is a new standalone example that's identical to BurlyAncientTrust but has a node_modules directory added:
Project: https://repl.it/repls/RoundedImpishStructures
Live demo: https://RoundedImpishStructures--five-nine.repl.co
Download: https://repl.it/repls/RoundedImpishStructures.zip
To build and run the example:
Download and unzip RoundedImpishStructures.zip then:
cd <path_to>/RoundedImpishStructures
Remove the old container and image if you were using them:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Run the new example:
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -v /usr/src/app/node_modules -p 49160:8080 -d $image)
You should see:
Hello world
Please don't upvote this answer, as I don't believe it to be a general solution. Hopefully it helps someone though.

How to install node binary distribution files on Linux

My production server (CentOS 5.9) won't compile Node.js, possibly because its gcc is only 4.1.2 (4.2 or above is recommended), so I'm trying to install the binaries.
$ wget http://nodejs.org/dist/v0.10.22/node-v0.10.22-linux-x86.tar.gz
$ tar -zxvf node-v0.10.22-linux-x86.tar.gz
$ cd node-v0.10.22-linux-x86
$ sudo cp bin/* /usr/local/bin
$ sudo cp -R lib/* /usr/local/lib
$ sudo cp -R share/* /usr/local/share
And now for testing:
$ node -v # => v0.10.22
$ man node # looks fine
$ npm -v # UH OH, PROBLEM - Cannot find module 'npmlog'
Now (keeping in mind I'm a complete beginner at node) I did some searching and found there's an environment variable called NODE_PATH, so I tried:
$ export NODE_PATH=/usr/local/lib/node_modules
$ npm -v # SAME PROBLEM - Cannot find module 'npmlog'
So then I found out where npmlog lives and tried modifying NODE_PATH accordingly:
$ find /usr/local/lib -name npmlog # => /usr/local/lib/node_modules/npm/node_modules/npmlog
$ export NODE_PATH=/usr/local/lib/node_modules/npm/node_modules
$ npm -v # DIFFERENT PROBLEM - Can't find '../lib/npm.js'
At this stage, after more unhelpful googling, I realized I was out of my depth and decided to ask for help. Can anyone tell me what I'm doing wrong?
It is much faster to do a clean npm reinstall, which will remove the "broken" links:
wget https://npmjs.org/install.sh
chmod +x install.sh
sudo ./install.sh
It will then ask you to remove the old npm link.
Using Node Version Manager
Use a Node version manager like nvm to handle installation and version management for you. After you install nvm you can simply install any Node version, for example nvm install 8.
But if you just want to install the binary yourself, see below:
Using apt-get
In special cases where you need a system wide Node installation, you can use apt-get:
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install -y nodejs
The above snippet will install the latest Node 8.
Installing the Binary Manually
In order to install the binary manually, all you need to do is to download the binary and create a bunch of symbolic links. Execute the commands below one after the other, and it should do the job. I have also written a shell script that does it for you if that is easier (see the bottom of the answer). Hope that helps.
Make sure to use the correct download link for your OS architecture (i.e. either 32-bit or 64-bit) for wget on the second line.
ME=$(whoami) ; sudo chown -R $ME /usr/local && cd /usr/local/bin # take ownership of /usr/local so you can write to /usr/local/bin
mkdir _node && cd $_ && wget https://nodejs.org/dist/v8.11.4/node-v8.11.4-linux-x64.tar.xz -O - | tar xJf - --strip-components=1 # a .tar.xz archive needs -J (xz), not -z (gzip)
ln -s "/usr/local/bin/_node/bin/node" .. # Making the symbolic link to node
ln -s "/usr/local/bin/_node/lib/node_modules/npm/bin/npm-cli.js" ../npm ## making the symbolic link to npm
Here is a shell script that downloads and installs all the components. If you use this script to install Node, you can use the uninstall script to uninstall it.
Installing Node
#! /bin/bash
# run it by: bash install-node.sh
read -p " which version of Node do you need to install: for example 8.11.4 (or any other valid version): " VERSIONNAME
read -p " Are you using a 32-bit or 64-bit operating system ? Enter 64 or 32: " ARCHVALUE
if [[ $ARCHVALUE = 32 ]]
then
printf "user put in 32 \n"
ARCHVALUE=86
URL=https://nodejs.org/dist/v${VERSIONNAME}/node-v${VERSIONNAME}-linux-x${ARCHVALUE}.tar.gz
elif [[ $ARCHVALUE = 64 ]]
then
printf "user put in 64 \n"
ARCHVALUE=64
URL=https://nodejs.org/dist/v${VERSIONNAME}/node-v${VERSIONNAME}-linux-x${ARCHVALUE}.tar.gz
else
printf "invalid input expted either 32 or 64 as input, quitting ... \n"
exit
fi
# setting up the folders and the the symbolic links
printf $URL"\n"
ME=$(whoami) ; sudo chown -R $ME /usr/local && cd /usr/local/bin #adding yourself to the group to access /usr/local/bin
mkdir _node && cd $_ && wget $URL -O - | tar zxf - --strip-components=1 # downloads and unzips the content to _node
cp -r ./lib/node_modules/ /usr/local/lib/ # copy the node modules folder to the /lib/ folder
cp -r ./include/node /usr/local/include/ # copy the /include/node folder to /usr/local/include folder
mkdir -p /usr/local/man/man1 # create the man folder
cp ./share/man/man1/node.1 /usr/local/man/man1/ # copy the man file
cp bin/node /usr/local/bin/ # copy node to the bin folder
ln -s "/usr/local/lib/node_modules/npm/bin/npm-cli.js" ../npm ## making the symbolic link to npm
# print the version of node and npm
node -v
npm -v
Uninstalling Node
#! /bin/bash
# run it by: ./uninstall-node.sh
sudo rm -rf /usr/local/bin/npm
sudo rm -rf /usr/local/bin/node
sudo rm -rf /usr/local/lib/node_modules/
sudo rm -rf /usr/local/include/node/
sudo rm -rf /usr/local/share/man/man1/node.1
sudo rm -rf /usr/local/bin/_node/
I had a problem like that, but with iojs. However it should be the same procedure:
(Assuming that you've got a file matching node-v*-linux-x64.tar.gz in your current directory):
# In case of iojs you need to replace the occurrences of 'node' with 'iojs'
# Extract the downloaded archive with the linux-x64 version of node
tar zxf node-v*-linux-x64.tar.gz
# Move the extracted folder (./node-v*-linux-x64/) to /opt/
mv ./node-v*-linux-x64/ /opt/
To make the binary files available in your shell, create some softlinks inside the /usr/bin/ directory:
# Create a softlink to node in /usr/bin/
ln -s /opt/node-v*-linux-x64/bin/node /usr/bin/node
# Create a softlink to npm in /usr/bin/
ln -s /opt/node-v*-linux-x64/bin/npm /usr/bin/npm
# Create a softlink to iojs in /usr/bin (this step can be omitted if you're using node)
ln -s /opt/node-v*-linux-x64/bin/iojs /usr/bin/iojs
Notice: If you'd like to access the cli of some globally installed node modules (for example bower, typescript or coffee-script), you're required to create a softlink to each of those executables in the /usr/bin/ directory.
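For example (bower is purely illustrative here, assuming it was installed with npm install -g):
# expose a globally installed module's CLI the same way as node and npm
ln -s /opt/node-v*-linux-x64/bin/bower /usr/bin/bower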
Alternatively, you could just add the bin directory of your node installation (e.g. /opt/node-v*-linux-x64/bin) to the PATH environment variable (you should use the absolute path for this!):
# create a new .sh script in /etc/profile.d which adds the directory to PATH
echo "export PATH=$PATH:/opt/node-v0.12.3-linux-x64/bin" > /etc/profile.d/node-npm.sh
This change will take effect after logging out and in again.
Both methods worked for me (I use a linux desktop version of Ubuntu 14.04/15.04 with GNOME 3).
I had the same issue reported here. Fixed it by removing /usr/local/bin/npm and replacing it with a symlink to /usr/local/lib/node_modules/npm/bin/npm-cli.js
$ ls -l /usr/local/bin/
node
npm -> /usr/local/lib/node_modules/npm/bin/npm-cli.js
$ npm -v
1.3.17
wget <node archive url from nodejs.org>
cd /usr/local
sudo tar --strip-components 1 -xf <path to node archive>
You can run node and npm right away.
It used to be documented in the README inside the archive in older versions.
I had the same problem and I was able to resolve it by creating symlinks instead of copying the binaries.
$ cd /usr/local/src
$ wget http://nodejs.org/dist/v0.10.24/node-v0.10.24-linux-x64.tar.gz
$ tar -zxvf node-v0.10.24-linux-x64.tar.gz
$ cd node-v0.10.24-linux-x64
$ sudo cp -R lib/* /usr/local/lib
$ sudo cp -R share/* /usr/local/share
$ ln -s /usr/local/src/node-v0.10.24-linux-x64/bin/node /usr/local/bin/node
$ ln -s /usr/local/src/node-v0.10.24-linux-x64/bin/npm /usr/local/bin/npm
$ node -v
v0.10.24
$ npm -v
1.3.21
I tend to use nave to install the binaries. Use wget to download the nave.sh file and then use it to install node. Nave is also nice to have around in case one of your production apps requires a different version of node than what's installed globally.
$ wget https://raw.github.com/isaacs/nave/master/nave.sh
$ sudo bash nave.sh usemain 0.10.22
You can use GNU stow to symlink those binaries into /usr/local properly with one command. Stow also allows you to easily remove Node.js from /usr/local at a later time and swap between multiple versions of Node.js.
$ # first, install stow
$ mkdir /usr/local/stow # if it doesn't exist
$ # then, place software binary package in /usr/local/stow
$ cd /usr/local/stow
$ stow <package_name> # install / add sym links
$ source $HOME/.bash_profile # reload your environment
$ # node -v and npm -v should now work
$ stow -D <package_name> # uninstall / remove sym links
These steps worked for me with node-v0.10.17-linux-x64.
In the man page of cp in Mac OS X:
Symbolic links are always followed unless the -R flag is set, in which case symbolic links are not followed, by default.
When you execute sudo cp bin/* /usr/local/bin, the symbolic link bin/npm is followed.
bin/npm actually links to ../lib/node_modules/npm/bin/npm-cli.js, so cp copies npm-cli.js itself to /usr/local/bin/npm. Ripped out of its directory tree, the script can no longer resolve its relative paths. That's why you get an error.
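Consequently, a copy that does not dereference symlinks would have kept npm working, since the link's relative target ../lib resolves to /usr/local/lib after the lib/* copy. A sketch (GNU cp; -P means never follow symlinks, and is my suggestion rather than part of the original steps):
$ sudo cp -RP bin/* /usr/local/bin # copies bin/npm as a symlink instead of copying npm-cli.js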
I had the same problem.
The problem is the npm executable in /usr/local/bin.
The way I solved it was:
sudo rm /usr/local/bin/npm
sudo ln -s "/usr/local/lib/node_modules/npm/bin/npm-cli.js" /usr/local/bin/npm
In Ubuntu, the default shell startup files add a bin directory in your home directory to the PATH if it exists.
So perhaps you can create a bin directory in your home directory and put the binaries there. Log out and back in (or reboot), then try executing the node command.
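A sketch of that suggestion, symlinking rather than copying so the npm symlink issue described above doesn't bite (using the placeholder paths from the question):
mkdir -p ~/bin
# link, don't copy: npm has to stay inside its extracted tree
ln -s <path_to>/node-v0.10.22-linux-x86/bin/node ~/bin/node
ln -s <path_to>/node-v0.10.22-linux-x86/bin/npm ~/bin/npm
# log out and back in so the login shell picks up ~/bin, then:
node -v
npm -v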
I faced the same problem. So, I symlinked node and npm from ./bin/ to /usr/local/bin
If someone is interested in using Docker, add this to the Dockerfile:
ENV NODE_VERSION 8.10.0
RUN wget https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz
RUN tar -xJvf node-v$NODE_VERSION-linux-x64.tar.xz -C /usr/local/
ENV NODEJS_HOME /usr/local/node-v$NODE_VERSION-linux-x64
ENV PATH $NODEJS_HOME/bin:$PATH
RUN node --version
RUN npm --version
