Running a nodejs file with "./" in terminal - node.js

How do I run a nodejs file using ./foo.js instead of node foo.js from the terminal? Running it with node works fine, but with the ./ I get bash: ./foo.js: Permission denied.
I'm new to Ubuntu, so I'm not sure if it's an OS tweak.

Make sure the file is executable. You can check this by doing a "ls -la":
$ ls -la foo.js
-rw-r--r-- 1 daniel daniel 0 Oct 15 21:53 foo.js
The lack of an "x" means that it's not executable. To make it executable, use chmod +x:
$ chmod +x foo.js
$ ls -la foo.js
-rwxr-xr-x 1 daniel daniel 0 Oct 15 21:53 foo.js
Also make sure you have a "shebang" line at the very top of the file. This tells the shell what interpreter to use for the file:
#!/usr/bin/env node
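Putting the two pieces together, a minimal end-to-end check might look like this (a throwaway foo.js that just prints a line; the final run is guarded since node may not be installed):

```shell
# Create a minimal script with the shebang as its very first line
printf '#!/usr/bin/env node\nconsole.log("hello");\n' > foo.js

# Without the execute bit, ./foo.js fails with "Permission denied"
ls -l foo.js

# Add the execute bit; the shell can now hand the file to node itself
chmod +x foo.js
ls -l foo.js          # the x bits are now set

# Run it directly (guarded, in case node is not installed)
command -v node >/dev/null 2>&1 && ./foo.js || true
```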

Related

Bash scripts only run when using bash command

Today, my working bash scripts stopped working on my Debian 11 server.
$ ./CCrec.sh
-bash: ./CCrec.sh: Permission denied
Fails even if using sudo.
Yes, the permissions are set correctly. (They've been working for years.)
$ ls -l CCrec.sh
-rwxr--r-- 1 user1 user1 858 Jan 23 20:30 CCrec.sh
Another clue: tab completion doesn't recognize that the script is present when I start typing it with "./". This makes me think there's a change in my bashrc, but I'm not noticing one.
The script will run when invoked with bash explicitly:
$ bash CCrec.sh
This is working
Any solutions?

#!/usr/bin/env: No such file or directory

The shebang line in my bin/www file is:
pi:~/ferc$ head -n 1 bin/www
#!/usr/bin/env node
However, executing it:
pi:~/ferc$ bin/www
bin/www: line 1: #!/usr/bin/env: No such file or directory
The env file does exist:
pi:~/ferc$ ls -lL /usr/bin/env
-rwxr-xr-x 1 root root 31408 Feb 18 2016 /usr/bin/env
The node file also exists:
pi:~/ferc$ ls -al /usr/bin/node
lrwxrwxrwx 1 root root 15 Jul 7 18:29 /usr/bin/node -> /usr/bin/nodejs
And node runs fine:
pi:~/ferc$ node -v
v4.2.6
What does the error message really mean? Which file is it complaining about?
The cause was a corrupted file, probably due to a mixture of LF and CR/LF line endings in the file.
What happened was:
I copied the file from a Windows PC to the AWS ec2 Ubuntu instance.
First time I ran the www file, that same error message appeared. The cause at this point was probably that the node executable did not exist; I hadn't created the symbolic link yet.
While trying to troubleshoot, I edited and saved the www file using nano. I think at this point the file got corrupted.
Later, I added the symbolic link for /usr/bin/node. However, the same error persisted, but probably due to the corrupted line endings.
I ran dos2unix on the www file, and the error went away.
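As a quick sketch of how this kind of corruption can be detected (the filename www here is just a stand-in): the file utility flags Windows line endings, and stripping the carriage returns with tr is essentially what dos2unix does.

```shell
# Simulate a script copied over from Windows: CR/LF line endings
printf '#!/usr/bin/env node\r\nconsole.log("hi");\r\n' > www

# 'file' reports "... with CRLF line terminators" for such a file
file www

# Strip the carriage returns (what dos2unix does under the hood)
tr -d '\r' < www > www.fixed && mv www.fixed www

# The CRLF note is gone now
file www
```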
You can use node directly like:
#!/usr/bin/node
See pros and cons: https://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my

Can't write on hard drive owned by user and with permissions to write, read and execute

I am very puzzled by this problem I am having. I am trying to execute a file in Ubuntu 12.04 LTS via command line. I have a script that calls a program to run and write the results to a hard drive. I changed the permissions and ownership of everything to rwx. Here is the ls -l of my script (called TEST-star):
-rwxrwxrwx 1 root root 950 Nov 15 13:16 TEST-star
Here is the ls -l of the package my script calls:
-rwxrwxrwx 1 root root 1931414 Nov 10 12:37 STAR
Finally the ls -l of the hard drive mounted in /media/CLC:
drwxrwxrwx 1 root root 8192 Nov 15 13:04 CLC
I have been trying to run it since yesterday and always get a message that I don't have permission to write the results:
EXITING because of FATAL ERROR: could not create output file ./_STARtmp//Unmapped.out.mate1.thread14
Solution: check that you have permission to write this file
I thought that if I changed the permissions to rwx and ran my script as root (using sudo), it would not have a problem. Right now I've run out of options. Any suggestion would be appreciated. Please let me know what other information you would need to solve this issue.
Thank you.
Here is the script I am trying to run:
#!/bin/sh
cd /media/CLC/ANOPHELES-STAR-v2.4f1/
mkdir GambFemAnt1 && cd GambFemAnt1
echo $PWD && echo Starting mapping of GambFemAnt1
/home/aedes/Documents/STAR_2.4.0f1/STAR \
  --genomeDir /media/Galaxy/Galaxy_data/Anopheles/STAR/Genome \
  --readFilesIn /media/Galaxy/Galaxy_data/Anopheles/QC/GambFemAnt1/GambFemAnt1.fastq \
  --runThreadN 23 \
  --outFilterMismatchNmax 4 \
  --outFilterMatchNminOverLread 0.75 \
  --seedSearchLmax 30 --seedSearchStartLmax 30 \
  --seedPerReadNmax 100000 --seedPerWindowNmax 100 \
  --alignTranscriptsPerReadNmax 100000 --alignTranscriptsPerWindowNmax 10000 \
  --outSAMstrandField intronMotif \
  --outFilterIntronMotifs RemoveNoncanonical \
  --outSAMtype BAM SortedByCoordinate \
  --outReadsUnmapped Fastx
mv Aligned.sortedByCoord.out.bam GambFemAnt1.bam
mv Unmapped.out.mate1 GambFemAnt1-unmapped.fastq
cp *.fastq /media/CLC/ANOPHELES-STAR-v2.4f1/UNMAPED-reads/
cd /media/CLC/ANOPHELES-STAR-v2.4f1 && echo $PWD && echo GambFemAnt1 mapping finished
I also posted a question for the authors of the package.
Turns out all the permissions were set correctly. The problem resides within the package: I found out that it works using --runThreadN 12 instead of --runThreadN 23.

Running app inside Docker as non-root user

After yesterday's news of Shocker, it seems like apps inside a Docker container should not be run as root. I tried to update my Dockerfile to create an app user however changing permissions on app files (while still root) doesn't seem to work. I'm guessing this is because some LXC permission is not being granted to the root user maybe?
Here's my Dockerfile:
# Node.js app Docker file
FROM dockerfile/nodejs
MAINTAINER Thom Nichols "thom@thomnichols.org"
RUN useradd -ms /bin/bash node
ADD . /data
# This next line doesn't seem to have any effect:
RUN chown -R node /data
ENV HOME /home/node
USER node
RUN cd /data && npm install
EXPOSE 8888
WORKDIR /data
CMD ["npm", "start"]
Pretty straightforward, but when I ls -l everything is still owned by root:
[ node@ed7ae33e76e1:/data {docker-nonroot-user} ]$ ls -l /data
total 64K
-rw-r--r-- 1 root root 383 Jun 18 20:32 Dockerfile
-rw-r--r-- 1 root root 862 Jun 18 16:23 Gruntfile.js
-rw-r--r-- 1 root root 1.2K Jun 18 15:48 README.md
drwxr-xr-x 4 root root 4.0K May 30 14:24 assets/
-rw-r--r-- 1 root root 416 Jun 3 14:22 bower.json
-rw-r--r-- 1 root root 930 May 30 01:50 config.js
drwxr-xr-x 4 root root 4.0K Jun 18 16:08 lib/
drwxr-xr-x 42 root root 4.0K Jun 18 16:04 node_modules/
-rw-r--r-- 1 root root 2.0K Jun 18 16:04 package.json
-rw-r--r-- 1 root root 118 May 30 18:35 server.js
drwxr-xr-x 3 root root 4.0K May 30 02:17 static/
drwxr-xr-x 3 root root 4.0K Jun 18 20:13 test/
drwxr-xr-x 3 root root 4.0K Jun 3 17:38 views/
My updated dockerfile works great thanks to @creak's clarification of how volumes work. Once the initial files are chowned, npm install is run as the non-root user. And thanks to a postinstall hook, npm runs bower install && grunt assets, which takes care of the remaining install steps and avoids any need to npm install -g node CLI tools like bower, grunt or coffeescript.
Check this post: http://www.yegor256.com/2014/08/29/docker-non-root.html In rultor.com we run all builds in their own Docker containers. And every time before running the scripts inside the container, we switch to a non-root user. This is how:
adduser --disabled-password --gecos '' r
adduser r sudo
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su -m r -c /home/r/script.sh
r is the user we're using.
Update 2015-09-28
I have noticed this post getting a bit of attention. A word of advice for anyone who is interested in doing something like this: I would try to use Python or another language as a wrapper for your script executions. With native bash scripts I had problems when trying to pass a variety of arguments through to my containers. Specifically, there were issues with the interpretation/escaping of " and ' characters by the shell.
I was needing to change the user for a slightly different reason.
I created a docker image housing a full-featured install of ImageMagick and Ffmpeg, with the goal of doing transformations on images/videos within my host OS. My problem was that these are command-line tools, so it is slightly trickier to execute them via docker and then get the results back into the host OS. I managed to allow for this by mounting a docker volume. This seemed to work okay, except that the image/video output was coming out owned by root (i.e. the user the docker container was running as), rather than by the user who executed the command.
I looked at the approach that @François Zaninotto mentioned in his answer (you can see the full make script here). It was really cool, but I preferred the option of creating a bash shell script that I would then register on my path. I took some of the concepts from the Makefile approach (specifically the user/group creation) and then created the shell script.
Here is an example of my dockermagick shell script:
#!/bin/bash
### VARIABLES
DOCKER_IMAGE='acleancoder/imagemagick-full:latest'
CONTAINER_USERNAME='dummy'
CONTAINER_GROUPNAME='dummy'
HOMEDIR='/home/'$CONTAINER_USERNAME
GROUP_ID=$(id -g)
USER_ID=$(id -u)
### FUNCTIONS
create_user_cmd()
{
echo \
groupadd -f -g $GROUP_ID $CONTAINER_GROUPNAME '&&' \
useradd -u $USER_ID -g $CONTAINER_GROUPNAME $CONTAINER_USERNAME '&&' \
mkdir --parent $HOMEDIR '&&' \
chown -R $CONTAINER_USERNAME:$CONTAINER_GROUPNAME $HOMEDIR
}
execute_as_cmd()
{
echo \
sudo -u $CONTAINER_USERNAME HOME=$HOMEDIR
}
full_container_cmd()
{
echo "'$(create_user_cmd) && $(execute_as_cmd) $@'"
}
### MAIN
eval docker run \
--rm=true \
-a stdout \
-v $(pwd):$HOMEDIR \
-w $HOMEDIR \
$DOCKER_IMAGE \
/bin/bash -ci $(full_container_cmd $@)
This script is bound to the 'acleancoder/imagemagick-full' image, but that can be changed by editing the variable at the top of the script.
What it basically does is:
Creates a user and group within the container to match the user who executes the script from the host OS.
Mounts the current working directory of the host OS (using docker volumes) into the home directory of the user created within the executing docker container.
Sets that home directory as the working directory for the container.
Passes along any arguments given to the script, which are then executed by the '/bin/bash' of the executing docker container.
Now I am able to run the ImageMagick/Ffmpeg commands against files on my host OS. For example, say I want to convert an image MyImage.jpeg into a PNG file, I could now do the following:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert MyImage.jpeg Foo.png
$ ls
Foo.png MyImage.jpeg
I have also attached to 'stdout' so I can run the ImageMagick identify command to get info on an image on my host, e.g.:
$ dockermagick identify MyImage.jpeg
MyImage.jpeg JPEG 640x426 640x426+0+0 8-bit DirectClass 78.6KB 0.000u 0:00.000
There are obvious dangers in mounting the current directory and allowing any arbitrary command definition to be passed along for execution. But there are also many ways to make the script more safe/secure. I am executing this in my own non-production personal environment, so these are not of highest concern for me. But I would highly recommend you take the dangers into consideration should you choose to expand upon this script. It's also worth mentioning that this script doesn't take an OS X host into consideration. The Makefile that I borrowed ideas/concepts from does take this into account, so you could extend this script to do so.
Another limitation to note is that I can only refer to files currently in the path for which I am executing the script. This is because of the way I am mounting the volumes, so the following would not work:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert ~/DifferentDirectory/AnotherImage.jpeg Foo.png
$ ls
MyImage.jpeg
It's best just to go to the directory containing the image and execute against it directly. Of course I am sure there are ways to get around this limitation too, but for me and my current needs, this will do.
This one is a bit tricky; it is actually due to the image you start from.
If you look at the source, you notice that /data/ is a volume. So everything you do in the Dockerfile will be discarded and overridden at runtime by the volume that gets mounted then.
You can chown at runtime by changing your CMD to something like CMD chown -R node /data && npm start.
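A sketch of how the question's Dockerfile could apply that workaround (assuming the base image declares /data as a volume; the su invocation is one way, among others, to drop privileges after the runtime chown):

```dockerfile
# Sketch: build-time chown of /data is masked by the volume mount,
# so defer it to container start, then drop to the unprivileged user
FROM dockerfile/nodejs

RUN useradd -ms /bin/bash node
ADD . /data
WORKDIR /data
EXPOSE 8888

# Runs as root at startup: fix ownership, then start the app as node
CMD chown -R node /data && su node -c 'npm start'
```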
Note: I answer here because, given the generic title, this Question pops up in google when you look for a solution to "Running app inside Docker as non-root user". Hope it helps those who are stranded here.
With Alpine Linux you can create a system user like this:
RUN adduser -D -H -S -s /bin/false -u 1000 myuser
Everything in the Dockerfile after a subsequent USER myuser instruction is executed as myuser.
myuser user has:
no password assigned
no home dir
no login shell
no root access.
This is from adduser --help:
-h DIR Home directory
-g GECOS GECOS field
-s SHELL Login shell
-G GRP Add user to existing group
-S Create a system user
-D Don't assign a password
-H Don't create home directory
-u UID User id
-k SKEL Skeleton directory (/etc/skel)
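For completeness, a minimal sketch of where such a line sits in an Alpine-based Dockerfile (the image tag and final command are placeholders):

```dockerfile
FROM alpine:3.19

# Unprivileged system user: no password, no home dir, no login shell
RUN adduser -D -H -S -s /bin/false -u 1000 myuser

# Switch users; later RUN/CMD/ENTRYPOINT instructions execute as myuser
USER myuser

CMD ["id"]
```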
Note: This answer is given because many people looking for non-root usage will end up here. Beware: this does not address the issue that caused the problem; it addresses the title and clarifies the answer given by @yegor256, which uses a non-root user inside the container. This answer explains how to accomplish that for the non-Debian/non-Ubuntu use case. It does not address the issue with volumes.
On Red Hat-based systems, such as Fedora and CentOS, this can be done in the following way:
RUN adduser user && \
echo "user ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
In your Dockerfile you can run commands as this user by doing:
RUN su - user -c "echo Hello $HOME"
And the command can be run as:
CMD ["su","-","user","-c","/bin/bash"]
An example of this can be found here:
https://github.com/gbraad/docker-dev/commit/644c51002f4b8e6fe5bb745638542a4c3d908b16

command not found when running bash script as user apache

I am trying to run a bash script as user apache, and it throws up the following:
[apache@denison public]$ ll
total 32
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 css
-rw-r--r-- 1 apache apache 4820 Jul 17 10:04 h3111142_58_2012-07-17_16-03-58.php
-rwxrwxrwx 1 apache apache 95 Jul 17 10:04 h31111.bash
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 images
-rw-r--r-- 1 apache apache 754 Jul 17 08:13 index.php
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 javascript
drwxr-xr-x 5 apache apache 4096 Jul 17 08:14 jquery-ui-1.8.21.custom
[apache@denison public]$ bash h31111.bash
: command not found :
contents of the file are:
#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/h3111142_58_2012-07-17_16-03-58.php
php script runs fine below are the results
[apache@denison public]$ /bin/bash h31111.bash
: command not found:
[apache@denison public]$ chmod +x h31111.bash
[apache@denison public]$ ./h31111.bash
./h31111.bash: Command not found.
[apache@denison public]$ php h3111142_58_2012-07-17_16-03-58.php
creation of file:
$batchFile = $this->session->username . "_" . $index . "_" . $date . ".sh";
$handle = fopen($batchFile, 'w');
$data = "#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/$file
";
/*
rm -rf /opt/eposdatatransfer/public/$file
rm -rf /opt/eposdatatransfer/public/$batchFile*";*/
fwrite($handle, $data);
fclose($handle);
batchfile is the bash script and file is the php file. These get created automatically based on user input in the webapp. My webapp runs on Linux.
I'm guessing you uploaded the script from a windows machine, and didn't strip the carriage returns from the end of the lines. This causes the #! mechanism (the first line of most scripts) to fail, because it searches for #!/some/interpreter^M, which rarely exists.
You can probably strip the carriage returns, if you have them, using fromdos or:
tr -d '\015' < /path/to/script > /tmp/script; chmod 755 /tmp/script; mv /tmp/script /path/to/script
What happens if you try to run your script with
$ /bin/bash h31111.bash
Try this (assuming your script file is named "h31111.bash"):
$ chmod +x h31111.bash
then to run it
$ ./h31111.bash
Also are you sure you have the right path for your php command? What does which php report?
--
As @jordanm correctly suggests, based on the output of the file command I suggested you run, you need to run the dos2unix command on your file. If you don't have that installed, this tr -d '\r' approach will also work, i.e.:
$ tr -d '\r' < h31111.bash > tmp.bash
$ mv tmp.bash h31111.bash
and you should be all set.
Under some versions of Ubuntu these utilities (e.g., dos2unix) don't come installed, for information on this see this page.
It looks to me the problem is your $PATH. Some users on the system will have . (the current directory) in their $PATH and others will not. If typing ./h31111.bash works, then that's your problem. You can either specify the file with a relative or absolute path, or you can add . to the $PATH of that user but never do that for root.
Since you're not sure where it's failing, let's try to find out.
First, can you execute the program?
./h31111.bash
That should be equivalent to invoking it with:
/bin/bash h31111.bash
If the above gives you the same error message, then it's likely a problem with the script contents. To figure out where something's gone awry in a bash script, you can use set -x and set -v.
set -x will show you expansions
set -v will show you the lines as they're read
So, you'd change your script contents to something like the following:
#!/bin/bash
set -x
set -v
/usr/bin/php /opt/eposdatatransfer/public/h3111142_58_2012-07-17_16-03-58.php
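As a quick illustration of what the set -x trace looks like (a throwaway script under /tmp; nothing here is specific to the original file):

```shell
# Write a tiny script that enables tracing before doing its work
cat > /tmp/trace_demo.sh <<'EOF'
#!/bin/bash
set -x
msg="hello"
echo "$msg"
EOF

# The trace (lines prefixed with '+') goes to stderr;
# the script's normal output still goes to stdout
bash /tmp/trace_demo.sh
```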
Another possibility, which you probably only learn by experience, is that the file is in MSDOS mode (i.e., has CR LF's instead of just LFs). This often happens if you're still using FTP (as opposed to SFTP) and in ASCII mode. You may be able to set your transfer mode to binary and have everything work successfully. If not, you have a few options. Any of the following lines should work:
dos2unix /path/to/your/file
sed -i 's/\r//g' /path/to/your/file
tr -d '\r' < /path/to/your/file > /path/to/temp/file && mv /path/to/temp/file /path/to/your/file
In Vim, you could do :set ff=unix and save the file to change the format as well.
Let's take a look at what you have in PHP:
$handle = fopen($batchFile, 'w');
$data = "#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/$file
";
Since you have a multi-line string, the CR LF characters that are embedded will depend on whether your PHP file is in Windows or Unix format. You could switch that file to Unix format and you'd be fine. But that seems easy to break in the future, so I'd go with something a little less volatile:
$handle = fopen($batchFile, 'w');
fwrite($handle, "#!/bin/bash\n");
fwrite($handle, "/usr/bin/php /opt/eposdatatransfer/public/$file\n");
fclose($handle);