cp: target '/root/var/www/html/' is not a directory - linux

I am using Ubuntu bash on Windows 10 and I'd like to move a project from /mnt/i/Projects/Template so I can run it on the Apache server, which is located in /var/www/html.
I tried to copy the folder from one directory to another, but unfortunately I got this error:
cp -r /mnt/i/Projects/Template ~/var/www/html/
cp: target '/root/var/www/html/' is not a directory
I would like to test those templates with Apache, and I also tried changing the Apache directory.
Another test I did:
root@DESKTOP-4PBGG1N:/var/www# ls -ld ~/var ~/var/www ~/var/www/html
ls: cannot access '/root/var': No such file or directory
ls: cannot access '/root/var/www': No such file or directory
ls: cannot access '/root/var/www/html': No such file or directory

First of all, the directory for the Apache server is not under /root; it's just "/var/www/html". If it still doesn't work, you probably don't have Apache installed; you can fix that by running these two lines: "lsb_release -a" and "sudo apt-get install apache2". An error will come up when you try to launch the Apache server (with "sudo service apache2 start"), but just ignore it; you can still use it without any problems. Hope it helps ;)
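A minimal sketch of those steps on Ubuntu under WSL (assuming an apt-based install):
lsb_release -a                 # confirm which Ubuntu release you are on
sudo apt-get update            # refresh the package lists first
sudo apt-get install apache2   # install Apache
sudo service apache2 start     # may print a warning under WSL; usually harmless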

Try creating the directory, if the only problem is '/root/var/www/html/' not being a directory:
# mkdir -pv ~/var/www/html/
# cp -r /mnt/i/Projects/Template ~/var/www/html/
Before that, just make sure that Apache is installed and configured.
have a nice day
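Alternatively, if the goal is the system Apache docroot rather than a copy under /root, a sketch using the paths from the question:
sudo cp -r /mnt/i/Projects/Template /var/www/html/
ls -ld /var/www/html/Template   # verify the copy landed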

For instance, if you have a file in Documents called index.php and it should be copied into the /root/var/www/html/ directory, you have to do it this way:
First, don't forget to use sudo to act as the superuser, and then:
- sudo cp -Rv index.php /var/www/html or
- sudo cp -Rv index.php /root/var/www/html
And you will get this output: 'index.php' -> '/var/www/html/index.php'
-R to copy folders recursively, and
-v to see which folders and files are copied

Related

How to give permissions for specific commands in linux

I am new to Linux. I have a build.sh file which consists of a lot of mkdir commands and some rm commands. But as this is a fresh install in my VirtualBox VM, each time I run the .sh file it says "Permission denied for creating directory" and fails.
So is there any way I can grant directory privileges to all users?
Can anyone help me with this?
Add "sudo" in the beginning of the directory creation command i.e
sudo mkdir dir_name
The issue might be with the directory in which the mkdir command is being run.
Use the command ll or ls -l to check the directory permissions.
If your directory doesn't have write privilege for the current user, you can run
chmod -R u+w /path/to/directory
This might require you to use sudo if permission is denied.
If you want to enable it for all users, run
chmod -R ugo+w /path/to/directory
Alternatively, a quick fix would be to run the build.sh file as root
sudo /path/to/build.sh
However, this approach is not advised unless the script is always going to be run as root.
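Putting those steps together (a sketch with a hypothetical /home/user/project path):
ls -ld /home/user/project               # check the owner and the mode bits
sudo chmod -R u+w /home/user/project    # grant the owner write access
./build.sh                              # re-run the script without sudo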

Proper use of '/opt' folder on linux

Linux System: Ubuntu 14.04 LTS
I copy some app (like xxx) to the /opt folder so it can also be used by other user accounts. Then to start it I use:
sudo /opt/xxx_folder/xxx
(and, of course, links in /usr/local/bin or /usr/bin, etc.) to start it.
Problem: I'm storing the results/projects of the app in my local folder (like /home/myuser/xxx_data). And of course the xxx_data folder and its data belong to root (not myuser), so I have to change the owner every time I want to edit those files using another app, not as root.
Question: is there a way to install an app xxx to /opt so that I don't need to start it as root?
Or maybe you see another way to solve this 'root-user problem'?
You can add execute permission to any file like this.
sudo chmod +x file.sh
If you want to do that for all files in that folder try this:
sudo chmod +x /opt/*
Note that +x adds execute permission for all users (owner, group, and others), subject to your umask; use u+x if you only want it for your logged-in user. All users usually have read (+r) by default, so if you also want to add write permission:
sudo chmod +xw /opt/*
Personally I keep all my custom scripts in a bin folder e.g. /opt/bin/ and just do:
sudo chmod +x /opt/bin/*
To run the script without the full path, add the bin or the full opt folder to your PATH by adding the following to your ~/.bashrc file:
PATH=$PATH:/opt/bin
If you don't end up using a bin folder, edit the above to /opt instead of /opt/bin.
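After editing, reload the shell config so the new PATH takes effect (a sketch, assuming a hypothetical /opt/bin/xxx binary):
source ~/.bashrc   # or open a new terminal
xxx                # now resolves via /opt/bin without the full path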

Enable write permission for directory in Linux

I keep trying to move files from a directory on Linux, but I keep getting permission errors.
Initially I was told to use
sudo chmod -R r+w /directory/*
But this only applies to the directory itself (and not the files inside).
The trick is that you need to "select all" of the contents to apply the file permissions to:
sudo chmod -R a+rwx,go-w /directory/
And that's it
Or you could run sudo chmod -R 777 /dir/, which is a simpler (though far more permissive) way to do the answer stated above.
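You can verify the resulting mode with stat (a quick check, using the path from the answer above):
sudo chmod -R a+rwx,go-w /directory/
stat -c '%a %n' /directory/    # prints 755 (rwxr-xr-x) for the directory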

How to download a file in a directory using the wget command in a bash script?

I want to download a list of files using the wget command in a bash script. The problem is that when I try to change the directory to a subdirectory of my home, it does not work, and the wget after the cd command downloads the files into my home directory, not the desired subdirectory:
mkdir -m 777 "dbback2012"
cd "dbback2012"
wget -r [FTP URL]
The problem is that the downloaded files via wget are in the home directory not the "dbback2012" directory.
There's nothing wrong with the code. Either:
- you haven't shown us the real code,
- the script is executed somewhere else; check the working directory: pwd
- or the script failed to create the directory: mkdir -m 777 "dbback2012" || (echo "ooops"; exit 1)
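A defensive version of the script (a sketch; [FTP URL] is the question's placeholder):
#!/bin/bash
set -e                # abort on the first failure
mkdir -p "dbback2012" # -p: no error if the directory already exists
cd "dbback2012"       # with set -e, a failed cd stops the script here
pwd                   # confirm the working directory before downloading
wget -r [FTP URL]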

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
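For example, a sudoers entry along these lines, edited via sudo visudo (the exact rsync path may differ on your system):
user ALL=(ALL) NOPASSWD: /usr/bin/rsync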
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
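One way to keep the listed paths relative, which is what --files-from expects (an untested sketch, same host and path assumptions as above):
cd /path/to/files && \
find . -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path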
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner; otherwise, you need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
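On most systems, ssh-copy-id automates that copy step:
ssh-copy-id user@10.1.1.2   # appends your public key to the remote authorized_keys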
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown only when the rsync completes successfully (a single '&' would instead background the rsync and run the chown immediately, regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
rsync version 3.1.2
I mostly work on Windows locally, so this is the command line I use to sync files with the (Debian) server:
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
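For reference, --chown requires rsync 3.1.0 or newer; a slightly trimmed version of the same command (a sketch; brace expansion assumes a bash shell, and the hosts and excluded folders are the ones from above):
rsync -rptgoDvhP --chown=www-data:www-data \
    --exclude={.env,vendor,node_modules,.git,tests,.phpintel,storage} \
    ./website/ username@hostname:/var/www/html/website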
