Access forbidden to website after scp transfer - linux

I used scp2 to transfer a folder from Windows to Ubuntu.
I executed the scp2 process as part of a gulp task.
My project was successfully transferred to the server, but when I tried to navigate to the site in the browser I encountered a 403 Forbidden message.
The problem is that the scp2 process didn't grant permissions to the newly created folders and files.
When I execute the following lines on the server, it works fine:
find ProjFolder -type d -exec chmod 755 {} \;
find ProjFolder -type f -exec chmod 644 {} \;
My question is: how can I transfer my project from my local machine to the server without having to repeatedly run these permission commands?

To preserve permissions, try rsync; besides keeping ownership and permissions, it has a lot of other benefits, such as incremental copies:
rsync -av source 192.0.2.1:/dest/ination
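If the source permissions are not what you want on the server, rsync can also force them during the transfer with its --chmod option. A minimal sketch, reusing the host above and the target permissions from the question:
rsync -av --chmod=D755,F644 source 192.0.2.1:/dest/ination
Here D755 applies to directories and F644 to files, so the destination ends up with the same permissions the two find commands would set.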
EDIT [according to comments]:
This works well for transfers between two Linux systems but doesn't seem to work for Windows -> Linux transfers. Apparently PuTTY works best for transfers involving Windows on one side and Linux on the other.
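If you are stuck with scp2 on Windows, another option is to run the permission fix as a single remote command after the transfer, for example from a post-deploy step. A sketch, where the user, host, and remote path are placeholders to adjust:
ssh user@192.0.2.1 'find /var/www/ProjFolder -type d -exec chmod 755 {} \; && find /var/www/ProjFolder -type f -exec chmod 644 {} \;'
This condenses the two commands from the question into one scriptable line, instead of typing them by hand after every transfer.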

Copy files within multiple directories to one directory

We have an Ubuntu Server that is only accessed via the terminal, and users transfer files to directories within one parent directory (e.g. /storage/DiskA/userA/doc1.doc, /storage/DiskA/userB/doc1.doc). I need to copy specific files within the user folders to another directory, and I'm specifically targeting the .doc extension.
I've tried running the following:
cp -R /storage/diskA/*.doc /storage/diskB/monthly_report/
However, it keeps telling me there is no such file/dir.
I want to be able to just pull the .doc files from all the user dirs and transfer to that dir, /storage/monthly_report/.
I know this is an easy task, but apparently, I'm just daft enough to not be able to figure this out. Any assistance would be wonderful.
EDIT: I updated the original to show that I have 2 Disks. Moving from Disk A to Disk B.
I would go for find -exec for such a task (your cp fails because the *.doc glob only matches files directly under /storage/diskA, not inside the user subdirectories), something like:
find /storage/DiskA -name "*.doc" -exec cp {} /storage/DiskB/monthly_report/ \;
That should do the trick.
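With GNU cp you can also batch the copy so cp runs once for many files, using find's + terminator and cp's -t option to name the target directory. A sketch under the same paths:
find /storage/DiskA -name "*.doc" -exec cp -t /storage/DiskB/monthly_report/ {} +
One caveat with any flattening copy: identically named files from different user directories (like the two doc1.doc files in the question) will overwrite one another in the destination; GNU cp's --backup=numbered option keeps all copies if that matters.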
Use
rsync -zarv --include="*/" --include="*.doc" --exclude="*" /storage/diskA/ /storage/diskB/monthly_report/
Note that the source must be the directory itself (the include/exclude filters select the .doc files), and that rsync recreates the subdirectory structure under the destination rather than flattening it the way the find approach does.

403 Access denied on My Single Page

I have looked at the various questions related to mine but haven't found any working answers.
I have created a website.
All other directories are working perfectly.
But when I access this page: http://www.pakdostana.paks.pk/private-chat it says
403 access denied
All other pages are working.
My file permissions are 644, and I also tried changing them to 755 and 777, but the error is the same.
And I am using my own routing system.
Updated
I have an update: when I directly access the file with this URL: http://www.pakdostana.paks.pk/Chat/Private-Chat.php it works, but not with the PHP routing system. What could the error be? Any guess?
Can anyone help me?
Best regards
It's not only the file permissions; you also need the right permissions on the folders containing these files. The recommended permissions are 664 for files and 775 for folders.
As you don't specify which HTTP server you are using, nor the OS, I'm going to give an example for Apache on Linux.
cd /var/www/
sudo chgrp -R www-data html
find . -type f -exec chmod 664 {} \;
find . -type d -exec chmod 775 {} \;
What this does is:
Put you in the parent directory of Apache's default base path.
Recursively change the group ownership of the html directory to www-data.
Find all files recursively and change their permissions to 664 (you may need to execute this with sudo).
Find all directories recursively and change their permissions to 775 (you may need to execute this with sudo).
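When a single path returns 403 while its siblings work, it can also help to check that every directory component along the path is traversable by the web server's user. A sketch using util-linux's namei; the path here is an assumption based on the URL in the question:
namei -l /var/www/html/Chat/Private-Chat.php
Each line of the output shows the permissions and ownership of one path component, which quickly reveals a directory that the www-data group cannot enter.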
Hope this helps.
I want to tell everyone: every time I used the word Chat it said 403 access denied, but when I removed this word it worked perfectly.
I don't know what the problem with this word was.

Plesk panel Cronjob delete folder older then x days

I'm trying to set a cron job on the Plesk panel to remove folders in the directory /uploads/temp_files.
I'm using this command:
find /uploads/temp_files/* -type d -ctime +30 -exec rm -rf {} \;
but I get an error from Plesk: -: find: command not found
What can I do?
Thanks!
You should use the full path: instead of find, use /bin/find. Depending on your Linux distro, the location might be different. In an SSH shell, run this:
which find
The output will show you the exact location of find. Then use that full path in your cron job!
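Once which prints the location, the cron entry might look like this. A sketch: /usr/bin/find is an assumption (substitute whatever which reported), and the schedule here runs the cleanup daily at 03:00:
0 3 * * * /usr/bin/find /uploads/temp_files/* -type d -ctime +30 -exec rm -rf {} \;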
For security reasons, hosting providers use a chrooted shell.
If your subscription has a chrooted shell, you have limited access to server commands, and there is no find binary in Plesk's default chrooted shell.
You can check this under the path "/var/www/vhosts/example.com/bin/" in the File Manager.
In this case you may ask your hosting provider to add find to your subscription, or to the common chroot template, by following this KB article: https://support.plesk.com/hc/en-us/articles/213909545--HOWTO-How-to-add-new-programs-to-a-chrooted-shell-environment-template

When to use -type d -exec chmod 750 / 640 in Docker

I've started to work with Docker for local development, moving from installing everything on my Mac to containers. Looking through a number of projects, I regularly see the following shell commands in particular:
find /www -type d -exec chmod 750 {} \;
find /www -type f -exec chmod 640 {} \;
Firstly, what are they trying to achieve; secondly, what do the commands actually mean; and lastly, why/when would you want or need to use this?
I recently duplicated and modified another project and found that pulling these commands out seemed to make no difference (fair enough, it was no longer based on the same base container... but still).
Any glimmer of enlightenment would be greatly appreciated.
EDITS:
That handy link in the comments below to explainshell tells us:
What: find all the folders in /www and execute the chmod command, changing the permissions to 750
- still unsure of 750, and more importantly why you would do this.
These commands set all files and directories to be readable and writable by the owner and readable by the group, with no access for others; the files cannot be executed by anyone.
You might want to read up on unix permissions in a bit more detail first.
find /www -type f -exec chmod 640 {} \;
Find all files under /www and set the user to have read, write access (6) and the group to have read access (4). Other users have no access (0).
find /www -type d -exec chmod 750 {} \;
Find all directories under /www and set the user to have read, write and execute permissions (7) and the group to have read and execute permissions (5) to those directories. Other users have no permissions (0).
The \; after each find terminates the -exec command; it must be escaped when run in a shell so it is not interpreted as a regular ;, which would end the shell command. The same thing can be achieved with a +, which is easier to read (it doesn't need to be escaped) and more efficient, because find passes many files to a single chmod invocation instead of running chmod once per file. The batching can cause differences in output if you are relying on the stdout/stderr somewhere else.
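For example, the batched form of the two commands above (same result, far fewer chmod invocations):
find /www -type d -exec chmod 750 {} +
find /www -type f -exec chmod 640 {} +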
Execute permission on a directory means that a user can change into the directory and access the files inside. As a directory can't be executed in the sense of an executable file, the execute bit was overloaded to mean something else.
The link Cyrus posted to explainshell.com is an excellent tool as well.

Automatically cleanup tmp directory in Unix

I am implementing my own file cache server using the Play Framework, and I put my cached files in the /tmp directory.
However, I do not know how the OS manages the /tmp directory. What I wish to know is whether the OS will automatically clean up files that are old enough, or that have not been accessed for a long time.
I am running my server in a Docker container, based on Debian jessie.
Your OS won't clean up /tmp for you; some Unix variants clear it out at reboot. You will need to do this yourself, for example:
find /tmp/yourpath -mtime +30 -type f -exec rm {} \;
But Docker is a bit of a special case, as containers are an encapsulation layer. That find will still do the trick, but you could probably just dump and restart your container 'fresh' and trash the old one.
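If the cache should be pruned on a schedule instead of by hand, the same command works as a cron entry inside the container or on the host. A sketch; the path and the 30-day cutoff are placeholders to adjust:
0 * * * * find /tmp/yourpath -mtime +30 -type f -exec rm {} +
This runs hourly and deletes only files whose modification time is older than 30 days, so fresh cache entries are left alone.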
