I am using FileZilla FTP to right-click and change a directory's file permissions, as I do on many other sites/servers. For some reason this is not working in Windows Azure. FileZilla outputs "500 'SITE CHMOD 777 (mydirectory)': command not understood".
Any ideas?
The Windows Azure portal has a "Console" for websites where you can execute some shell commands. One of them appears to be chmod (fileutils) 4.1. I was able to modify the permissions on a folder using this:
chmod -R 744 myfolder
I found a hack solution to delete files on Azure:
Stop your website from the management console (https://manage.windowsazure.com)
Open up the FTP site in Filezilla
Rename the directory that has the problem to anything else (possibly an optional step, I don't know)
Delete the renamed directory
Restart your website.
That seems to do it.
Windows Azure Websites runs on Windows Server, so file permissions don't work the way they do in Linux (as @SLaks already mentioned).
However, the account your scripts (PHP/ASP.NET/Node.js) are executed under has full access to the folder /site/wwwroot, as does your FTP user. That means that from your PHP you can perform all file access operations with full privileges: read, write, delete, create files, and create directories.
What you cannot do, and what cannot be changed, is execute scripts (which is what 0777 would give you in Linux).
I am running a default t2.nano EC2 Linux AMI. Nothing has been changed on it. I am trying to rsync my local changes to the server. There is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory. The technology directory is mapped to a staging domain. i.e. technology.staging.com
:/var/www/html/technology
This is from the root, and it does work fine; it's the rsync that is failing.
When I push locally to that directory I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, which is the normal default. Here is where I am tripped up. You can see the var directory is owned by root.
You can see that the www directory then has permissions set to nginx so our server can access the files. I believe I need to add the ec2-user to this directory as well as the nginx user, so that I can rsync my files there and the server will still have access; I'm just unsure of how to do that.
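The ownership described above can be checked from a shell with something like this (paths taken from the post; the comment describes what the listing should show if the chown above was applied):
ls -ld /var /var/www /var/www/html
# /var stays owned by root:root, while /var/www and everything below it shows nginx:nginx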
As a test, I created a test directory at this location and it worked successfully.
:/home/ec2-user/test
You can see the permissions here are set for the ec2-user, which is why it works, I'm sure.
Here's the command I'm running on my local machine to rsync my files which fails.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error; I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the :/var directory?
Side note: what permission level would I set to reproduce the permissions that are currently shown?
I have server config files in the :/etc/nginx/conf.d/ directory that map to the directories I create inside the :/var/www/html directory, so I can host multiple sites on the server.
So in this example, I have a config file at :/etc/nginx/conf.d/technology.conf which maps to the directory at :/var/www/html/technology
Thank you in advance; again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how to.
The answer made sense after I spent roughly a day playing around. You have to give access to both the ec2-user and the nginx group. I believe you never want to put a user in a group that involves the server itself; I think things would go south.
After changing the owner to both the ec2-user and the nginx group, it still didn't work exactly the way I wanted it to. The reason was that the nginx group's permissions needed to be updated to what nginx had back when it was the owning user.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so I could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to serve the pages. Now that I think about it, the nginx group may have only needed read permissions to serve things, but this at least solved the problem for now.
Here are the commands I ran on the server to update the ownership and the permissions, and what the result looks like.
modify ownership
sudo chown -R ec2-user:nginx /var/www/html/technology
modify permissions
sudo chmod -R g=rwx,o=rx technology
The end result looks like this:
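As a rough sketch (assuming only the chown and chmod above have been run; size and date are placeholders), ls -ld on the directory should show something like:
ls -ld /var/www/html/technology
drwxrwxr-x 5 ec2-user nginx 4096 Jan 1 00:00 /var/www/html/technology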
You can see the permissions match, and the ownership is as we expected. The only thing I still have to figure out is that after I rsync new files to the server, I need to run the previous commands to update the permissions again. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
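One way to avoid re-running chmod after every sync is to let rsync set the modes while it transfers, using its --chmod option. A sketch, reusing the same key, paths, and address as above:
rsync -azP --chmod=Dg=rwx,Do=rx,Fg=rw,Fo=r -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Files created this way are still owned by ec2-user; if new files should also pick up the nginx group automatically, setting the setgid bit on the directory (sudo chmod g+s /var/www/html/technology) is one common way to do that.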
I have an AWS EC2 instance running the Amazon Linux AMI. I installed Apache as the web server, and the web directory is /var/www/html.
Until now I had the permissions for /var/www/html set to 777 under the user ec2-user (chmod -R 777 /var/www/html).
I read that you should usually set 644 permissions for web access. But as soon as I do that, I get a 403 Forbidden error. What do I have to change?
The difference between '7' and '6' is the execute bit. That's important on directories because it allows other users to enter the directory. Since the directory is owned by ec2-user and Apache runs as another user, the third digit (of 777) comes into play.
On individual files it may be okay to use permissions of 644, as that prevents other users from being able to modify the file. This isn't always true, though: executable files need the execute bit, and logs need to be writable by their process.
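In practice the usual split is 755 on directories and 644 on files, which can be applied separately, for example (assuming /var/www/html is the web root, as above):
sudo find /var/www/html -type d -exec chmod 755 {} \;
sudo find /var/www/html -type f -exec chmod 644 {} \;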
Here's a quick overview on directories and unix permissions: https://unix.stackexchange.com/questions/21251/why-do-directories-need-the-executable-x-permission-to-be-opene
I have a file that I want to transfer to a remote machine that is running Windows 7 32-bit.
I have a script that enables me to push the file to the machine from a Linux management server (roughly sketched after the list below), using a combination of:
1) smbclient to mount the Admin share on the W7 machine
2) winexe to move the file to the location I require
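The push looks roughly like this. This is only a sketch: the host name, credentials, and target path are placeholders, not the real values from the script.
# copy the file into the ADMIN$ share (which maps to C:\Windows) on the Windows 7 machine
smbclient '//W7HOST/ADMIN$' -U Administrator%password -c 'put myFile Temp\myFile'
# then move it into place with winexe
winexe -U Administrator%password //W7HOST 'cmd /c move C:\Windows\Temp\myFile C:\Target\myFile'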
This leaves me with the file in the correct location, but owned by the Admin user - whereas I need it to be editable by a standard user, User1.
I have been trying to resolve this by using icacls
Using winexe I can run this remotely on the W7 machine. Initially I tried setting the permissions to "Full" for the user account:
icacls c:......\myFile /grant User1:F
Checking this from the command line showed that it had apparently worked:
icacls c:......\myFile
c:......\myFile User1:(F)
However, from the Windows desktop, the file properties dialog showed User1 having only read permissions, and anything else gave access denied.
My next attempt was:
icacls c:......\myFile /setowner User1
However, when logged in to the Windows desktop as User1, attempting to delete or edit the file now tells me that doing so requires permission from User1... which is a bit perverse, since I am logged in as User1.
Any ideas?
This may or may not help, but I was unable to delete a file I copied from a Linux machine to a Windows shared folder - I was getting a 'need Administrator permission' type error.
I was trying to solve this with the smbclient -c "setmode -r;" option, but when this didn't work I realised the Windows folder itself was set for read-only access for all but Administrator level.
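If the folder's read-only flag is the problem, clearing it remotely is one option. A sketch only; the host, credentials, and path are placeholders:
winexe -U Administrator%password //W7HOST 'cmd /c attrib -r "C:\SharedFolder" /s /d'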
I have a cloud-hosted Linux server. I had vsftpd working on it, but after having issues and tinkering with a lot of settings, I now have a problem where users can log in using FTP, connect to the correct home directory, navigate within it, and download files, but they cannot upload files to the server. They get a timeout error, which appears to be a permissions error, but I can't narrow it down any more than that. /var/log/syslog gives nothing away.
The folders belong to the users. The parent www folder is set to 555. Can anyone help with this issue at all?
Cheers,
T
Try setting the permissions to 755; 555 doesn't allow writing for anyone. Are your user and group different?
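For example (assuming /var/www is the parent folder mentioned in the question):
ls -ld /var/www          # check the current owner, group, and mode
sudo chmod 755 /var/www  # owner can write; group and others can read and enter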
You may also need to enable logging for the FTP server. The timeout error may hide other errors, not only permission denied.
To enable extended logging, change these variables in your vsftpd config file:
dual_log_enable=YES
log_ftp_protocol=YES
xferlog_enable=YES
syslog_enable=NO
and check the log file name there.
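After changing the config, restart the daemon and follow the log while retrying the upload. A sketch, assuming a systemd-based system and the default vsftpd log location (use service vsftpd restart on older init systems, and whatever log file your config names):
sudo systemctl restart vsftpd
sudo tail -f /var/log/vsftpd.log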
Create a folder inside the user's folder (example: /var/www/user1/upload),
set its permissions to 777 (example: chmod 777 /var/www/user1/upload),
and then upload files into that folder.
Using a fresh installation of CentOS 6.2, when I connect to the server (SFTP mount with Nautilus) and edit files, no matter what permissions a file had before, they are reset to 700: read+write+execute only for the owner.
When SSHing directly into the machine and editing files on the command line, no permissions are changed.
The files I am editing are website scripts sitting in my Apache folders.
Why is this behavior happening? Any suggestions are welcome.
Your SFTP client might be "downloading and re-uploading" your files when you edit them, which recreates each file with new default permissions. Change your umask if you want different defaults, or use SSH and a proper editor if you want to keep the existing permissions...
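If the goal is different default permissions for files created over SFTP, one common approach is to force a umask on the SFTP subsystem in sshd_config. This is only a sketch, and it assumes an OpenSSH new enough (5.4 or later) to support the -u option for internal-sftp; the stock OpenSSH on CentOS 6.2 may be older than that:
# /etc/ssh/sshd_config: have SFTP-created files default to 644 and directories to 755
Subsystem sftp internal-sftp -u 0022
# then reload sshd, e.g.
sudo service sshd restart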