Is chmod 757 safe? - security

As I am on a shared host, I want to add an image hosting script. With 755 it doesn't let me upload images, so I changed the folder to 757. Is it safe to chmod it to 757?

In a word, no. In two words, "hell, no!"
Let's interpret 757; that would be:
owner: read, write, execute
the file's group: read and execute
the rest of the freaking world: read, write, execute
Now consider someone malicious uploading a short shell script:
#!/bin/sh --
rm -rf /
Update
Aha, the "folder". Okay, here's the deal: if you don't have the execute bit set on a directory, that blocks searching the directory. The reason the host is asking you to do the world=RWX is that they aren't running the web server as you, so they're taking the simple and dumb route to fix it.
There are two possibilities here:
they have some scheme in place to make sure that the permission of uploaded files in that directory can't have the execute bit set
they don't and haven't gotten burned yet
Here's an article on better methods.
On the assumption that your hosts aren't fools, see what happens with 775.
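If your host runs the web server under a group you can use, a gentler direction than 757 is group write plus group ownership rather than world write. A minimal sketch, assuming the web server group is www-data and the upload folder is uploads/ (both names are assumptions; shared hosts vary):
$ chgrp www-data uploads
$ chmod 775 uploads         # owner and group get rwx, the rest of the world only r-x
$ chmod g+s uploads         # optional: files created inside inherit the directory's group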

Related

Execute a bash script without typing ./ [duplicate]

I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├── bin
│   └── command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├── c (symlink to bin/command)
├── bin
│   └── command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
Instead I must do:
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./ and without using a system-wide alias? It would be very convenient to give distributed executables and utility scripts one-letter, per-project shortcuts.
It's not that bash disallows execution from the current directory; rather, you haven't added the current directory to the list of directories commands are searched for in (your PATH).
export PATH=".:$PATH"
$ c
$
This can be a security risk, however, because if the directory contains files you don't trust or don't know the origin of, a file sitting in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run chmod 755 bar.
You assume chmod really is the system chmod, but if there is a file named chmod in the current directory and your colleague put it there, the chmod you just ran is really a program they wrote, and you ran it as root. Perhaps that "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
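Here is a harmless demonstration of that hijack; every name is made up for the demo and the fake command only prints a message:
$ mkdir /tmp/foo && cd /tmp/foo
$ printf '#!/bin/sh\necho "this is NOT the real chmod"\n' > chmod
$ /bin/chmod +x chmod       # use the full path so there is no ambiguity yet
$ export PATH=".:$PATH"     # the current directory now shadows the system directories
$ chmod 755 bar             # runs ./chmod instead of /bin/chmod
this is NOT the real chmod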
Beware that the accepted answer introduces a serious vulnerability!
If you do add the current directory to your PATH, don't put it at the beginning; that would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one goes away. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot stays at the end of the PATH whenever you add new directories to it.
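You can check which entry wins for a given name with bash's type builtin; chmod is just an example name, the paths in the output may differ on your system, and the second line only appears if an executable file called chmod exists in the current directory:
$ PATH="$PATH":.
$ type -a chmod             # lists every match in PATH order; the first line is what actually runs
chmod is /usr/bin/chmod
chmod is ./chmod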
You could add . to your PATH. (See kamituel's answer for details)
Also there is ~/.local/bin for user specific binaries on many distros.
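If per-project one-letter shortcuts are the real goal, a symlink in a per-user bin directory avoids touching the PATH order at all. A minimal sketch, assuming ~/.local/bin is already on your PATH and the project lives in ~/project (both are assumptions):
$ mkdir -p ~/.local/bin
$ ln -s ~/project/bin/command ~/.local/bin/c
$ c                         # resolves via ~/.local/bin, no leading ./ required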
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.

Best practices in assigning permissions to web folders

I would like to know the best, correct and recommended way of doing chown and chmod on website files and folders.
I recently started working on Linux, and I have been doing it in the site root directory like the following:
sudo chown www-data:www-data -R ./
sudo chmod 775 -R ./
I know it is not the best way. There is a protected folder which should not be accessible from browsers and should not be writable, so I did the following to the protected folder:
sudo chown root:root -R protected/
sudo chmod 755 -R protected/
Is it correct? If anything can be improved please let me know.
Read your command again. What you are saying is "make everything executable" below these directories. Does an HTML file or a GIF need to be executable? I don't think so.
Regarding the directory which should not be writable by the webserver: think about what you want to do. You want to revoke write access to that directory from the webserver user and the webserver group (and everybody else anyway), which would translate to chmod -w theDir. What you actually did was tell the system "I want root to make changes to that directory, which shall be readable by everybody and by the root group". I highly doubt that is what you meant.
So I would suggest having the directory owned by a webserver user with only minimal read access; it should belong to a group (of users, that is) which is allowed to make the necessary modifications. The webserver does not belong to that group, because you want to prevent the outside world from making modifications. Another option is to hand all the directories over to root and an editor group, and to control what the webserver can do via the "others" permission bits. Which to use depends heavily on your environment.
Edit:
In general, the "least rights" policy is considered good practice: give away as few rights as possible to get the job done. This means read access to static files and depending on your environment php files, read and execute rights for cgi executables and read and execute rights for directories. Execute rights for directories allow you to enter and read it. No directory in the document root should be writable by the webserver ever. It is a security risk, even though some developers of bigger CMS do not seem to care to much about that. For temporary folders I would set the user and groups to nobody:nogroup and set the sticky bit for both user and groups.

Linux - Giving all users access to a folder (and all folders and files below)

I know this seems like its a question I could just google, but I've tried and to be honest I'm still stuck.
The question I'm trying to solve asks...
Your current directory is sample_dir. Add the permission (using
symbolic) for gen_ed so that all users can access the file cars2:
The path is stenton/gen_ed/cars2 from the working directory.
So naturally, I assumed it was:
chmod -R ugo+r stenton/gen_ed; however, that fails. I've tried a ton of iterations on the same line of thinking, but they've all failed.
Can someone please end this torment!
chmod go+x stenton/gen_ed
THIS should do the job!
What bit has to be set for a directory to be accessible (traversable) by all users?
For a file to be accessible, a user must be able to traverse all the directories from the root to that file. Can they?
Since it's an assignment, I won't give you the answer.
For anyone who is still searching for the correct answer for this question, it is:
chmod a+x stenton/gen_ed
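To convince yourself why the execute bit is the one that matters, inspect each component of the path from the working directory; every directory along the way needs x for "all users" to reach the file:
$ ls -ld stenton stenton/gen_ed stenton/gen_ed/cars2
$ namei -l stenton/gen_ed/cars2      # if available (util-linux): lists every path component with its owner and mode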

about .plan! How to execute programs within the .plan file

I am currently learning Linux commands and I am wondering how to run commands from within the .plan file.
For example, I want it to show a message as would be output from the ~stepp/cosway program.
I typed ~stepp/cosway "HELLO" but it didn't work. What is the command for that?
Also, how do I set all files in the current directory and all its subdirectories, recursively, to have the group admin?
The .plan file is a plain text file that is served by the fingerd daemon. For security reasons, it's not possible to execute commands from that file, unless you modify and recompile fingerd on your machine to do so.
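If the goal is simply to have that program's output appear in your plan, the usual workaround (an assumption about your intent; fingerd itself won't do this for you) is to capture the output into the file whenever you want to refresh it:
$ ~stepp/cosway "HELLO" > ~/.plan    # .plan now contains the program's output as static text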
Concerning the second part of your question, use chgrp:
$ chgrp -R admin *
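Note that * does not match hidden (dot) files. If those should be covered too, point chgrp at the directory itself:
$ chgrp -R admin .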

Why can't my apache process write to my world-writeable file?

I'm having this problem and I've reached a dead end; at this point I would try anything. My problem goes like this:
I have a Perl/CGI script installed on a Fedora 9 machine running Apache 2. The script has a config file placed in the same directory, and this config file has 777 permissions.
The script can't write to the file. It can read it, but in no way could I get it to write to it. The file is owned by the same user Apache runs as. I wrote a small PHP script to test and placed it in the same folder; the PHP script can also read the file but can't write to it.
I'm desperate here and I don't know where to start with this problem, so any help to get me on the right track would be appreciated.
EDIT: I can open the file for editing from the command line; it is Apache that can't access it.
EDIT2: The folder hierarchy is /var/www/cgi-bin/script, with owners and permissions like this:
/var                      owner root     755
/var/www                  owner root     755
/var/www/cgi-bin          owner root     755
/var/www/cgi-bin/script   owner apache   755
EDIT: The problem was SELinux. I disabled it and the script had access to the file. Thanks to everyone who contributed.
Thanks in advance
Does Apache run with some SELinux profile or similar that prevents it from writing in that directory?
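Since the asker's edit confirms SELinux was the culprit, here is a sketch of how one might diagnose and fix it without disabling SELinux entirely. The config.ini path is the placeholder used elsewhere in this thread, and httpd_sys_rw_content_t is the usual label for files Apache may write under the targeted policy; check your distribution's policy before relying on it:
$ getenforce                                     # "Enforcing" means SELinux may be blocking the write
$ ls -Z /var/www/cgi-bin/script/config.ini       # show the file's current SELinux context
$ sudo chcon -t httpd_sys_rw_content_t /var/www/cgi-bin/script/config.ini
$ sudo ausearch -m avc -ts recent                # if it still fails, look for the denial in the audit log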
The user apache probably doesn't have permission to one of the parent directories. It needs to have at least execute permission in all of the directories up to and including the directory that contains your file.
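A quick way to check that; the path comes from the question's EDIT2, config.ini is a placeholder, namei is part of util-linux, and the sudo test assumes you may run commands as the apache user:
$ namei -l /var/www/cgi-bin/script/config.ini    # shows owner, group and mode of every component along the path
$ sudo -u apache test -w /var/www/cgi-bin/script/config.ini && echo writable || echo "not writable"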
EDIT: Right, considering this is a programming site, some code might be in order.
Use the absolute path to the file when testing, not the relative one, to make sure you're looking at the right file.
If it is a permissions problem, $! should contain a "Permission denied" error. Can you print out the problem with:
open(FILE, ">/path/to/file/config.ini") || die "Cannot open: $!";
...
close(FILE);
Maybe some other process has a write lock on the file? Try lsof to see who is holding it open.
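For example, using the placeholder path from the answer above:
$ lsof /path/to/file/config.ini    # lists any process that currently has the file open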
Does the directory allow permission for the webserver to write files there?
I know that a previous post touched on this, but I think it bears repeating: When discussing a problem of this nature it's helpful to include the relevant code and the output of the exception. If an I/O operation fails, $! should contain the system error message, which would explain why the operation failed. Saying "it didn't work" doesn't really give us anything to go on.
