I have an application that generates a few files which are stored locally on the system. I need to protect those generated files from being deleted by any user on the system. Even after setting permissions on the folder containing the files, I still cannot stop an admin from deleting them. Is there any way to achieve this on Windows/Mac/Linux?
No, there is no practical way to prevent someone with sufficient credentials from deleting the file. You can make it difficult, but you can't make it impossible unless you patch the OS and/or firmware yourself. And without trying to sound too condescending, this is the type of thing where if you have to ask how to do it, you won't be able to do it.
This is as it should be: the user owns the computer, you don't. They should have complete control over it.
Related
I want to automate testing of my users' source code files by letting them upload C++, Python, Lisp, Scala, etc. files to my Linux machine, where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert, so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the power to compile, it is very hard to stop them from escalating to root. You say the server is not important to you, but what if someone sends an email from that server, or alters some script, to obtain information about your home machine or another server you use?
At the very least you need to strongly separate yourself from them. I would suggest Linux containers (https://linuxcontainers.org/); they are trendy these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
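To give a rough idea of what that looks like in practice, here is a hedged sketch using the classic lxc-* tools; the container name, distribution and release are arbitrary examples, and you would still want resource limits and an unprivileged user inside the container on top of this.

# Create a container from a downloaded image (name, dist and release are examples)
lxc-create -n grader -t download -- --dist ubuntu --release jammy --arch amd64
lxc-start -n grader

# Run the untrusted build/test step inside the container, not on the host
# (ideally as an unprivileged user you create inside the container)
lxc-attach -n grader -- sh -c 'echo untrusted work goes here'

# Throw the container away afterwards
lxc-stop -n grader && lxc-destroy -n grader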
Read more about the chroot command in Linux.
This way you can give every running user program its own isolated filesystem environment.
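A very rough sketch of that approach follows; all paths and tool choices are illustrative, and a plain chroot is not a strong security boundary on its own, so combine it with an unprivileged user and resource limits.

# Build a throwaway root filesystem (debootstrap is one option; it must be installed)
sudo debootstrap stable /srv/sandbox

# Copy a submission in and run it inside the chroot as an unprivileged user
# (assumes the needed interpreter/compiler is installed inside the chroot)
sudo cp submission.py /srv/sandbox/tmp/
sudo chroot --userspec=nobody:nogroup /srv/sandbox /usr/bin/python3 /tmp/submission.py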
You should under no circumstances allow a user to run code on your server with root privileges. A user could then just run rm -rf / and it would delete everything on your server.
I suggest you make a new local user / group with very limited permissions, e.g. one that can only access a single folder. When you run the code on your server, you run it in that folder, and the user cannot access anything else. After the code has finished you delete the contents of the folder. You should also test this rigorously to check that they really can't destroy or manipulate anything.
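A minimal sketch of that setup; the account name, paths and the 10-second limit are just examples.

# One-time setup: a locked-down account and the single directory it may use
sudo useradd --system --user-group --shell /usr/sbin/nologin sandboxuser
sudo install -d -o sandboxuser -g sandboxuser -m 700 /srv/submissions/work

# Per submission: run the program as that user, with a time limit
cd /srv/submissions/work
sudo -u sandboxuser timeout 10s ./a.out

# Afterwards: wipe the working directory
sudo rm -rf /srv/submissions/work/*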
If you're running on FreeBSD you could also look at jails, which are a sort of lightweight virtualization that confines a user or program to a sandbox.
I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking of some system call along the lines of "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, who may not have set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the Unix permission checks in node.js. Yes, you can, but you should not! There is an almost 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel worker processes rather than a single async process. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, then you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'
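That also answers the "could user X create a file in directory Y?" question: simply attempt the operation as that user and let the kernel decide. A sketch, with a placeholder user name and path:

# Must be run as root; exits 0 only if the kernel allowed the operation
runuser -l someuser -c 'touch /srv/data/somedir/newfile'
echo $?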
I want to encrypt a folder with encfs or eCryptfs in Linux. I can do that, but I want only a specific process to be able to access it, with decryption happening automatically for that process.
The process should not need to supply an encryption key.
Can anyone help me?
File systems are designed precisely to allow access by more than one process. Wanting to restrict that access to only one process is somewhat the opposite of this idea, so it won't be smooth, however you solve your task.
A much more straightforward way, if you want just one process to have access, would be not to use a file system but a database, or just the contents of a single file. That way it would be easy to restrict access to exactly one process.
If you want to stick with encfs (or similar), you could run the process as a specific user that is the only user with read and execute permissions on the mounted file system's root.
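A hedged sketch of that idea; the account name and paths are examples. Note that FUSE mounts such as encfs are, by default, only visible to the user who mounted them, which works in your favour here.

# Dedicated service account and directories only it can enter
sudo useradd --system --user-group svcuser
sudo install -d -o svcuser -g svcuser -m 700 /srv/secret.enc /srv/secret

# Mount as the service account; encfs will prompt for the passphrase
sudo -u svcuser encfs /srv/secret.enc /srv/secret

# Only processes running as svcuser can now read /srv/secret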
This is related to Tomcat + Spring + Linux. I am wondering what the good practice is for where to store files. My idea is to put everything on the filesystem and then keep track of the files using the DB. My doubt is: where? I could put everything in the webapp directory, but that way some well-meaning colleague, or even I, could forget about that and erase everything during a clean+deploy. The other idea is to use a folder elsewhere in the filesystem... but on Linux, which one would be standard for this? On top of that, there is the permission problem: I assume that Tomcat runs as the tomcat user, so it can't create folders around the filesystem at will. I'd have to create the folder myself as root and then change the owner. There is nothing wrong with this, but I'd like to automate the process so that no intervention is needed. Any hints?
The Filesystem Hierarchy Standard defines standard paths for different kinds of files. You don't make it absolutely clear what kind of files you're storing and how they're used. At the least, either of
/srv/yourappname
/var/lib/yourappname
would be appropriate.
As for the privileges, you'll have to create the directories with the proper ownership and permissions during installation. If that's impossible, settle for the webapps directory.
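The directory creation itself is easy to script as part of deployment, so no manual root intervention is needed afterwards. For example, with whatever user, group and path apply to your install:

# Run once from the deployment script, as root
install -d -o tomcat -g tomcat -m 750 /var/lib/yourappname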
I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.
I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything anywhere other than on disk, I need a strategy for storing our password securely.
Here are some basic rules:
The system has multiple users.
We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable)
I don't mind anyone with superuser access looking at our stored password
Here is what I've got so far:
Store password in password.txt
$ chmod 600 password.txt
Process reads from password.txt when it's needed
Buffer overwritten with zeros when it's no longer needed
I'm sure there is a better way, though.
This is not a problem cryptography will solve. No matter the cipher used, the key will be equally accessible to the attacker. Crypto doesn't solve all problems.
chmod 400 is best; this makes it read-only. chmod 600 is read-write, which may or may not be a requirement. Also make sure the file is chown'ed to the user the process runs as. This is really the best you can do. Even if you are sharing the machine with other users, they shouldn't be able to access it. Hopefully this is a dedicated machine; in that case there isn't much of a threat. SELinux or AppArmor will help harden the system against cross-process / cross-user attacks.
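In concrete terms, assuming the batch jobs run as an account called batchuser (a placeholder):

chown batchuser:batchuser password.txt
chmod 400 password.txt      # owner read-only; no write, no group/other access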
Edit:
shred is the tool you need to securely delete files.
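For example (note that on journaling or copy-on-write filesystems, overwriting in place is not guaranteed to destroy every copy):

shred -u password.txt    # overwrite the file's contents, then remove it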
Edit: Based on Moron/Mike's comments, the Unix command ps aux will display all running processes and the commands used to invoke them. For instance, the following command would be exposed to all users on the system: wget ftp://user:password@someserver/somefile.ext. A more secure alternative is to use the cURL library. You should also disable your shell's history. In bash you can do this by clearing the HISTFILE environment variable: export HISTFILE=
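To make the problem and one possible mitigation concrete (the host and file names are examples; --netrc tells curl to read credentials from ~/.netrc instead of the command line):

# Any local user can see other users' command lines:
ps aux | grep ftp        # would show ftp://user:password@someserver/... in the clear

# Keep the credentials in a mode-600 ~/.netrc file instead, containing:
#   machine someserver login user password secret
curl --netrc -O ftp://someserver/somefile.ext

# And keep the password out of shell history:
export HISTFILE=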
You're not far from the best approach given your constraints. You have two issues to deal with. The first is password storage. The second is using the password securely.
Dealing with the second one first -- you have a huge issue in the use of the command line program. Using options to the 'ps' command, a user can see the arguments used in running the command line program. From what you've written, this would contain the password in plain text. You mention this is an unchangeable interface. Even so, as an ethical programmer, you should raise the issue. If this were a banking application handling financial transactions, you might consider finding another job rather than being part of an unethical solution.
Moving on to securely storing the password, you don't mention what language you are using for your batch jobs. If you are using a shell script, then you have little recourse other than to hard-code the password within the shell script or read it in plain text from a file. From your description of storing the password in a separate file, I'm hoping that you might be writing a program in a compiled language. If so, you can do a little better.
If you are using a compiled language, you can encrypt the password in the file and decrypt it within your program. The key for decryption would reside in the program itself, so it couldn't be read easily. Besides this, I would do the following (the first few items are sketched after the list):
chmod 400 the file to prevent other users from reading it
add a dot prefix ('.') to the file to hide it from normal directory listing
rename the file to make it less interesting to read.
be careful not to store the key in a simple string -- the 'strings' command will print all printable strings from a unix executable image.
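Concretely, with a hypothetical file name and binary name:

chmod 400 dbpass.enc             # owner read-only
mv dbpass.enc .cache.dat         # dot prefix plus a less interesting name

# A key stored as a plain string literal would show up in this output:
strings yourbatchprogram | less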
Having done these things, the next steps would be to improve the key management. But I wouldn't go this far until the 'ps' issue is cleared up. There's little sense putting the third deadbolt on the front door when you plan to leave the window open.
Don't bother overwriting the password buffer with zeros; it is largely pointless. The kernel can decide to swap it out to an arbitrary location in the swap file, or, after some memory allocation, it may move pages around, leaving other pages that still contain the password while you only have access to the new copy.
You can use prctl(2) with PR_SET_NAME to change the process name on the fly. Unfortunately I can't currently think of any other way than injecting some code into the running process via ptrace(2), which means enemy processes will race to read the process list before you get a chance to change the new process's name :/
Alternatively, you can grab the grsecurity kernel patches, and turn on CONFIG_GRKERNSEC_PROC_USER:
If you say Y here, non-root users will only be able to view their own processes, and restricts them from viewing network-related information, and viewing kernel symbol and module information.
This will stop ps from being able to view the running command, as ps reads it from /proc/<pid>/cmdline.
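You can check this yourself; the arguments in /proc/<pid>/cmdline are NUL-separated, so for the current shell:

tr '\0' ' ' < /proc/$$/cmdline; echo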
Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.
It's only a bad security practice because of problems in the O/S architecture. Would you expect other users to be able to intercept your syscalls? I wouldn't blame a developer who fell into this trap.