I have created a web application where users can run Java code in the browser.
I am using chroot to execute user-submitted code on the web server.
In the chroot script I mount and then unmount some required directories.
This normally works very well, but when I fire execution requests in a row, say 20-30 requests, some responses come back with the message /bin/su: user XXX does not exist, where XXX is the username on the Linux system for which I am mounting the required directories.
For the other requests I get the expected output.
My concern is: is there any side effect of mounting and unmounting repeatedly on a Linux box? Or is there a Linux setting that would make this configuration work?
In order to use /bin/su you need to have the user information provided by /etc/passwd. Have you mounted that directory or (as I would recommend) copied it to the /etc/ in the new root directory?
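A minimal sketch of the copy approach (the jail path /srv/jail is a placeholder; su typically needs /etc/group as well):
cp /etc/passwd /etc/group /srv/jail/etc/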
Concerning your mount issues: yes, mounting and unmounting can take some time and is not guaranteed to be instantaneous (the unmounting in particular can simply fail if something is still active on the mounted file system). So maybe you should check whether the unmount failed and retry in that case.
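For example, a bounded retry loop (just a sketch; the mount point /srv/jail/etc is a placeholder for whatever you actually mount):
for i in 1 2 3 4 5; do umount /srv/jail/etc && break; sleep 1; done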
Thanks for the reply. Yes, you are absolutely right, Alfe! It is a problem with mounting/unmounting in a row. I verified this by logging in to my web server over SSH. When I executed 20-30 program commands in a row (separated by semicolons), I got the desired output in sequence in my window. Then I opened another SSH window and executed 10 commands from that window and 20 commands from the previous window. This time, for some commands in both windows, I got the message "/bin/su: user XXX does not exist". So one conclusion is that when I make web requests concurrently, the executions of the commands (chroot/unchroot) are not in sync; that's why I am getting this message. I am not very good with Linux and I don't know how to address this issue.
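One common way to address this is to serialize the mount/chroot/unmount critical section with an exclusive lock, so that concurrent requests take turns. A minimal sketch using flock(1) — the jail path, lock file, and variables here are all placeholders:
(
  flock -x 9
  # critical section: mount, run the user's code in the chroot, unmount
  mount --bind /etc /srv/jail/etc
  chroot /srv/jail /bin/su - "$user" -c "$cmd"
  umount /srv/jail/etc
) 9>/var/lock/chroot-exec.lock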
Linux security and root access question
I'm setting up a server that has a validator node running on it for a Substrate-based blockchain, and I was trying to harden its security. I set up ufw to block all ports except those necessary for the node to operate. I set up 2FA and SSH with ed25519 keys, and then I spent time trying to figure out: if, for some crazy reason, someone got in... how could I stop them from using systemctl or poweroff with sudo privileges? The goal is to maximize uptime and remain in sync with the other nodes at all times.
Anyways, I started blocking bash commands for the user account that allows SSH, and blocked SSH for root. Then I blocked a few more commands and thought: what if someone could find their way around this? So I just started blocking too many things, lol. Even though I disabled sudo for the user and blocked a number of commands, the user could still use systemctl and stop the node's service. Eventually I found a guide on how to allow only a few commands for a user.
Update: I hadn't properly removed the user from the sudo group. Afterwards the user could still use systemctl, but the system then prompted for the root user's password for authentication. Anyways, I just wanted something simple yet secure, so...
I ended up removing all of the commands from the user, symlinking the su command, and renaming the link to a random command that only I know. All of the other commands run by the user respond with
-rbash: /usr/lib/command-not-found: restricted: cannot specify `/' in command names
I took away bash history and bash autocomplete/tab completion. Now the only thing you can do is guess commands that will get you to the point where you still have to get past my root password. Is there a way for hackers to scan for available commands when there is only one available that is masked in this way?
Anyways, I'm saying all of this because I have always heard that best security practices involve "disabling root". Sometimes I read that as just disabling root SSH, which I have already done, but sometimes I read it as disabling the root account entirely. Some say to disable the password and divvy access up with sudo privileges so that it's more traceable to individual users.
In my case I need to preserve root access in some way, but I have basically hidden everything behind the root user. So if anyone gets access to root, it's over. But it's behind 2FA, SSH keys, and an unknown command that only gets you to the point where you can try a password to access root.
Am I thinking about this "disable root for security" all wrong and I should disable it completely or does it make sense what I've done so far?
You can also create an SSH key and use it to log in to a Linux server instead of a password, and never share your private key.
The following tutorial shows how to create an SSH key: https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/
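In short, the steps look like this (the key type matches the ed25519 keys mentioned above; the user and server names are just examples):
ssh-keygen -t ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server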
You could also add user filtering with the AllowUsers option in the sshd_config file:
AllowUsers admin1@192.168.1.* admin2@192.168.1.* otherid1 otherid2
This allows admin1 and admin2 only from 192.168.1.* addresses, and otherid1 and otherid2 from anywhere.
Let's state a situation:
I have the possibility to run arbitrary commands on a server as an unprivileged user, through "unconventional means".
I do not have the possibility to log in to that server over ssh, either as my unprivileged user or as anything else, so I do not currently have a CLI that lets me run any commands I would like in a "normal" way.
I can ping that server, and nothing prevents me from connecting to arbitrary ports.
I would still like a command line allowing me to run arbitrary commands as I wish on that server.
Theoretically nothing would prevent me from launching any program as my unprivileged user, including one that would open a port, allow some remote user to connect to it, and just forward any commands to bash, returning the result. I just don't know a good program to do that.
So, does anyone know of one? I looked at ways to launch an SSH server (sshd) as an unprivileged user, but some users reported that recent versions no longer allow that. Actually, I don't even need ssh specifically; any way to get a working CLI would do the trick. Even a crappy node.js program launching an HTTP server would work, as long as I get a CLI (... and it's not excessively crappy; the goal is to have a clean CLI, not something that glitches every two characters).
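For illustration, the kind of thing I have in mind is a bare-bones listener like this sketch with ncat (assuming it even exists on the box; it is completely unauthenticated, so only acceptable on a trusted network, and the port is arbitrary):
ncat -l 8022 -e /bin/bash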
In case you are wondering why I would want to do that: it's not related to anything illegal ^^. I just have to work with a very crappy Jenkins server whose agents I'm not allowed to access directly. Whoever is responsible for that server doesn't give a sh** about its users' needs, so we have to use hacky solutions just to get some diagnostic data about it (RAM, CPU and disk usage, installed programs, etc.). Having a CLI that I can launch on occasion, instead of altering a build configuration and waiting 20 minutes for an answer about what's going on, would really help.
Thanks in advance for any answer.
So do you have shell access to the server at least once? E.g., during the single day of the month when you are physically present at the site of your client or the outsourcing contractor?
And if you have shell access then, can you or your sysadmin install Cockpit?
It listens on port 9090.
You can then use your local user's credentials and open a terminal window in your browser. See the sidebar item "Terminal" in the screenshots on the Cockpit homepage.
According to the documentation:
Cockpit has no special privileges and doesn’t run as root. It creates a session as the logged in user and has the same permissions as that user.
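Installing and enabling it on a RHEL-family system is typically just the following (a sketch; use apt on Debian/Ubuntu, and package or unit names may vary by distribution):
sudo yum install cockpit
sudo systemctl enable --now cockpit.socket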
I have a Windows Server 2019 installation with an LDAP instance (nfsmappingstore) for NFS mapping. I created this with the PowerShell cmdlet Install-NfsMappingStore.
To illustrate, here is a list of the users in that store, and a test of one user:
I have an NFS Share setup as illustrated here:
When I turn on the option called "Enable unmapped user access", with the sub-option "Allow unmapped user Unix access (by UID/GID)", I can go to my Ubuntu 18.04 machine and mount the share successfully with the command:
sudo mount -t nfs server:/AutoProv mnt
I can then see the files and folders in the share.
However, when I turn that option off, wishing to actually use the mapped user functionality, I get the error:
root@br-dv-ss-l01:/home/steve# mount -vvvv -t nfs server:/AutoProv mnt
mount.nfs: timeout set for Fri Apr 2 18:28:11 2021
mount.nfs: trying text-based options 'vers=4.2,addr=10.200.225.1,clientaddr=10.200.225.104'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=10.200.225.1,clientaddr=10.200.225.104'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting server:/AutoProv
root@br-dv-ss-l01:/home/steve#
I think this means that the UID/GID was not actually sent, or was not interpreted, by Windows Server 2019. The event logs on the server seem to indicate that it is happy and reading the LDAP instance OK, and the Test cmdlet gives no errors.
The one change I could think of that produced a slightly different effect was adding "-o nfsvers=3" to the mount command. When I did that, the share did actually mount, but the NFS server refused to let me see anything inside the share.
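For reference, that adjusted command was of this form (a sketch based on the description above):
sudo mount -t nfs -o nfsvers=3 server:/AutoProv mnt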
Can someone guide me as to how to investigate this issue further? At this time I do not know how to verify what the Windows Server is getting as far as UID/GID, so I really don't know which side of this the issue is on.
Thank you!
Incidentally, we never got an answer on this; the LDAP option appears not to work at all. However, the passwd/group file mapping option works great, and we switched to that.
The files have to be named 'group' and 'passwd', lowercase with no extension, and they have to be placed in 'C:\Windows\System32\drivers\etc'. Syntax for 'group':
machinename\PodGroup:x:2501:2500,3500
machinename\MyLocalWindowsGroup:x:2502:3500
domain\ACLname:x:2503:0,2500
or generally:
groupname:x:GID:UID1,UID2,etc....
Essentially this is the same mapping that Linux uses as described here: https://www.thegeekdiary.com/etcgroup-file-explained/
So you define a local group on the Windows Server that is running NFS and reference it in the first field of the line, or use a domain ACL group there. The fields of each line are separated by colons. The second field, 'x', is the unused password placeholder. The third is the GID you want the Windows group to be mapped to, and the last field lists the member UIDs.
For passwd:
domain\stevesims:x:0:0:Root User,,,:c:\users\stevesims
domain\johndoe:x:2500:2501:Pod User,,,:c:\users\johndoe
domain\janedoe:x:3500:2501:Pod User,,,:C:\users\janedoe
or generally:
username:x:UID:GID:desc,,,:WindowsPathToItsProfile
Once these files are in place on the NFS server, you must restart the Server for NFS service or it will not reread them.
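For example, from an elevated PowerShell prompt (NfsService is the service name I believe Server for NFS registers; verify with Get-Service if in doubt):
Restart-Service NfsService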
After it has restarted, it will read the files, and when a Linux machine writes a file to the NFS server, that file will be treated as if it had the permissions of the matching Windows account or group from these files.
Today, my MongoDB database went down after weeks of being up. After some digging around, I realized that the permissions of my mongodb-27017.sock file were incorrect.
Running chown mongod:mongod mongodb-27017.sock resolved the issue.
My MongoDB instance had been running perfectly fine for weeks. How did the permissions change all of a sudden? How can I prevent myself from running into this issue again?
For context: I'm running an Amazon Linux 2 instance on AWS.
After almost one year of flawless operation, one of our replica set members hit an error related to exactly this: the mongo.sock owner had changed from mongod:mongod to root:root.
I started searching for what had changed the file's owner, but unfortunately there is no way to find that out after it has already happened.
So my search led me to auditctl.
According to the man page, it is "used to control the behavior, get status, and add or delete rules".
I set up a watch like auditctl -w /tmp -p rwxa -k keyname and started waiting.
I wrote a simple shell script to notify me when auditctl caught whatever was changing the ownership of the file. After a couple of hours, I got the notification.
From the audit output you can find information like the pid, syscall, uid, etc.
But the most important field is comm, which tells you which program was used and caused the change.
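For example, to pull up and interpret the records matching the key from the watch rule above:
ausearch -k keyname -i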
In my case, following the audit logs, I found out that a co-worker of mine had just created a cronjob that affects the mongo.sock file.
How come I always get
"GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)"
when I start 'gedit' from a shell from my superuser account?
I've been using GUI apps as a logged-in user and as a secondary user for 15+ years on various UNIX machines. There are plenty of good reasons to do so (remote shells, testing configuration files, running multiple sessions of programs that only allow one instance per user, etc.).
There's a bug report on Launchpad that explains how to eliminate this message by setting the following environment variable:
export DBUS_SESSION_BUS_ADDRESS=""
The technical answer is that gedit is a Gtk+/GNOME program and expects to find a current gconf session for its configuration. When you run it as a separate user who isn't logged in on the desktop, it doesn't find one, so it spits out a warning telling you so. The failure should be benign, though, and the editor will still run.
The real answer is: don't do that. You don't want to be running GUI apps as anything but the logged-in user, in general. And you never want to be running any GUI app as root, ever.
For some distributions (RHEL, CentOS) you may need to install the dbus-x11 package:
sudo yum install dbus-x11
Setting and exporting DBUS_SESSION_BUS_ADDRESS to "" fixed the problem for me; I only had to do this once, and the problem was permanently solved. However, if you have a problem with your umask setting, as I did, the GUI applications you are trying to run may not be able to create the directories and files they need to function correctly.
I suggest creating (or having someone create) a new user account solely for test purposes. Then you can see whether you still have the problem when logged in to the new account.
I ran into this issue myself on several different servers and tried all of the suggestions listed here: made sure ~/.dbus had proper ownership, ran service messagebus restart, etc.
It turns out that my ~/.dbus was mode 755, and the problem went away when I changed the mode to 700. I found this by comparing known working servers with servers showing this error.
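In other words:
chmod 700 ~/.dbus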
I understand there are several different answers to this problem, as I have been trying to solve it for 3 days.
The one that worked for me was to run
rm -r .gconf
rm -r .gconfd
in my home directory. Hope this helps somebody.