While creating an RPM spec file I create a new user and group in the %pre section. For security purposes this new user does not have permission to log in from a shell. When I install the RPM the new user is created successfully. However, I want to start the installed service as that newly created user. Currently my init.d file simply runs 'filePath/file.exe file.cfg' to execute file.exe with its configuration file, file.cfg. How can I modify this command so that the same service starts under the user I created while installing the RPM? Basically I want to execute the program from my init.d file as a different user, much as I would with sudo if the required user were the superuser. Any feedback will be highly appreciated.
Your initial starting point both for installing the rpm and for running the service is privileged. For instance, on my CentOS 6 machine, I see in /etc/passwd
games:x:12:100:games:/usr/games:/sbin/nologin
but running as root, I can do this:
$ sudo -u games /bin/sh
sh-4.1$ echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin
sh-4.1$ id
uid=12(games) gid=100(users) groups=100(users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
sh-4.1$ cd
sh-4.1$ pwd
/usr/games
In your service script, you can use sudo to run a given process as another user (though a quick check of the same machine does not show this being done).
@msuchy points out that runuser may be preferable. I see that this is relatively recent (according to "Ubuntu runuser command?", it appeared in util-linux 2.23 -- the lack of a date makes the release notes less than useful...). The oblique comments in its documentation about PAM make it sound as if it circumvents some of the security checks. Perhaps someone has a better comment about that.
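For the init script in the question, a rough sketch of the start command (the service account name "svcuser" is a placeholder; the file paths are the ones from the question). Since the account has a nologin shell, su and runuser need an explicit -s:
# run the daemon as the unprivileged account created in %pre ("svcuser" is a placeholder)
sudo -u svcuser filePath/file.exe file.cfg &
# or, forcing a usable shell because the account's login shell is nologin:
su -s /bin/sh -c "filePath/file.exe file.cfg" svcuser &
# runuser accepts the same -s/-c options where it is available:
runuser -s /bin/sh -c "filePath/file.exe file.cfg" svcuser &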
Is there any way to get VS Code to work properly on Linux? I can't run sudo code . because that gives me an error saying it's not secure to do so, and I can't force certain actions from within the editor, like staging a file in Git or reloading a newly installed extension. I've googled around and it seems nobody else has posted about this, which seems highly unlikely, since I can hardly be the first to raise an issue about it. (Take it easy on me, I'm a relatively new Linux user.) I'm trying to figure this out on Ubuntu 18.04, if that's relevant at all. My version of VS Code is 1.30.2.
I guess my main question is: what's the right way to let applications like VS Code perform tasks that require elevated privileges without fighting the OS over sudo and permissions?
Launch via sudo from terminal
To launch VSCode as root (which is highly discouraged), you must specify an alternate user data directory as follows:
$ sudo code --user-data-dir /path/to/alternate/folder
VSCode will automatically generate the required folders in the selected directory and launch with root privileges.
Change permissions to fix "permission denied" error
The solution in this case is to manually change the permissions of the two directories /home/$USER/.config/Code/ and /home/$USER/.vscode/. Perform these steps:
$ sudo chmod 755 /home/$USER/.config/Code
$ sudo chmod 755 /home/$USER/.vscode
To answer your other question:
If you really need to run several commands as root and you are annoyed by having to enter your password several times (when sudo has expired), just do sudo -i and you'll become root.
If you want to run commands using pipes, use sudo sh -c "command1 | command2".
You may also want to take a look at this Ask Ubuntu answer about running applications as root.
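As a quick illustration of why the sh -c wrapper matters once pipes or redirection are involved (the target file below is only an example):
$ sudo echo 3 > /proc/sys/vm/drop_caches         # fails: the redirection is done by your unprivileged shell
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches" # works: the whole command line runs as root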
I solved this problem using:
sudo chown -R YOUR_USER YOUR_PROJECT/
You basically need to tell the OS that you are the owner of the files you create. Use sudo chown <user name> <projects directory>
However, if you already created some files before applying chown, don't forget to change their ownership as well: sudo chown <user name> <projects directory>/<file name>.
Question: How can I confirm whether or not my "Dedicated Server" is running properly?
Background: I am working to get a 'Dedicated CoreNLP Server' running on a stand-alone Linux system. This system is a laptop running CentOS 7. This OS was chosen because the directions for a Dedicated CoreNLP Server specifically state that they apply to CentOS.
I have followed the directions for the Dedicated CoreNLP Server step-by-step (outlined below):
Downloaded CoreNLP 3.7.0 from the Stanford CoreNLP website (not GitHub) and placed/extracted it into the /opt/corenlp folder.
Installed authbind, created a user called 'nlp' with super user privileges, and allowed that user to bind to port 80:
sudo mkdir -p /etc/authbind/byport/
sudo touch /etc/authbind/byport/80
sudo chown nlp:nlp /etc/authbind/byport/80
sudo chmod 600 /etc/authbind/byport/80
Copied the startup script from the source jar at path edu/stanford/nlp/pipeline/demo/corenlp to /etc/init.d/corenlp (a sketch of one way to do this follows this list)
Gave executable permissions to the startup script: sudo chmod a+x /etc/init.d/corenlp
Linked the script into /etc/rc.d/: ln -s /etc/init.d/corenlp /etc/rc.d/rc2.d/S75corenlp
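For reference, one way to copy that script out of the distribution jar (the jar name and location are assumptions based on the 3.7.0 download and may differ on your system):
sudo unzip -p /opt/corenlp/stanford-corenlp-3.7.0.jar edu/stanford/nlp/pipeline/demo/corenlp | sudo tee /etc/init.d/corenlp > /dev/null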
Completing these steps is supposed to allow me to run sudo service corenlp start to start the dedicated server. When I run this command in the terminal I get the output "CoreNLP server started", which IS consistent with the startup script "corenlp". I then run the start command again and get the same response, which is NOT consistent with the startup script. From what I can tell, if the server is actually running and I try to start it again, I should get the message "CoreNLP server is already running!" This leads me to believe that my server is not actually functioning as intended.
Is this command properly starting the server? How can I tell?
Since the "proper" command was not functioning as I thought it should, I used the command sudo systemctl *start* corenlp.service and checked the service's status with sudo systemctl *status* corenlp.service. I am not sure if this is an appropriate way in which to start and stop a 'Dedicated CoreNLP Server' but I can control the service. I just do not know if I am actually starting and stopping my dedicated server.
Can I use systemctl command to operate my Dedicated CoreNLP Server?
Please read the comments below the originally posted question. This was the back and forth between @GaborAngeli and myself which led to my question/problem being solved.
The two critical steps I took to get my instance of the CoreNLP server running locally on my machine, after following all the directions for setting up a dedicated server outlined on Stanford CoreNLP's webpage, are as follows:
Made two modifications to the "corenlp" startup script: (1) added sudo to the command because the user "nlp" needs permissions for certain files on the system; (2) changed the authbind path from /usr/local/bin/authbind to /usr/bin/authbind. The authbind installation location must have changed since the startup script was written.
nohup su "$SERVER_USER" -c "sudo /usr/bin/authbind --deep java -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir_"$CORENLP_DIR" -cp "$CLASSPATH" -mx15g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 80"
If you were to attempt to start the server with only the change above, it would not start successfully, because sudo requires a password. To allow sudo privileges without a password prompt you need to edit the sudoers file (I did this as the root user, because you need permissions to change or even view this file). My sudoers file was located in /etc. There is a part that says ## Allows people in group wheel to run all commands and below that a section that says ## Same thing without a password. You just need to remove the comment mark (#) from in front of the next line, which says %wheel ALL=(ALL) NOPASSWD: ALL. Save the file. BE CAREFUL IN EDITING THIS FILE AS IT MAY CAUSE SERIOUS ISSUES. MAKE ONLY THE NECESSARY CHANGE OUTLINED ABOVE.
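For reference, the relevant stanza in a stock CentOS /etc/sudoers looks roughly like this after the change (editing it with visudo is safer, since visudo checks the syntax before saving):
## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

## Same thing without a password
%wheel  ALL=(ALL)       NOPASSWD: ALL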
Those two steps allowed me to successfully run my dedicated server. My system runs on CentOS 7.
HELPFUL TIP: From my discussion with @GaborAngeli I learned that within the 'corenlp' folder (/opt/corenlp if you followed the directions correctly) you can open the stderr.log file to help you in troubleshooting your server. This outputs what you would see if you were to run the server in the command window. If there is an error it is output here too, which is extremely helpful.
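If it helps, a few quick ways to confirm that the server is actually up (assuming it was started on port 80, as in the command above):
tail -f /opt/corenlp/stderr.log       # startup messages and errors land here
sudo ss -tlnp | grep ':80 '           # is anything listening on port 80?
curl -sS http://localhost:80/         # any HTTP response here means the server is answering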
I recently noticed some fairly strange and, for me, unexpected behaviour in Xubuntu 12.04 and 14.04.
I was doing the following:
Testing if my user is in the group users, with
groups $USER
This is not the case by default. So I add my user to this group:
sudo usermod -a -G users $USER
I can then check the file /etc/group and see my user added to the entry.
I then would like to give the group users access to some files, in my example the www and cgi-bin directory:
sudo chgrp users /var/www /usr/lib/cgi-bin
I also want the group to be able to write to the directories:
sudo chmod g+w /var/www /usr/lib/cgi-bin
I would assume that I can now create a file in those directories, but I can't, neither from the command line nor with the standard file browser in Xubuntu.
Somewhere I read that I need to log out from the terminal to make it work, so I closed and reopened the terminal, but it is still not working.
But: if I reboot the whole system, everything works as it should...
Seriously? Why is this? Is it a bug or a feature, and are there better ways than restarting the complete OS?
(I thought the strength of Linux was exactly that you don't need to reboot all the time like in other "popular" OSes.)
(Note: I have not tested this on other systems, e.g. Debian, yet...)
Group memberships are inherited from process to process, like many other things in a unixoid environment. That means a running shell will not be affected by such changes in the account configuration. Just opening a new terminal or shell will not show the change either, since it is spawned from an already running process, ultimately from the initial process started right after login.
You have to re-run the login process instead, either by restarting the graphical environment or by doing a logout/login sequence when working on the virtual terminals. Obviously, rebooting also leads to a new login process.
The only direct alternative is to start a new login session explicitly: su - $USER (or newgrp users) does the trick, because the supplementary group list is rebuilt when the login process runs; a plain bash -l is not enough, since it only re-runs the login scripts without refreshing the group list. But note that this only affects the newly started shell and the processes spawned from it. It does not affect other already running processes, so you end up with a somewhat mixed environment...
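A quick way to verify, using the directories from the question (the test file name is just an illustration):
su - "$USER"              # fresh login session; the group list is rebuilt here
groups                    # 'users' should now appear
touch /var/www/testfile   # the write should now succeed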
I have repackaged a Bash RPM to include automatic logging to syslog. I am trying to work out a way to set this up so that it is used ONLY when a user or service account runs a command as root. The option I'm looking at is installing this version of Bash to an alternate location and then pointing root at that version as its default shell.
Can someone go through the process of installing this RPM to an alternate path and associating the root account to it as the default shell? I have been having difficulty finding a way to do this when searching online.
Since you are repackaging the RPM, it is probably best to change the destination path directly in the RPM.
As for the default shell, run chsh -s /path/to/your/bash root to change it.
Be aware that this solution may not work for all purposes though. For example, running a script that starts with #!/bin/bash will still execute it with /bin/bash instead of your default login shell.
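A minimal sketch of the two pieces, assuming the repackaged Bash is installed under a hypothetical /opt/logbash prefix (the prefix is illustrative, not part of the original question):
# in the spec file, point the build at the alternate prefix
%global _prefix /opt/logbash

# after installing the RPM, register the shell and make it root's login shell
echo /opt/logbash/bin/bash >> /etc/shells
chsh -s /opt/logbash/bin/bash root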
I've got an SVN instance installed on a free EC2 AWS server. In short: I'm using LAMP.
Using what I read in this article, I encountered the "you need a TTY" error mentioned in the comments. I followed the second resource and it cleared the error message, but the script doesn't seem to be executed. When I run the script manually, however, it works.
Any clue what I'm missing?
When I followed the second resource to fix the TTY error I changed the contents of my /svn/repository/hooks/post-commit script from:
#!/bin/bash
sudo /usr/local/bin/svn-post-commit-update-mysite 1>&2
to:
#!/bin/bash
su --session-command="/usr/local/bin/svn-post-commit-update-mysite 1>&2" dynamic &
First possible issue:
You cannot rely on the value of the $PATH variable inside the hook. This means you need to specify complete paths for all executables.
In particular, "su" is a program located in "/bin/sh" in most distributions. To be sure, type
type su
Next possible issue:
Is your Subversion server being run as root? su will try to ask for a password if run by other users, and it will fail if it is not being run interactively - even if the user is in the sudoers file!
If you are using Apache+DAV, this means the Apache service must be run as root for this to work (instead of www-data), which is a serious security problem.
You probably don't need to use su or sudo at all if all of the files are owned by the same user (www-data, for instance). You can change the ownership of the site files with something like
sudo chown -R www-data:www-data /var/www/<my-project>
And then remove the sudo and su from both the hook and the svn-post-commit-update-mysite file.
My best guess would be that something in your script depends on the PATH environment variable. Subversion runs hooks in an empty environment for security reasons, so you need to either set up the environment in your shell script or use absolute paths.
You might want to read the Subversion book entry on implementing hook scripts. The particular issue I mentioned is explained in the information block.
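For example, a minimal post-commit hook that sidesteps the empty-environment problem might look like this (the update script path is taken from the question; the PATH value is just a reasonable default):
#!/bin/bash
# Subversion gives hooks an empty environment, so define PATH explicitly
# (or call every program by its absolute path).
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
/usr/local/bin/svn-post-commit-update-mysite 1>&2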