How to autostart this application during startup on Linux? [duplicate]

This question already has answers here:
How to run Java application on startup of Ubuntu Linux
(2 answers)
Closed 3 years ago.
We have an application, and each time the server gets security patches we need to perform several steps to get it started again. I wonder if it is possible to start the application automatically each time the server boots.
Currently we are doing this:
Log in to the server with PuTTY
sudo su - user
Here is the tricky part: during this "sudo su - user", the .profile of user is loaded, and in that .profile the following is done:
export JAVA_HOME="/opt/java"
. /opt/application/config/profile
umask 077
And then we start the applications:
/opt/tomcat/bin/startup.sh
/opt/config/start
/opt/config/start-base prod
Does anybody know if this is possible?
The last three steps are no problem, but I am not sure how to handle the step that loads the extra profile sourced from the .profile of the user "user".
Is it possible to put this all into a script, so that we only have to execute the script during the startup of the server?

Try using the cron daemon.
sudo crontab -e
This opens an editor; add these lines:
@reboot /opt/tomcat/bin/startup.sh
@reboot /opt/config/start
@reboot /opt/config/start-base prod
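Since the tricky point in the question is the extra profile sourced from .profile, one hedged sketch is to wrap the whole sequence in a single script and point a single @reboot entry at it in the application user's crontab (sudo crontab -e -u user); the script path /home/user/boot-apps.sh is only an example:
#!/bin/bash
# Hypothetical wrapper script, e.g. /home/user/boot-apps.sh, invoked via:
#   @reboot /home/user/boot-apps.sh
# Reproduce what the user's .profile sets up
export JAVA_HOME="/opt/java"
. /opt/application/config/profile
umask 077
# Then start the applications
/opt/tomcat/bin/startup.sh
/opt/config/start
/opt/config/start-base prod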

Related

How to add user input when starting a service in systemd

I have a service configured in systemd that runs a binary file constantly. The problem is that after starting this binary, you have to confirm Terms & Conditions by typing y in the terminal and pressing Enter. I cannot run this file as a service because of this: systemctl status shows it as failed due to the missing confirmation. Does anyone know how I can run this service and automatically accept the Terms in the terminal?
I figured it out this way:
I created a .sh file in /usr/bin with this content:
#!/usr/bin/bash
yes | /home/marek/webcash/webminer
Then I created a unit file in systemd with ExecStart=/path/to/file.sh,
and now it works: systemd runs the service correctly, the logs are written, and the "yes" answer was supplied automatically when the binary's prompt appeared.
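For reference, a minimal sketch of what such a unit file might look like (the description is an assumption; ExecStart points at the script created above):
[Unit]
Description=webminer wrapper (hypothetical example)

[Service]
Type=simple
ExecStart=/path/to/file.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target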

Dockerize user sessions [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am looking for some help on how to dockerize user sessions in Linux. What I am looking for is a way to make it so that when someone SSHes into an account and does anything, nothing they did is saved after they exit; the next person who SSHes in gets the environment set up exactly the way I had it.
It's for a CTF event I've been tasked with setting up, and since I have little knowledge of most of what I have to do, this whole process is a learning experience for me.
A good explanation of how I am hoping to have it set up is explained here: http://overthewire.org/help/sshinfra.html
You can do that by creating a Docker-based shell for the user.
Creating the user
First we create the user using the commands below:
sudo useradd --create-home --shell /usr/local/bin/dockershell tarun
echo "tarun:tarunpass" | sudo chpasswd
sudo usermod -aG docker tarun
Creating the shell
Next, create a shell script at /usr/local/bin/dockershell:
#!/bin/bash
docker run -it --rm ubuntu:latest /bin/bash
Then run chmod +x /usr/local/bin/dockershell. Now you can SSH to your VM with the new user:
$ ssh tarun@vm
tarun@vm's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-66-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
Last login: Sun Oct 1 06:50:06 2017 from 192.168.33.1
Starting shell for tarun
root@79c12f002708:/#
This takes me to the Docker container and no session changes are saved. If you want to secure it even more, you should use user namespace remapping:
https://success.docker.com/KBase/Introduction_to_User_Namespaces_in_Docker_Engine
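As a hedged sketch, remapping can be enabled via /etc/docker/daemon.json (this assumes subordinate UID/GID ranges exist in /etc/subuid and /etc/subgid, and the Docker daemon has to be restarted afterwards):
{
  "userns-remap": "default"
}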
when they exit anything they did isn't saved
That is because the writable layer of a container is discarded when the container stops.
You should make sure your container is run with a bind mount or (better) a volume: that way, the modifications made during the SSH session, if made in the right (mounted) path, would persist.
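For the opposite case, where persistence is actually wanted, a minimal sketch of a dockershell variant with a named volume (the volume name ctf-data and the mount point /data are assumptions):
#!/bin/bash
# Files written under /data live in the named volume ctf-data,
# so they survive the --rm cleanup of the container itself
docker run -it --rm -v ctf-data:/data ubuntu:latest /bin/bash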

Keep SSH running on Windows 10 Bash

I am having a problem keeping SSH running on the Windows Subsystem for Linux. It seems that if a shell is not open and running bash, all processes in the subsystem are killed. Is there a way to stop this?
I have tried to create a service using nssm but have not been able to get it working. Now I am attempting to start a shell and then just send it to the background, but I haven't quite figured out how.
You have to keep at least one bash console open in order for background tasks to keep running: as soon as you close your last open bash console, WSL tears down all running processes.
And, yes, we're working on improving this scenario in the future ;)
Update 2018-02-06
In recent Windows 10 Insider builds, we added the ability to keep daemons and services running in the background, even if you close all your Linux consoles!
One remaining limitation with this scenario is that you do have to manually start your services (e.g. $ sudo service ssh start in Ubuntu), though we are investigating how we might be able to allow you to configure which daemons/services auto-start when you login to your machine. Updates to follow.
To maintain WSL processes, I place this file in C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\wsl.vbs
set ws=wscript.createobject("wscript.shell")
ws.run "C:\Windows\System32\bash.exe -c 'sudo /etc/rc.local'",0
In /etc/rc.local I kick off some services and finally "sleep" to keep the whole thing running:
/usr/sbin/sshd
/usr/sbin/cron
#block on this line to keep WSL running
sleep 365d
In /etc/sudoers.d I added a 'rc-local' file to allow the above commands without a sudo password prompt:
username * = (root) NOPASSWD: /etc/rc.local
username * = (root) NOPASSWD: /usr/sbin/cron
username * = (root) NOPASSWD: /usr/sbin/sshd
This worked well on 1607, but after the update to 1704 I can no longer connect to WSL via SSH.
Once you have cron running you can use 'sudo crontab -e -u username' to define cron jobs with @reboot to launch at login.
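A hedged example of such an entry (the script path is only a placeholder for whatever you want started):
@reboot /home/username/startup-tasks.sh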
I just read through this thread earlier today and used it to get sshd running without having a WSL console open.
I am on Windows 10 Version 1803 and using Ubuntu 16.04.5 LTS in WSL.
I needed to make a few changes to get it working. Many thanks to Google search and communities like this.
I modified /etc/rc.local as such:
mkdir /var/run/sshd
/usr/sbin/sshd
#/usr/sbin/cron
I needed to add the directory for sshd, or I would get the error "Missing privilege separation directory /var/run/sshd".
I commented out cron because I was getting similar errors and haven't had the time or need yet to fix it.
I also changed the sudoers entries a little bit in order to get them to work:
username ALL = ....
Hope this is useful to someone.
John Butler

Switching users from root in a Rundeck job (Cannot create session: Already running in a session)

I am trying to test a scheduled job on Rundeck by running specific commands on a 16.04 Ubuntu box, and one of those will be to switch the user from root to nodeworker.
The sequence is:
Accessing the right directory as root:
cd /var/www/... (runs with no issues)
Switching to user nodeworker, no password needed:
su nodeworker
Running the command git pull origin master
I tried running it with sudo su - nodeworker -c "command here", and that did not work either; same issue. I ended up tailing auth.log and found that su gives an error when creating a session while the root session already exists, and I have no idea how to fix it:
pam_systemd(su:session): Cannot create session: Already running in a session
And I found this issue reported for Debian, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825949
Same here with the rundeck user. It used to work but not anymore. A workaround is to create the user (system type) manually before installing Rundeck.
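A hedged sketch of another direction that is sometimes suggested (not something from this thread, and the repository path is a placeholder): run the command directly as the target user instead of opening a nested login session, for example:
sudo -u nodeworker git -C /var/www/yourapp pull origin master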

How can I run JBoss as a daemon on a virtual machine?

What I've done so far, according to these instructions, is unzip JBoss and move it into my /usr/local/ directory. Then I put the jboss_init_redhat.sh script in /etc/init.d/ as jboss and edited the script to match my configuration. I then ran /etc/init.d/jboss start and all it says is
JBOSS_CMD_START = cd /usr/local/jboss-4.2.3.GA//bin; /usr/local/jboss-4.2.3.GA//bin/run.sh -c default -b 0.0.0.0
and then nothing happens. Also, if I go into /usr/local/jboss-4.2.3.GA/bin and run run.sh, it starts the server, but when I go to the VM's IP:8080 in my browser I still get nothing. Any help would be appreciated; I don't know much about doing this, so excuse my inexperience.
Init scripts should be owned and started by root.
The init script you use calls su (runuser would be better) to change to the jboss user.
The jboss user itself does not have permission to do that.
The jboss user also does not have permission to write to /var/run etc.
So run sudo /etc/init.d/jboss start (you need to set up sudo first to allow this) or change to the root account and execute /etc/init.d/jboss start.
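A hedged sketch of such a sudo rule, added via visudo (the username is an assumption):
youruser ALL = (root) NOPASSWD: /etc/init.d/jboss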
If it still fails check the logs at /usr/local/jboss-4.2.3.GA/server/default/log.
Hope this helps.
