I have the Tomcat 9 package installed on Ubuntu 20.04.
Like in the manual here: https://salsa.debian.org/java-team/tomcat9/blob/master/debian/README.Debian
Per that README, we need to whitelist the filesystem paths one by one (via an override).
--
Is there a way to disable the sandboxing, or a way to include all filesystems, or all of "/"?
Or is there a way to use a wildcard, like all /home* (/home, /home02, /home03)?
This is controlled by systemd.
The systemd tomcat9 service file on Ubuntu is:
/etc/systemd/system/multi-user.target.wants/tomcat9.service
To disable the filesystem protections, you'll need to change the ProtectSystem directive from the default of 'strict' to 'false':
ProtectSystem=false
After that run:
sudo systemctl daemon-reload
sudo service tomcat9 restart
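If you would rather keep the sandbox and only open up the extra mount points, a gentler option is a drop-in override; the sketch below lists the paths from the question explicitly, since as far as I know ReadWritePaths does not expand globs like /home*. Run sudo systemctl edit tomcat9 and add:
[Service]
ReadWritePaths=/home /home02 /home03
systemctl edit writes this to /etc/systemd/system/tomcat9.service.d/override.conf, which survives package upgrades; restart the service afterwards as above.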
NOTE: I am running Red Hat 6.7
I have a service that is configured with the Linux init system to start a process as a service when the machine boots. This was done by doing this one-time configuration from the command line:
ln -snf /home/me/bin/my_service /etc/init.d/my_service
chkconfig --add my_service
chkconfig --level 235 my_service on
When the OS reboots, the service starts as expected.
I ALSO need the service to be restarted if the service (my_service) crashes. From what I've read, all I need to do is add an entry to /etc/inittab that looks like this:
mysvc:235:respawn:/home/me/bin/my_service_starter
Where my_service_starter looks like:
#!/bin/bash
/home/me/bin/my_service start
My understanding is that when the init system detects that my_service is not running, it will attempt to restart it by running "my_service_starter".
However this does not seem to be working.
I need to understand how to tell the Linux init system to restart my service when the service crashes.
Given an entry like:
mysvc:235:respawn:/home/me/bin/my_service_starter
Then init will:
call /home/me/bin/my_service_starter
which will call /home/me/bin/my_service start
...and then exit, so init will think your service has failed
so init will call /home/me/bin/my_service_starter again
...and so forth, which will result in init deciding that your script is respawning too fast, after which it will ignore it completely.
A process started by inittab is not expected to exit. If you really want to use inittab to maintain your service, you could remove /etc/init.d/my_service, and then in /etc/inittab you would have something like:
mysvc:235:respawn:/home/me/bin/my_service
And you would need to ensure that my_service runs in the foreground (some programs automatically daemonize by default, although these will often have some sort of --run-in-foreground flag).
If you upgrade to CentOS 7 or something else with systemd, this all becomes easier.
You can also investigate "third-party" process supervisors like "supervisord" or "runit" that you could use for process monitoring/restarting on CentOS 6.
Update
As mangotang points out, and I forgot, RHEL 6 actually shipped with upstart, even though it used almost exclusively SysV-style init scripts. So a better solution would be to create an upstart service instead. There are some reasonable getting-started docs here.
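A minimal Upstart job for this case might look like the sketch below (assuming my_service can run in the foreground); save it as /etc/init/my_service.conf:
description "my service"
# same runlevels the chkconfig setup used
start on runlevel [235]
stop on runlevel [!235]
# restart the process if it dies
respawn
exec /home/me/bin/my_service
You would then remove the SysV links (chkconfig --del my_service) so the two mechanisms don't fight over the same process, and control it with initctl start/stop my_service.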
On RHEL 6.X, at the top of the /etc/inittab file it says:
# inittab is only used by upstart for the default runlevel.
#
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM
RHEL 6.X uses Upstart instead of the System V init system. See the initctl(8) and init(5) man pages, or ask Google about Upstart.
I am new to CentOS 7. I want to install Apache (httpd), so I ran yum install httpd and it says it is already installed, but I could not find any files in the /etc directory.
There is an httpd folder in /etc, but there are no files directly in that directory.
In CentOS there are no files directly in /etc/httpd, since they are all in subdirectories.
/etc/httpd/conf/httpd.conf is the main configuration file, and there are additional configuration files in the /etc/httpd/conf.d and /etc/httpd/conf.modules.d directories, which are included by the main file.
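For reference, the stock CentOS 7 httpd.conf pulls those directories in with lines roughly like these (paths are relative to the ServerRoot, /etc/httpd):
Include conf.modules.d/*.conf
IncludeOptional conf.d/*.conf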
To see in a browser that Apache is installed, first start Apache using the following command, then open http://localhost in a browser:
sudo systemctl start httpd
You can also check Apache's status from the command line with:
sudo systemctl status httpd
I'm not completely sure what that outputs in all cases, but it seems to print at least "Loaded: loaded" on the second line when Apache has been installed.
The Unix which command shows the full path of a command.
To check whether Apache is installed, just check from the console whether any of the possible Apache commands exists:
> which apache || which httpd || which apache2
If there is no answer, Apache is not available...
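On an RPM-based system such as CentOS you can also ask the package manager directly, which works even if the binary is not on your PATH:
rpm -q httpd
If the package is installed this prints its full name and version; otherwise it reports that the package is not installed.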
I wonder if you can help.
I am running the following versions:
OS: SMP Debian 3.2.81-1 x86_64
uWSGI: uWSGI 2.0.11.2
I installed uWSGI manually, as I want to use a specific version, using the following commands:
apt-get install build-essential psmisc python-dev libxml2 libxml2-dev python-setuptools
cd /opt/
wget http://projects.unbit.it/downloads/uwsgi-2.0.11.2.tar.gz
tar -zxvf uwsgi-2.0.11.2.tar.gz
mv uwsgi-2.0.11.2/ uwsgi/
cd uwsgi/
python setup.py install
I am trying to replicate the setup on another server that the project is already working on in a live environment (I am essentially setting up a test server environment).
The original server has uWSGI running on boot. To figure out how this is happening, I used
htop
I've been able to identify that uWSGI is running on the existing server with a set of command line switches. I've managed to track down the script that initialises uWSGI with these switches in the init.d folder.
I copied this script to my test server, and ran it using
service script.sh start
After various troubleshooting, mainly involving permissions on socket folders and the like, the script now starts when I run it, and if I run htop I can see uWSGI running with the exact same command switches I need.
I thought simply putting the script in init.d and giving it execute permission
chmod +x script.sh
would be enough for it to start when the server is switched on... but this appears not to be the case, because when I issue
reboot
at the terminal, the server reboots, but when I go into htop and check for the uWSGI process, it is not running.
If however directly after reboot I issue the following command
service script.sh start
The service starts just fine, and I can once again see it in htop.
Research online led me to the suggestion that I should try to set the script to run automatically using chkconfig. I installed chkconfig using
apt-get install chkconfig
and then ran the following command
chkconfig --list
I noticed that all the runlevels were set to off for the script I am trying to get to execute on boot.
I ran the following command
chkconfig /etc/init.d/script.sh on
And now when I check the script's runlevels with chkconfig, it shows me the following output for my script:
script.sh 0:off 1:off 2:on 3:on 4:on 5:on 6:off
However when I reboot the uWSGI process is still not starting.
Yet if I simply type
service script.sh start
At the terminal the service runs ok, and uWSGI runs fine.
How can I set the script to run when the server restarts?
Edit:
Further research on the live server that is working fine has determined that it does not appear to be using systemd to launch uWSGI on startup. I logged into the live server, and while there is a
/etc/systemd
folder, it has just one folder in it, system, and no files. The system folder has the following in it:
multi-user.target.wants sockets.target.wants syslog.service
So there does not appear to be anything uWSGI related in here.
Also what is making me think this is likely something to do with the
/etc/init.d
folder, is that when I run htop and examine the running services (or daemons; I'm not quite sure of the correct terminology in Linux), uWSGI shows up as running with a signature of command-line switches, and the script I found in /etc/init.d has this exact uWSGI command and the same signature of switches. So I'm fairly convinced this is the part of the system that is starting the uWSGI daemon; I just can't figure out what I need to do to get it to run, apart from copying the same file to /etc/init.d on the new server and giving it execute permission.
The OS of the live server is :
SMP Debian 3.2.73-2+deb7u1 x86_64
and the OS I am running on the new server is
SMP Debian 3.2.81-1 x86_64
So they seem fairly similar? Although I'm not sure how significant the 8 increments in the least significant digits of the version numbers are.
On the new server there is no /etc/systemd folder, and on the live server there is a /etc/systemd as explained above. So systemd does appear to have been installed separately from the main OS install (as I have a later version of Debian and it wasn't installed on my system by default). Perhaps there is something related to systemd that is causing the script to start on the live server, but I'm not too sure.
Jessie
In recent Debian (Jessie) the SysV init scripts no longer work the way they used to: current Debian uses systemd, and the scripts in /etc/init.d are run by compatibility features of systemd (the service command is now a systemd wrapper that tries to behave like the old SysV command). Note, though, that your 3.2 kernel suggests you are actually running Wheezy, which is covered in the second section below.
You have two options:
Add a line calling the script from /etc/rc.local:
/etc/init.d/script.sh start
This is a rather dirty fix, since it depends on another compatibility feature of systemd. Also, with this approach the location of the script no longer matters.
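For reference, a stock Debian /etc/rc.local looks roughly like the sketch below; the call to your script goes right before the final exit 0:
#!/bin/sh -e
# rc.local is executed once at the end of each multiuser runlevel
/etc/init.d/script.sh start
exit 0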
Write a full systemd service for uwsgi (this is what I do, and what the uwsgi documentation recommends). You would need to create a file called /etc/systemd/system/uwsgi.service with content similar to:
[Unit]
Description=uwsgi emperor
After=rsyslog.service
[Service]
PIDFile=/run/uwsgi-emperor.pid
ExecStart=/bin/uwsgi --ini /etc/uwsgi/emperor.ini
ExecReload=/bin/uwsgi --reload /run/uwsgi-emperor.pid
ExecStop=/bin/uwsgi --stop /run/uwsgi-emperor.pid
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
I use the emperor mode (which is also the mode recommended by uwsgi for use with systemd), although it is possible to hack it to run a single process uwsgi (see further reading below).
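For completeness, a minimal /etc/uwsgi/emperor.ini to pair with the unit above might look like this; the vassals directory is an assumption, point it at wherever your per-application ini files live:
[uwsgi]
# the emperor watches this directory and spawns one vassal per ini file
emperor = /etc/uwsgi/vassals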
You will also need to enable the service so that it is pulled in by multi-user.target, which runs at boot. Perform this as root:
systemctl enable uwsgi.service
And uwsgi will start at the next boot (it will not start straight away; to start it now, run systemctl start uwsgi.service).
Further reading:
The Arch Linux wiki about systemd is very thorough
The Debian wiki on systemd is good, but outdated in some places (notably, it tells you that you need to install it, which is not the case in Jessie)
Wheezy
You're mixing things up a little there: chkconfig is a tool from the Red Hat family of OSes. Making it work on Debian was not easy in the past, and I do not believe it is easy to do now.
Wheezy still uses the SysV init rc.d folders alright, one folder per runlevel:
/etc/rc0.d/
/etc/rc1.d/
/etc/rc2.d/
/etc/rc3.d/
/etc/rc4.d/
/etc/rc5.d/
/etc/rc6.d/
You can check the runlevel you are in with the (appropriately named) runlevel command. Then check whether there is a symlink to the script in the matching /etc/rc*.d folder. If there is none, add it with something along the lines of:
ln -s /etc/init.d/script.sh /etc/rc$(runlevel | cut -d ' ' -f 2).d/S99script.sh
And that is almost all there is to how SysV init scripts work. If the machine boots into runlevel 2 (I believe that's the default on Debian), init simply runs every S* link in /etc/rc2.d with the start argument (the two-digit number controls the start order).
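On Debian you normally do not create those links by hand; the native helper does it for you (run as root):
# create start/stop links in the default runlevels for a script in /etc/init.d
update-rc.d script.sh defaults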
Problem:
I have CentOS 7 Linux VM with cifs installed. I added a mount point using autofs where the whole idea was to automatically mount a network share every time VM boots. However, when I run this command:
ls /mnt/vmshare/trinity
I get
ls: cannot access /mnt/vmshare/trinity: No such file or directory
Workaround:
What I'm having to do is run this command after each reboot
/sbin/service autofs start
Then I can see the files in trinity.
Maybe autofs does not even start by default on reboot. How do I make sure autofs starts on reboot? Or, in general, how do I solve my problem above?
Thanks so much!
For CentOS 7:
systemctl enable autofs
check with:
systemctl is-enabled autofs
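Note that enable on its own only takes effect at the next boot; to start the service immediately as well:
sudo systemctl start autofs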
I don't recall exactly what I used to figure this one out, but maybe it will get you on the right track.
I did some research on this a few years ago, and I believe the term you're looking for is "persistent mount."
You'll need to create or find the local mount point for your network directory. For instance, mine is /media/disco/disknamehere.../.../Share
This must be added to /etc/fstab with the correct options in place. Sorry I couldn't be more help.
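Since the share in question is CIFS, the fstab entry might look something like the line below; the server name, share name, and credentials file are hypothetical placeholders:
//fileserver/trinity /mnt/vmshare/trinity cifs credentials=/etc/cifs-creds,_netdev 0 0
The _netdev option tells the system to wait for the network before attempting the mount.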
Try to run
chkconfig autofs on
that will enable autofs service to start on boot.
I am running CentOS 6.5 (kernel Linux jspring 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux) and I was wondering if someone could assist. I'm installing Fail2Ban through yum; however, when I go to run it I am getting the following error:
service fail2ban start
Starting fail2ban: ERROR Directory /var/run/fail2ban exists but not accessible for writing
[FAILED]
If anyone could advise me how to fix this that would be great.
Thanks!
There are several things that could be causing this.
First make sure the permissions and ownership are correct, as the other answers state. The directory permissions should be drwxr-xr-x (a.k.a. 755) and it should be owned by root:root.
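Assuming the directory already exists, something like this should put it right:
sudo chown root:root /var/run/fail2ban
sudo chmod 755 /var/run/fail2ban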
Now make sure you are attempting to run the start command with the proper authority. If service fail2ban start does not work, try sudo service fail2ban start. Using sudo is recommended, but you could also switch to or log in as the root user.
You may also want to reboot after you get it running and then run sudo service fail2ban status to make sure it successfully started up again.
You need to set the appropriate rights on the mentioned directory:
drwxr-xr-x root:root
You can set the permissions like this: chmod 755 /var/run/fail2ban/
As people have mentioned, this is clearly a permissions issue. I'm not sure whether this applies to your version, but fail2ban in 2018 has a client, run as:
sudo fail2ban-client start
(or restart or status). It must be run with sudo, though.
As documented in the official command list, fail2ban-client start <jail> is used to start jails, not fail2ban itself, so you have completely misunderstood its usage.
Try first stopping and then starting again the sshd jail, which is enabled by default:
fail2ban-client stop sshd
fail2ban-client start sshd
Hey! It works!