I'm running Manjaro 18.1.0 and using the interception-caps2esc 0.1.3-2 plugin from the AUR. My /etc/udevmon.yaml and /etc/systemd/system/udevmon.service are set up as described in the answer here. This had been working fine for months, but has now suddenly stopped working. I tried rebooting.
I'm quite stumped as to what's caused the difference. I notice, however, that running systemctl status udevmon.service returns
● udevmon.service - udevmon
Loaded: loaded (/etc/systemd/system/udevmon.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Sun 2019-10-13 16:34:06 CEST; 20min ago
Main PID: 7749 (code=dumped, signal=SEGV)
Oct 13 16:34:06 my-pc systemd[1]: Started udevmon.
Oct 13 16:34:06 my-pc systemd[1]: udevmon.service: Main process exited, code=dumped, status=11/SEGV
Oct 13 16:34:06 my-pc systemd[1]: udevmon.service: Failed with result 'core-dump'.
which I suppose is relevant. systemctl reset-failed does not help, and my understanding of systemd and of the inner workings of caps2esc is too limited to identify sensible next steps for solving or further troubleshooting this problem.
My question: What steps can I take to resolve or further troubleshoot this issue?
I experienced the same issue and was able to resolve it by reinstalling interception-tools and interception-caps2esc (I'm using yay):
yay -S interception-tools
yay -S interception-caps2esc
And then restarting the udevmon service:
sudo systemctl restart udevmon.service
I can't explain why it broke or why this fixes it, but it did for me.
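If you want to dig into the crash itself before reinstalling, and assuming systemd-coredump is enabled on your machine, the dump referenced by the Main PID in the status output (7749 above) can be inspected with:
coredumpctl list udevmon
coredumpctl info 7749
A backtrace landing inside a shared library would suggest the AUR binary just needed a rebuild against updated system libraries, which would be consistent with the reinstall fixing it.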
Related
I am fairly new to Docker and Linux, and I am having trouble installing and running Docker. My system configuration is:
Operating System: Debian GNU/Linux 9 (stretch)
Kernel: Linux 5.13.9.rsk.1-amd64
Architecture: x86-64
I followed the official instructions. When I run sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin, I get the following error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package docker-compose-plugin
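Docker may simply not publish docker-compose-plugin for Debian 9 "stretch"; apt-cache policy will show whether any configured repository provides it at all:
apt-cache policy docker-compose-plugin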
I also tried sudo systemctl restart docker, which output:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
Then I tried sudo systemctl status docker:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-06-10 23:04:17 CST; 1min 41s ago
Docs: https://docs.docker.com
Main PID: 33878 (code=exited, status=1/FAILURE)
CPU: 111ms
Jun 10 23:04:15 6969-69-696 systemd[1]: docker.service: Unit entered failed state.
Jun 10 23:04:15 6969-69-696 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 10 23:04:17 6969-69-696 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jun 10 23:04:17 6969-69-696 systemd[1]: Stopped Docker Application Container Engine.
Jun 10 23:04:17 6969-69-696 systemd[1]: docker.service: Start request repeated too quickly.
Jun 10 23:04:17 6969-69-696 systemd[1]: Failed to start Docker Application Container Engine.
Jun 10 23:04:17 6969-69-696 systemd[1]: docker.service: Unit entered failed state.
Jun 10 23:04:17 6969-69-696 systemd[1]: docker.service: Failed with result 'exit-code'.
journalctl -xe produced a very large output that does not seem related to Docker (most of it is Failed to get GPU/HCU type or it is not a GPU/HCU host, which would make sense since it is a remote host machine).
I have looked at other similar questions on Stack Overflow and those fixes have not worked for me so far. Please let me know if I should be doing something differently or how I can fix this. I really need to get Docker running for my work. Any help will be appreciated.
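Since journalctl -xe is noisy here, two standard ways to surface the actual dockerd error are to read just the docker unit's slice of the journal and to run the daemon in the foreground with debug output:
sudo journalctl -u docker.service --no-pager -n 50
sudo dockerd --debug
Running dockerd directly usually prints the concrete reason it exits with status 1/FAILURE.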
So after installing MongoDB on my Ubuntu machine, I tried to run "mongo", but it said:
MongoDB shell version v4.4.1
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
So I enabled the mongod service and started it, then ran the command:
sudo systemctl status mongod
And it said:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-17 00:23:08 +06; 8min ago
Docs: https://docs.mongodb.org/manual
Process: 45414 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, sta>
Main PID: 45414 (code=exited, status=1/FAILURE)
Sep 17 00:23:08 john systemd[1]: Started MongoDB Database Server.
Sep 17 00:23:08 john mongod[45414]: about to fork child process, waiting until server is>
Sep 17 00:23:08 john mongod[45427]: forked process: 45428
Sep 17 00:23:08 john mongod[45414]: ERROR: child process failed, exited with error numbe>
Sep 17 00:23:08 john mongod[45414]: To see additional information in this output, start >
Sep 17 00:23:08 john systemd[1]: mongod.service: Main process exited, code=exited, statu>
Sep 17 00:23:08 john systemd[1]: mongod.service: Failed with result 'exit-code'.
And I can't run the mongodb shell. What should I do?
I came across this issue yesterday and I was able to resolve it by:
removing the mongod.lock file.
running the config fork command.
Remove the .lock file:
sudo rm /usr/local/var/mongodb/mongod.lock
Run:
mongod --config /usr/local/etc/mongod.conf --fork
and then use the mongo command again.
mongod needs to be running before you can run mongo without that error.
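A quick way to confirm mongod is actually up before starting a shell:
pgrep -l mongod
mongo --eval 'db.runCommand({ ping: 1 })'
If the ping returns { "ok" : 1 }, the mongo shell will connect normally.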
P.S. Here is the answer for others who stumble upon the original question from the title.
I also got that same error. I think this may have happened because of some update on my PC (like a .NET Framework update or similar).
I uninstalled and reinstalled MongoDB and it's working again.
You have to go to /etc and modify mongod.conf, because:
"By default, MongoDB launches with bindIp set to 127.0.0.1", which binds to the localhost network interface. This means that mongod can only accept connections from clients that are running on the same machine.
You can sudo nano mongod.conf and change 127.0.0.1 to 0.0.0.0.
Then you must restart mongod.
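For reference, the relevant part of mongod.conf is the net section; a minimal sketch of the change looks like this (note that 0.0.0.0 accepts connections from every interface, so only do this on a trusted network or with access control enabled):
net:
  port: 27017
  bindIp: 0.0.0.0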
Create a folder data in the root of the C: drive.
Create another folder db inside the data folder.
Now run mongod in cmd from the bin path:
C:\Program Files\MongoDB\Server\5.0\bin> mongod
Don't close this command prompt.
Open another cmd in the same path:
C:\Program Files\MongoDB\Server\5.0\bin> mongo
Run the mongo command there.
Now it will connect.
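This works because mongod uses \data\db on the current drive as its default dbpath. If you want the data files somewhere else, point mongod at the folder explicitly (the path below is just an example):
C:\Program Files\MongoDB\Server\5.0\bin> mongod --dbpath "D:\mongo\data"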
I am new to Puppet; it was working just fine until a few days ago. I came back to start the server today, running
sudo /etc/init.d/puppetserver start
I then got
Starting puppetserver (via systemctl): puppetserver.serviceJob for puppetserver.service failed. See 'systemctl status puppetserver.service' and 'journalctl -xn' for details.
and journalctl -xn throws
No journal files were found.
failed!
When I check the logs in /var/log/puppetlabs/puppetserver, I get no specific information. I also noticed that the puppet command used to be located in /opt/puppetlabs/puppet/bin, but I can't find the bin folder anymore.
Does anyone have an idea?
EDIT:
Here's the output of systemctl status puppetserver.service
puppetserver.service - LSB: puppetserver
Loaded: loaded (/etc/init.d/puppetserver)
Active: failed (Result: exit-code) since Fri 2017-11-10 10:20:13 UTC; 3h 54min ago
Process: 5490 ExecStart=/etc/init.d/puppetserver start (code=exited, status=2)
The [Service] part of my app.service file is the following:
[Service]
Type=forking
Restart=no
IgnoreSIGPIPE=no
GuessMainPID=no
ExecStart=/opt/app/appl_init.d start
ExecStop=/opt/app/appl_init.d stop
TimeoutSec=infinity
After that, I installed the app, and the file was correctly copied to /usr/lib/systemd/system/app.service.
I have run systemctl daemon-reload, but it seems to have no effect on the startup timeout! It fails as soon as I run systemctl start app or systemctl reload app.service, with the following error:
Job for app.service failed because a fatal signal was delivered to the control process. See "systemctl status app.service" and "journalctl -xe" for details
Output of systemctl status app is:
● app.service - ApplicationTest
Loaded: loaded (/opt/app/appl_init.d; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Tue 2017-03-21 01:55:22 EDT; 1min 4s ago
Docs: man:app(8)
Process: 4126 ExecStart=/opt/app/appl_init.d start (code=killed, signal=KILL)
Mar 21 01:55:22 centosvm systemd[1]: Starting ApplicationTest...
Mar 21 01:55:22 centosvm systemd[1]: app.service start operation timed out. Terminating.
Mar 21 01:55:22 centosvm systemd[1]: app.service stop-final-sigterm timed out. Killing.
Mar 21 01:55:22 centosvm systemd[1]: app.service: control process exited, code=killed status=9
Mar 21 01:55:22 centosvm systemd[1]: Failed to start ApplicationTest.
Mar 21 01:55:22 centosvm systemd[1]: Unit app.service entered failed state.
Mar 21 01:55:22 centosvm systemd[1]: app.service failed.
Another odd thing I noticed: when I run systemctl show app.service -p TimeoutSec, I don't get any result; it's blank.
I have tried doing a systemctl reboot, but still, no dice.
Of course, when I change the value to anything else, like TimeoutSec=5min, it works perfectly fine. But I really need the timeout for this application to be infinity.
Where am I going wrong?
TimeoutSec=0 fixed the problem.
Apparently, if you are using a version of systemd older than 229, you will need to use 0 instead of infinity to disable the timeout.
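For reference, this is the [Service] section from the question with only the timeout line changed:
[Service]
Type=forking
Restart=no
IgnoreSIGPIPE=no
GuessMainPID=no
ExecStart=/opt/app/appl_init.d start
ExecStop=/opt/app/appl_init.d stop
TimeoutSec=0
On systemd 229 and newer, TimeoutSec=infinity is also accepted; systemctl --version shows which release you have.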
I'm using Vagrant to deploy to Ubuntu Linux and am trying to start a tomcat8 service.
Tomcat 8 was installed by apt-get install tomcat8.
When using the service tomcat8 start command, I got the following error:
Job for tomcat8.service failed. See "systemctl status tomcat8.service" and "journalctl -xe" for details.
Then I checked systemctl status tomcat8.service and found:
● tomcat8.service - LSB: Start Tomcat.
Loaded: loaded (/etc/init.d/tomcat8)
Active: failed (Result: exit-code) since Mon 2016-03-28 09:44:17 GMT; 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 884 ExecStop=/etc/init.d/tomcat8 stop (code=exited, status=0/SUCCESS)
Process: 1312 ExecStart=/etc/init.d/tomcat8 start (code=exited, status=1/FAILURE)
Mar 28 09:44:12 vagrant-ubuntu-trusty systemd[1]: Starting LSB: Start Tomcat....
Mar 28 09:44:12 vagrant-ubuntu-trusty tomcat8[1312]: * Starting Tomcat servlet engine tomcat8
Mar 28 09:44:17 vagrant-ubuntu-trusty tomcat8[1312]: ...fail!
Mar 28 09:44:17 vagrant-ubuntu-trusty systemd[1]: tomcat8.service: control process exited, code=exited status=1
Mar 28 09:44:17 vagrant-ubuntu-trusty systemd[1]: Failed to start LSB: Start Tomcat..
Mar 28 09:44:17 vagrant-ubuntu-trusty systemd[1]: Unit tomcat8.service entered failed state.
Mar 28 09:44:17 vagrant-ubuntu-trusty systemd[1]: tomcat8.service failed.
I'm unsure of how to proceed to get my Tomcat 8 service running.
This issue can occur when the tomcat8 server runs as the tomcat8 user but catalina.out was created by root.
To solve this, delete catalina.out and let tomcat8 recreate it with the right ownership.
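Assuming the Debian/Ubuntu package's default log location, that amounts to:
sudo rm /var/log/tomcat8/catalina.out
sudo systemctl restart tomcat8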
This could be related to this bug. Recent versions of Java deprecate the use of endorsed directories and fail if one is specified, but Tomcat8 specifies one even if it doesn't exist. Check the log in /var/log/tomcat8/ as suggested in the comments to your question to see whether this is indeed the source of your problem. If it is, you can either wait for the bug to be fixed or try the updated catalina.sh file suggested in the linked bug report.
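For example, with the Debian default log location:
sudo tail -n 50 /var/log/tomcat8/catalina.out
An error complaining about java.endorsed.dirs would confirm this as the cause.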
What I did to solve the issue:
Process: 1312 ExecStart=/etc/init.d/tomcat8 start (code=exited, status=1/FAILURE)
Check Tomcat's dependencies:
dpkg -s tomcat8-common | grep Depends
and the system Java version:
java -version
Try to sort things out with the appropriate Java version if they don't match.
If that's not the case, continue:
It's never a bad idea to start with:
sudo apt-get update
Check for any running Tomcat processes:
ps aux | grep java
Check the PID(s) you're about to kill:
pgrep -f tomcat
Then take targeted action:
sudo pkill -f tomcat
Start removing by typing sudo apt-get remove tomcat8- and pressing Tab to list the related packages.
You might find:
tomcat8-common tomcat8-user
Complete the removal with (I don't know which of the commands below is the most appropriate to run):
sudo apt-get purge tomcat8
or
sudo apt-get --auto-remove purge tomcat8
or just
sudo apt-get remove tomcat8
You can also
sudo apt-get autoremove
Carefully sudo rm -r leftover folders such as:
/var/lib/tomcat*
/usr/share/tomcat*
/etc/tomcat*
Reboot
sudo systemctl reboot
When back on track, install:
sudo apt-get install tomcat8
Check how it's going:
sudo systemctl status tomcat8.service
sudo /usr/share/tomcat8/bin/version.sh
Better?
Verify your Tomcat 8 configuration file in /etc/default/tomcat8 and check for badly configured variables.
For me, this error was caused by the following line in my configuration file:
JAVA_OPTS="-Djava.awt.headless=true -Xss4m -Xmx2g -XX:+UseConcMarkSweepGC"
I commented it out and it worked, likely because -XX:+UseConcMarkSweepGC was removed in recent JDKs, and a JVM refuses to start when given an unrecognized -XX option.
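In other words, the file ended up looking something like this (the replacement line is just one reasonable choice, keeping the memory settings and dropping the removed GC flag):
# JAVA_OPTS="-Djava.awt.headless=true -Xss4m -Xmx2g -XX:+UseConcMarkSweepGC"
JAVA_OPTS="-Djava.awt.headless=true -Xss4m -Xmx2g"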