We are trying to start the X server as a non-root user with the startx command; once startx has started the X server, we start gnome-session on the same display. But when we connect to that display using x11vnc, the desktop shows up completely blank.
With the same procedure, other desktop environments such as MATE or KDE come up fine.
Setup details:
OS: CentOS 7.9
GPU: Quadro 4000
GNOME version: gnome-desktop3-3.28.2-2.el7.x86_64
Below are the steps to reproduce the issue:
1) Update /etc/pam.d/xserver to allow a non-root user to start the X server:
1.1) comment out the line "auth required pam_console.so"
1.2) add the line "auth required pam_permit.so"
2) Switch to the non-root user.
3) Run the startx command in the background.
4) From the process list, find the display the X server started on, e.g.: ps aux | grep "bin/X" | grep <userid>
5) export DISPLAY=<DISPLAY_ID>
6) Run the "gnome-session" command in the background.
7) Run "x11vnc" and connect through a VNC client. For me the desktop comes up blank.
But if in step 6 we start MATE or KDE instead, the desktop comes up fine.
Any idea why this is happening?
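For reference, here is the whole sequence condensed into a rough shell sketch. The display number :1 is an assumption; adjust it and the <userid> placeholder to your setup:

# steps 1-2: /etc/pam.d/xserver edited as above, and we are logged in as the non-root user
startx &                               # step 3: start the X server in the background
ps aux | grep "bin/X" | grep <userid>  # step 4: note the display the server came up on, e.g. :1
export DISPLAY=:1                      # step 5: assuming the server is on display :1
gnome-session &                        # step 6: start the GNOME session on that display
x11vnc -display :1 &                   # step 7: expose the display over VNC and connect with a client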
A snippet of the gnome-session debug logs:
gnome-session-binary[45081]: DEBUG(+):
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of QT_IM_MODULE=ibus environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of XMODIFIERS=#im=ibus environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of XDG_MENU_PREFIX=gnome- environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: WARNING: Could not get session id for session. Check that logind is properly installed and pam_systemd is getting used at login.
gnome-session-binary[45081]: DEBUG(+): Using systemd for session tracking
gnome-session-binary[45081]: DEBUG(+): GsmManager: setting client store 0x8c0310
generating cookie with syscall
generating cookie with syscall
generating cookie with syscall
generating cookie with syscall
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of SESSION_MANAGER=local/unix:#/tmp/.ICE-unix/45081,unix/unix:/tmp/.ICE-unix/45081 environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): GsmXsmpServer: SESSION_MANAGER=local/unix:#/tmp/.ICE-unix/45081,unix/unix:/tmp/.ICE-unix/45081
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of GNOME_KEYRING_CONTROL=/users/rangesg/.cache/keyring-28YFK1 environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of SSH_AUTH_SOCK=/users/rangesg/.cache/keyring-28YFK1/ssh environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
Related
I deployed my app to AWS EC2, so I need a way of always keeping this Node app running, no matter what: if the server restarts or the app crashes, the app always needs to be restarted.
The app needs to be running at all times. Is running npm start & enough to restart my app?
I've tried to use systemd, but I got an error while starting the service I created:
sudo systemctl start nameX.service
Job for nameX.service failed because of unavailable resources or another system error.
See "systemctl status ...service" and "journalctl -xeu ....service" for details.
sudo systemctl status nameX.service
nameX.service: Scheduled restart job, restart >
systemd[1]: Stopped My Node Server.
Failed to load environment file>
systemd[1]: ...service: Failed to run 'start' task: No >
systemd[1]: ...service: Failed with result 'resources'.
systemd[1]: Failed to start My Node Server.
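For what it's worth, a minimal unit for a Node app usually looks something like the sketch below. The user and paths here are assumptions rather than values from the question; Restart=always covers crashes, and enabling the unit covers reboots:

[Unit]
Description=My Node Server
After=network.target

[Service]
Type=simple
# assumed deploy user and app directory; adjust to your instance
User=ec2-user
WorkingDirectory=/var/app/current
# absolute path required; point this at npm (or at node plus your entry script)
ExecStart=/usr/bin/npm start
# restart whenever the process dies
Restart=always
RestartSec=5
# a leading "-" makes the environment file optional if it may be missing
# EnvironmentFile=-/etc/mynode/env

[Install]
WantedBy=multi-user.target

After saving it as /etc/systemd/system/nameX.service, run systemctl daemon-reload, then systemctl enable nameX.service and systemctl start nameX.service so it runs now and again at every boot.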
Rails: 6.0.3
Sidekiq: 6.1.2
Ruby: 2.7.2
Running on AWS Amazon Linux 2
I'm running a fairly simple Sidekiq configuration in production, using the boilerplate systemd/sidekiq.service file from the examples directory in the Sidekiq repo.
I noticed that my workers cannot run long jobs because they are killed every minute or so. I was able to track down what's happening: systemd is restarting Sidekiq even though it starts successfully. It appears that systemd never receives the message that the service started successfully, so it kills the process.
Here are the logs:
sidekiq: 2021-06-01T23:30:56.510Z pid=24939 tid=gir INFO: Shutting down
sidekiq: 2021-06-01T23:30:56.511Z pid=24939 tid=4jxb INFO: Scheduler exiting...
systemd: Failed to start sidekiq.
systemd: Unit sidekiq.service entered failed state.
systemd: sidekiq.service failed.
sidekiq: 2021-06-01T23:30:56.513Z pid=24939 tid=gir INFO: Terminating quiet workers
sidekiq: 2021-06-01T23:30:56.513Z pid=24939 tid=4jvn INFO: Scheduler exiting...
sidekiq: 2021-06-01T23:30:57.015Z pid=24939 tid=gir INFO: Pausing to allow workers to finish...
sidekiq: 2021-06-01T23:30:57.516Z pid=24939 tid=gir INFO: Bye!
systemd: sidekiq.service holdoff time over, scheduling restart.
systemd: Starting sidekiq...
sidekiq: 2021-06-01T23:30:58.991Z pid=32046 tid=fs6 INFO: Enabling systemd notification integration
sidekiq: 2021-06-01T23:31:04.475Z pid=32046 tid=fs6 INFO: Booting Sidekiq 6.1.2 with redis options {:url=>"redis://******"}
sidekiq: 2021-06-01T23:31:08.869Z pid=32046 tid=fs6 INFO: Running in ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
sidekiq: 2021-06-01T23:31:08.870Z pid=32046 tid=fs6 INFO: See LICENSE and the LGPL-3.0 for licensing details.
systemd: sidekiq.service: Got notification message from PID 32046, but reception only permitted for main PID 31981
Following these messages, the Sidekiq worker successfully performs jobs from the queue for about a minute before it is restarted again. This cycle continues forever.
I've tried modifying the sidekiq.service file a number of different ways, but nothing seems to do the trick. In particular, this line from the logs seems to indicate that the notification that Sidekiq started up correctly is not reaching the right process ID: systemd: sidekiq.service: Got notification message from PID 32046, but reception only permitted for main PID 31981
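One way to see the mismatch directly is to compare the PID systemd tracks as the main process with the PID in the Sidekiq log lines, for example:

systemctl show sidekiq --property=MainPID   # the PID systemd treats as the main process
systemctl status sidekiq                    # shows "Main PID:" plus the process tree it is supervising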
Any ideas on how I can ensure that systemd accurately knows when a job succeeds/fails to start?
Here is my current systemd/sidekiq.service file:
#
# This file tells systemd how to run Sidekiq as a 24/7 long-running daemon.
#
# Customize this file based on your bundler location, app directory, etc.
# Customize and copy this into /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Then run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# Use `journalctl -u sidekiq -rn 100` to view the last 100 lines of log output.
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
# See these pages for lots of options:
#
# https://www.freedesktop.org/software/systemd/man/systemd.service.html
# https://www.freedesktop.org/software/systemd/man/systemd.exec.html
#
# THOSE PAGES ARE CRITICAL FOR ANY LINUX DEVOPS WORK; read them multiple
# times! systemd is a critical tool for all developers to know and understand.
#
[Service]
#
# !!!! !!!! !!!!
#
# As of v6.0.6, Sidekiq automatically supports systemd's `Type=notify` and watchdog service
# monitoring. If you are using an earlier version of Sidekiq, change this to `Type=simple`
# and remove the `WatchdogSec` line.
#
# !!!! !!!! !!!!
#
Type=simple
# If your Sidekiq process locks up, systemd's watchdog will restart it within seconds.
#WatchdogSec=10
EnvironmentFile=/opt/elasticbeanstalk/deployment/custom_env_var
WorkingDirectory=/var/app/current
# If you use rbenv:
# ExecStart=/bin/bash -lc 'exec /home/deploy/.rbenv/shims/bundle exec sidekiq -e production'
# If you use the system's ruby:
# ExecStart=/usr/local/bin/bundle exec sidekiq -e production
# If you use rvm in production without gemset and your ruby version is 2.6.5
# ExecStart=/home/deploy/.rvm/gems/ruby-2.6.5/wrappers/bundle exec sidekiq -e production
# If you use rvm in production with gemset and your ruby version is 2.6.5
ExecStart=/bin/bash -lc 'cd /var/app/current; bundle exec sidekiq -e production -r /var/app/current -C /var/app/current/config/sidekiq.yml'
# Use `systemctl kill -s TSTP sidekiq` to quiet the Sidekiq process
# !!! Change this to your deploy user account !!!
User=root
Group=root
UMask=0002
# Greatly reduce Ruby memory fragmentation and heap usage
# https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
Environment=MALLOC_ARENA_MAX=2
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog (Ubuntu) or /var/log/messages (CentOS)
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
Change ExecStart to:
ExecStart=/direct/path/to/bundle exec sidekiq -e production
Everything else in that line appears superfluous.
Maybe this will work in your case:
Type=notify
NotifyAccess=all (or NotifyAccess=exec)
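Putting both suggestions together, the relevant [Service] lines might look roughly like this. The /direct/path/to/bundle placeholder is kept from the answer above (substitute the absolute path to your bundle binary), and Type=notify assumes Sidekiq >= 6.0.6, as the boilerplate comments note:

[Service]
Type=notify
# accept the readiness notification even if it does not come from the tracked main PID
NotifyAccess=all
WorkingDirectory=/var/app/current
EnvironmentFile=/opt/elasticbeanstalk/deployment/custom_env_var
# no bash -lc wrapper, so the Sidekiq process itself becomes the main PID systemd tracks
ExecStart=/direct/path/to/bundle exec sidekiq -e production

Invoking bundle directly instead of through /bin/bash -lc also means the main PID systemd records is the Sidekiq process itself, which is presumably what the "reception only permitted for main PID" message is complaining about.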
Jetty on our Linux server is not installed as a service, since we have multiple Jetty servers on different ports. We use the commands ./jetty.sh stop and ./jetty.sh start to stop and start Jetty.
However, when I add sudo to the command, the server never stops or starts successfully. When I run sudo ./jetty.sh stop, it shows:
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
and the server is not stopped.
When I run sudo ./jetty.sh start, it shows:
Starting Jetty: FAILED Tue Apr 23 23:07:15 CST 2019
How could this happen? From my understanding, sudo gives you more power and privileges to run commands. If a command executes successfully without sudo, it should never fail with sudo, since sudo only grants additional privileges.
When run as a regular user, jetty.sh uses paths under $HOME.
When run as root, it uses system paths (such as /var/run/jetty.pid, as in your output).
The error you got ...
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
... means that there was a bad pid file sitting around for a process that no longer exists.
Short answer: the processing is different if you are root (treated as a service) vs. a regular user (just an application).
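As a quick check before cleaning up, you can confirm that the pid file really is stale (the path comes from the error output above):

sudo cat /var/run/jetty.pid              # PID recorded by the previous root-started run
ps -p "$(sudo cat /var/run/jetty.pid)"   # no matching row means that process no longer exists
sudo rm /var/run/jetty.pid               # remove the stale pid file so jetty.sh stops tripping over it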
I am trying to start the dockerd daemon with this command: dockerd &
Then I start getting the error below:
ERRO[0036] libcontainerd: failed to receive event from containerd: rpc error: code = 12 desc = unknown service types.API
This keeps repeating again and again, and I am unable to start any container after that. If I close the session and open a new one, docker ps is accessible, but I am still unable to start any container. When starting a container I get this error:
docker run hello-world
docker: Error response from daemon: unknown service types.API. ERRO[0000] error waiting for container: context canceled
Please let me know if any logs are needed.
Why do you start the docker daemon using dockerd & and not systemctl start docker.service? This is probably the cause of your problem.
In order to start the daemon at boot, you need to run systemctl enable docker.service. See Getting Started with Containers.
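In practice that means managing the daemon entirely through systemctl, for example:

sudo systemctl start docker.service    # start the daemon now, under systemd supervision
sudo systemctl enable docker.service   # also start it automatically at boot
systemctl status docker.service        # confirm it is active
journalctl -u docker.service -e        # inspect the daemon's logs if it fails to start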
Note that the kernel for Red Hat Enterprise Linux 6 only supports a limited subset of the functionality needed for container support, and I don't think anyone tests either the daemon or container images on that operating system version.
I pulled the image from a public repo. The problem is that when I attach to the container and then exit, it reboots the host machine as well. I'm not sure what the problem is here.
Logs [when restarting the container]:
[root@localhost ~]# docker restart 714fed06f99f
714fed06f99f
[root@localhost ~]# Write failed: Broken pipe
<-= DISCONNECTED (PRESS <ENTER> TO RECONNECT) (Fri Feb 20 16:44:35 2015)
Another log [when running the "exit" command]:
[root@4e53f038d880 ~]# exit
exit
Write failed: Broken pipe
<-= DISCONNECTED (PRESS <ENTER> TO RECONNECT) (Fri Feb 20 16:48:42 2015)
Do you mean the container stops when you exit from attach?
This will happen, as you're "attaching" to the main process. If you send the main process a SIGKILL (Ctrl-c), it will stop, causing the container to exit. You can use the special Docker key sequence Ctrl-p Ctrl-q to exit the attach without stopping the container.
See http://docs.docker.com/reference/commandline/cli/#attach for more details.
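A quick way to try this on a throwaway container (the attach-test name and the centos image are arbitrary choices here):

docker run -dit --name attach-test centos /bin/bash   # start a disposable interactive container
docker attach attach-test                             # attach to its main process (the bash shell)
# press Ctrl-p Ctrl-q here to detach without stopping it
docker ps                                             # attach-test should still be listed as Up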