`at` only runs when I add a new job (Linux)

As the title says, I'm seeing some weird behaviour with at:
it puts my jobs in the queue and runs them correctly, but only after another job is scheduled.
This is the situation: when I add a new job, job 17 gets executed.
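For reference, this is roughly how I schedule and inspect the jobs (the job contents here are just a placeholder):
echo 'touch /tmp/at-test' | at now + 1 minute   # queue a throwaway job
atq                                             # the job stays listed here well past its scheduled time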
The atd service is running fine. My system is Linux 5.10.98-1-MANJARO.
PS: I already tried telling it not to email me (-M), using absolute/relative paths, etc. Jobs are executed only when atd is "triggered", i.e. woken up by scheduling a new job or restarting the systemd service.
PPS: I don't know if this helps, but this is the log for atd.service when I check its status:
feb 22 09:12:01 sant-nuc systemd[1]: Starting Deferred execution scheduler...
feb 22 09:12:01 sant-nuc systemd[1]: Started Deferred execution scheduler.
feb 22 14:16:44 sant-nuc atd[157517]: pam_unix(atd:session): session opened for user santiago(uid=1000) by (uid=2)
feb 22 14:16:44 sant-nuc atd[157517]: pam_env(atd:setcred): deprecated reading of user environment enabled
feb 22 14:16:44 sant-nuc atd[157517]: pam_env(atd:setcred): deprecated reading of user environment enabled
feb 22 14:16:44 sant-nuc atd[157517]: pam_unix(atd:session): session closed for user santiago

This is a bug in at 3.2.4:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004972
Apparently this will be fixed in version 3.2.5. That version is not yet available in the Arch Linux repos, so I have downgraded to 3.2.2, which does not have this issue.
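For anyone wanting to do the same, the downgrade on Arch/Manjaro looks roughly like this; the exact filename in your package cache may differ:
# reinstall the older version straight from the local package cache
sudo pacman -U /var/cache/pacman/pkg/at-3.2.2-*-x86_64.pkg.tar.zst
# restart the daemon so the old binary is picked up
sudo systemctl restart atd.service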

Related

Elasticsearch Enabling Remote Connection - Crashes AFTER Change

I just installed Filebeat, Logstash, Kibana, and Elasticsearch, all running smoothly, to trial this product for additional monthly reports/monitoring. I noticed that every time I change the "/etc/elasticsearch/elasticsearch.yml" config file to allow remote web access, the service crashes.
I'm new to the forum and to this product; my end goal is to figure out how to allow remote connections to Elasticsearch so I can guinea-pig and test without crashing it.
For reference, here is the error output when I run the 'sudo systemctl status elasticsearch' command:
Dec 30 07:27:37 ubuntu systemd[1]: Starting Elasticsearch...
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: bootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.se>
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: ERROR: Elasticsearch did not exit normally - check the logs at /var/log/elasticsearch/elasticsearch.log
Dec 30 07:27:53 ubuntu systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/CONFIG
Dec 30 07:27:53 ubuntu systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Dec 30 07:27:53 ubuntu systemd[1]: Failed to start Elasticsearch.
Any help on this is greatly appreciated!
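For context, a minimal /etc/elasticsearch/elasticsearch.yml sketch that would pass the bootstrap check quoted above, assuming a single-node test box rather than a production cluster:
network.host: 0.0.0.0          # listen on all interfaces for remote access
discovery.type: single-node    # single-node mode skips the production discovery bootstrap check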

Silence sysstat-collect.service?

The sar tool collects load values every 10 minutes on my CentOS Linux release 8.5.2111 system via sysstat-collect.service. It fills /var/log/messages with:
Dec 26 12:50:04 node systemd[1]: Starting system activity accounting tool...
Dec 26 12:50:04 node systemd[1]: sysstat-collect.service: Succeeded.
Dec 26 12:50:04 node systemd[1]: Started system activity accounting tool
Every 10 minutes. That's annoying and I want to silence it. Is that possible?
Thanks in advance
You can use logrotate to select which logs you want to keep or delete.
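For example, a minimal logrotate stanza; the retention values here are placeholders, and on CentOS /var/log/messages is normally already covered by /etc/logrotate.d/syslog:
/var/log/messages {
    weekly
    rotate 4      # keep four rotated copies
    compress
    missingok
}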

Systemd restarts my process which is not dead

I have the following systemd override, /etc/systemd/system/getty@tty1.service.d/override.conf:
[Service]
ExecStart=
ExecStart=-/home/auto/script.sh
Type=simple
StandardInput=tty
StandardOutput=tty
The point is that the user turns on the computer and can manage a few things on it without needing to log in.
Systemd starts the script and it works fine, but after a few minutes systemd restarts script.sh for no reason. I think the problem is that script.sh starts some child processes and systemd does not like that.
After a restart I can find these lines in syslog:
Sep 25 12:33:32 hostname systemd[1]: getty@tty1.service: Service has no hold-off time, scheduling restart.
Sep 25 12:33:32 hostname systemd[1]: getty@tty1.service: Scheduled restart job, restart counter is at 1.
Sep 25 12:33:32 hostname systemd[1]: Stopped Getty on tty1.
Sep 25 12:33:32 hostname systemd[1]: getty@tty1.service: Found left-over process 1711 (docker) in control group while starting unit. Ignoring.
Sep 25 12:33:32 hostname systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
I tried a lot of things, like Type=forking or RestartSec=86400s, but systemd still restarts script.sh.
Any idea?
Best regards,
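The "no hold-off time, scheduling restart" log line suggests the unit inherits Restart=always from the stock getty@.service template, so a sketch worth trying, purely as a guess, is to disable restarting in the same override file:
[Service]
Restart=no    # stop systemd from restarting the unit when script.sh's main process exits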

CentOS 7 - boot order needs to be changed in order for sge to start automatically

It seems like sge tries to start before Lustre is mounted when the server boots, which causes an error and prevents it from starting automatically after a reboot.
Can somebody tell me how to change the boot order so sge starts after Lustre is mounted?
Error message from the log:
Aug 12 11:46:21 dragen1 systemd: Configuration file /usr/lib/systemd/system/sge_execd.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 12 11:46:40 dragen1 sge_execd: error: SGE_ROOT directory "/cm/shared/apps/sge/2011.11p1" doesn't exist
Aug 12 11:46:40 dragen1 systemd: sge_execd.service: control process exited, code=exited status=1
Aug 12 11:46:40 dragen1 systemd: Unit sge_execd.service entered failed state.
Aug 12 11:46:40 dragen1 systemd: sge_execd.service failed
I added the following under [Unit] in the sge service file:
RequiresMountsFor=(Mount Point)
This fixed the problem.
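Concretely, the drop-in would look roughly like this; /cm/shared is inferred from the SGE_ROOT path in the log above, so adjust it to whatever mount point holds your SGE installation:
# /etc/systemd/system/sge_execd.service.d/override.conf  (illustrative path)
[Unit]
RequiresMountsFor=/cm/shared
Run systemctl daemon-reload afterwards so the new dependency takes effect.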

cron selinux security context issue

My system is Fedora 23.
I'm trying to run a cron job from /etc/crontab that is blocked by SELinux:
* * * * sun,mon,tue,wed,thu,fri,sat root DISPLAY=:0 eog $HOME/Pictures/somepic.jpg
SELinux context for the crontab:
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0 2664 Jan 23 18:12 /etc/crontab
If I run SELinux in permissive mode, the job runs every time.
Here's the journal entry for crond in enforcing mode:
-- Logs begin at Wed 2016-01-20 10:40:21 PST. --
Jan 23 18:25:01 localhost.localdomain CROND[20342]: (root) CMDOUT (/bin/sh: root: command not found)
Jan 23 18:25:25 localhost.localdomain crond[938]: (CRON) INFO (Shutting down)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 12% if used.)
Jan 23 18:25:25 localhost.localdomain crond[18645]: ((null)) Unauthorized SELinux context=system_u:system_r:system_cronjob_t:s0-s0:c0.c1023 file_context=unconfined_u:object_r:etc_t:s0 (/etc/crontab)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) FAILED (loading cron table)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) Unauthorized SELinux context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 file_context=unconfined_u:object_r:user_cron_spool_t:s0 (/var/spool/cron/root)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) FAILED (loading cron table)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (running with inotify support)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (#reboot jobs will be run at computer's startup.)
SELinux boolean settings:
cron_can_relabel --> off
cron_system_cronjob_use_shares --> off
cron_userdomain_transition --> on
fcron_crond --> off
I'm having the same issue. I had a crontab that worked great for a long time; I made an edit with crontab -e, and it stopped working. I tried as both root and a normal user. After some searching around, it turns out this is a currently known bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1263328
I tried the workaround listed in comment #19 and it's working fine.
Create a file mycron.cil with the content:
(allow unconfined_t user_cron_spool_t (file (entrypoint)))
Then run:
semodule -i mycron.cil
Then restart cron:
systemctl restart crond.service
Comment #21 tells how to remove the workaround once a proper fix is issued.
Remove by running:
semodule -r mycron
and, I assume, restart cron again.
You need to change the type of the cron file under /var/spool/cron.
Try this to relabel the files that triggered the 'Unauthorized SELinux context' messages:
# semanage fcontext -a -t user_cron_spool_t "/var/spool/cron(/.*)?"
# restorecon -R -vv /var/spool/cron
I was trying to shut down my RHEL 7.3 server automatically at 11:00 pm daily through a cron job in /etc/crontab.
I faced a similar issue: SELinux did not allow the job to run.
However, I found a solution by creating a new crontab file in /etc/cron.d/; the cron job then executed successfully and shut down the system at the defined time.
I got the solution from the RHEL page below, section 24.1.2, "Scheduling a Cron Job":
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-automating_system_tasks
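For illustration, such a /etc/cron.d/ file would look roughly like this; the filename and command are examples matching my use case:
# /etc/cron.d/nightly-shutdown -- fields: minute hour day-of-month month day-of-week user command
0 23 * * * root /sbin/shutdown -h now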
