Silence sysstat-collect.service?

The sar tool collects load values every 10 minutes on my CentOS Linux release 8.5.2111
via the service sysstat-collect.service. It fills /var/log/messages with:
Dec 26 12:50:04 node systemd[1]: Starting system activity accounting tool...
Dec 26 12:50:04 node systemd[1]: sysstat-collect.service: Succeeded.
Dec 26 12:50:04 node systemd[1]: Started system activity accounting tool
Every 10 minutes. That's annoying and I want to silence it. Is that possible?
Thanks in advance

You can use logrotate to choose which logs you want to keep or delete.
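If it is the growth of /var/log/messages that bothers you, a rough sketch of a logrotate stanza looks like the following. On CentOS the file is normally already covered by /etc/logrotate.d/rsyslog, so adjust that stanza rather than adding a duplicate; all values here are placeholders:

/var/log/messages {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true
    endscript
}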

Related

`at` only runs when I add a new job

As the title says, I'm seeing some weird behaviour with at:
It puts my jobs in the queue and runs them correctly, but only after another job is scheduled.
This is the situation: when I add a new job, then job 17 gets executed.
The atd service is running fine. My system is: Linux 5.10.98-1-MANJARO.
PS: I already tried telling it not to email me (-M), using absolute/relative paths, etc. Jobs are executed only when atd is "triggered", i.e. woken up by scheduling a new job or by restarting the systemd service.
PPS: I don't know if this helps, but here is the log for atd.service when I check its status:
feb 22 09:12:01 sant-nuc systemd[1]: Starting Deferred execution scheduler...
feb 22 09:12:01 sant-nuc systemd[1]: Started Deferred execution scheduler.
feb 22 14:16:44 sant-nuc atd[157517]: pam_unix(atd:session): session opened for user santiago(uid=1000) by (uid=2)
feb 22 14:16:44 sant-nuc atd[157517]: pam_env(atd:setcred): deprecated reading of user environment enabled
feb 22 14:16:44 sant-nuc atd[157517]: pam_env(atd:setcred): deprecated reading of user environment enabled
feb 22 14:16:44 sant-nuc atd[157517]: pam_unix(atd:session): session closed for user santiago
This is a bug in at 3.2.4:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004972
Apparently this will be fixed in version 3.2.5. That version is not yet available in the Arch Linux repos, so I have downgraded to 3.2.2, which does not have this issue.
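A minimal sketch of the downgrade, assuming the older package is still in the pacman cache (the exact file name will differ on your machine), plus a pin so it is not upgraded again:

$ ls /var/cache/pacman/pkg/at-*
$ sudo pacman -U /var/cache/pacman/pkg/at-3.2.2-1-x86_64.pkg.tar.zst
# then add "IgnorePkg = at" to /etc/pacman.conf until 3.2.5 lands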

Elasticsearch Enabling Remote Connection - Crashes After Change

I just installed Filebeat, Logstash, Kibana and Elasticsearch, all running smoothly, to trial this product for some additional monthly reports/monitoring. I noticed that every time I change the "/etc/elasticsearch/elasticsearch.yml" config file to allow remote web access, the change basically crashes the service.
I'm new to the forum and to this product. My end goal for this question is to figure out how to allow remote connections to Elasticsearch so I can play guinea pig and test without crashing it.
For reference, here is the error output when I run 'sudo systemctl status elasticsearch':
Dec 30 07:27:37 ubuntu systemd[1]: Starting Elasticsearch...
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: bootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.se>
Dec 30 07:27:52 ubuntu systemd-entrypoint[4067]: ERROR: Elasticsearch did not exit normally - check the logs at /var/log/elasticsearch/elasticsearch.log
Dec 30 07:27:53 ubuntu systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/CONFIG
Dec 30 07:27:53 ubuntu systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Dec 30 07:27:53 ubuntu systemd[1]: Failed to start Elasticsearch.
Any help on this is greatly appreciated!
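For what it's worth, the bootstrap check in the log fires because binding Elasticsearch to a non-loopback address switches it into production mode, which then requires explicit discovery settings. A minimal single-node sketch of /etc/elasticsearch/elasticsearch.yml, with placeholder values, might look like:

network.host: 0.0.0.0          # placeholder bind address for remote access
http.port: 9200
discovery.type: single-node    # tells the bootstrap checks this is a one-node test box

$ sudo systemctl restart elasticsearch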

Servers with same timezone but different time

I have 3 servers, 2 on AWS and one on Digital Ocean, and the timezone on all of them is set to CDT. But when I check the current time on all 3 with the date command, none of them match.
Server1: Wed Jun 12 23:36:01 CDT 2019
Server2: Wed Jun 12 23:45:51 CDT 2019
Server3: Wed Jun 12 23:38:39 CDT 2019
Could anyone please suggest what needs to be done here? Thanks.
Since you have not explicitly said that you have NTP running on them, you'll need to install it. Once it is installed and set up properly, all three should show exactly the same time.
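A rough sketch, assuming systemd-based distributions where chrony is the available NTP client (package and service names vary; on Debian/Ubuntu the service is chrony rather than chronyd):

$ sudo yum install chrony            # or: sudo apt-get install chrony
$ sudo systemctl enable --now chronyd
$ chronyc tracking                   # verify the clock is actually synchronised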

couchdb.service: Failed with result 'start-limit-hit'

After installing CouchDB, I can get the welcome message:
$ curl localhost:5984
{"couchdb":"Welcome","version":"2.1.2","features":["scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
But systemctl reports the service as failed:
$ systemctl status couchdb.service
● couchdb.service
Loaded: not-found (Reason: No such file or directory)
Active: failed (Result: start-limit-hit) since 一 2018-12-03 14:52:14 CST; 6min ago
Main PID: 30946 (code=killed, signal=USR2)
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Unit entered failed state.
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Failed with result 'signal'.
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Service hold-off time over, scheduling restart.
12月 03 14:52:14 gpuhuawei systemd[1]: Stopped Apache CouchDB.
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Start request repeated too quickly.
12月 03 14:52:14 gpuhuawei systemd[1]: Failed to start Apache CouchDB.
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Unit entered failed state.
12月 03 14:52:14 gpuhuawei systemd[1]: couchdb.service: Failed with result 'start-limit-hit'.
12月 03 14:53:53 gpuhuawei systemd[1]: Stopped Apache CouchDB.
12月 03 14:53:53 gpuhuawei systemd[1]: Stopped Apache CouchDB.
When I run couchdb from the command line, I get:
$ couchdb
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/etc/couchdb/default.ini","/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,eacces}}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,273}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
[1] 2288 user-defined signal 2 couchdb
My work environment:
$ uname -a
Linux gpuhuawei 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
This is a bit late, but the "start-limit-hit" message is a red herring. I have seen something very similar with a Moodle installation using MySQL: it is actually saying that you (or the service start process) have tried to restart the service too many times, or too soon after a failed attempt to start. Basically, the start-limit-hit message means "stop trying to do the same thing and expecting different results".
The actual issue will be further up in the syslog. Unhelpfully, service status does not return enough lines of output to show what is actually wrong. Try a service start, then go and look in the syslog itself; you will see the series of start attempts, and a line just above each one will hopefully tell you the real problem. In my case the mount point containing the database was missing (thanks, Azure). For one service start it tried to start five times in quick succession, failing each time because the data directory was not mounted, and on the sixth attempt it failed with start-limit-hit.
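A quick sketch of how to pull the real error out and clear the throttle, using standard systemd commands:

$ journalctl -u couchdb.service -n 100 --no-pager   # the full recent log, not just the status excerpt
$ sudo systemctl reset-failed couchdb.service       # clear the start-limit counter
$ sudo systemctl start couchdb.service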
Always back up your data/ and etc/ directories prior to upgrading CouchDB.
We recommend that you overwrite your etc/default.ini file with the version provided by the new release. New defaults sometimes contain mandatory changes to enable default functionality. Always place your customizations in etc/local.ini or any etc/local.d/*.ini file.
(I followed this and it worked.)
https://docs.couchdb.org/en/3.0.0/install/upgrading.html
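A minimal backup sketch before touching anything, using the /etc/couchdb path from the error above; the data directory is an assumption and differs between packages, so adjust both paths:

$ sudo cp -a /etc/couchdb /etc/couchdb.bak-$(date +%F)
$ sudo cp -a /var/lib/couchdb /var/lib/couchdb.bak-$(date +%F)   # assumed data directory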

CentOS 7 - boot order needs to be changed in order for sge to start automatically

It seems like SGE tries to start before Lustre is mounted when the server boots, so it fails to start automatically after a reboot.
Can somebody tell me how to change the order at boot, so that sge starts only after Lustre is mounted?
Error message from the log:
Aug 12 11:46:21 dragen1 systemd: Configuration file /usr/lib/systemd/system/sge_execd.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 12 11:46:40 dragen1 sge_execd: error: SGE_ROOT directory "/cm/shared/apps/sge/2011.11p1" doesn't exist
Aug 12 11:46:40 dragen1 systemd: sge_execd.service: control process exited, code=exited status=1
Aug 12 11:46:40 dragen1 systemd: Unit sge_execd.service entered failed state.
Aug 12 11:46:40 dragen1 systemd: sge_execd.service failed
I added the following under [Unit] in the sge service file:
RequiresMountsFor=(Mount Point)
This fixed the problem.
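The same fix as a sketch using a drop-in, so the packaged unit file stays untouched. The mount point is an assumption taken from the SGE_ROOT path in the error above; adjust it to wherever your Lustre filesystem is actually mounted:

# /etc/systemd/system/sge_execd.service.d/lustre.conf  (hypothetical drop-in name)
[Unit]
RequiresMountsFor=/cm/shared

$ sudo systemctl daemon-reload
$ sudo systemctl restart sge_execd.service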
