I rotate my nginx access log files with logrotate and have the following config:
/var/www/logs/*.log {
daily
missingok
dateext
dateformat _%Y-%m-%d
dateyesterday
rotate 90
compress
delaycompress
compressext .gz
notifempty
create 0640 www-data www-data
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
endscript
}
When cron runs that configuration at, for example, 5 a.m., all log entries written between midnight and 5 a.m. end up in the rotated file.
I want a log file covering exactly one day, from midnight to 11:59 p.m.
Is there a way to configure this?
To rotate the log at 11:59 p.m., configure the /etc/crontab file as follows:
59 23 * * * root <path_to_logrotate_command> -f <path_to_logrotate_config_file>
E.g. 59 23 * * * root /usr/sbin/logrotate -f /home/foo/logrotate/rotate_nginx
Actually, it would be better to just change the schedule of cron.daily, to something like this:
$ grep daily /etc/crontab
59 23 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Keep in mind that this applies to every script inside /etc/cron.daily:
$ ls /etc/cron.daily -l
total 48
-rwxr-xr-x 1 root root 376 Nov 20 2017 apport
-rwxr-xr-x 1 root root 1478 Apr 20 2018 apt-compat
-rwxr-xr-x 1 root root 355 Dec 29 2017 bsdmainutils
-rwxr-xr-x 1 root root 1176 Nov 2 2017 dpkg
-rwxr-xr-x 1 root root 372 Aug 21 2017 logrotate
-rwxr-xr-x 1 root root 1065 Apr 7 2018 man-db
-rwxr-xr-x 1 root root 539 Jun 26 2018 mdadm
-rwxr-xr-x 1 root root 538 Mar 1 2018 mlocate
-rwxr-xr-x 1 root root 249 Jan 25 2018 passwd
-rwxr-xr-x 1 root root 3477 Feb 21 2018 popularity-contest
-rwxr-xr-x 1 root root 246 Mar 21 2018 ubuntu-advantage-tools
-rwxr-xr-x 1 root root 214 Jun 27 2018 update-notifier-common
But if you need a specific schedule just for one logrotate rule, the accepted answer would be the way to go.
I have a container that was restarted 14 hours ago but has been running for 7 weeks. I want to inspect the container logs during a certain interval, but when I run the command below, I see no output:
docker container logs pg-connect --until 168h --since 288h
When I run the command below, I only see logs since the container was restarted:
docker logs pg-connect
Any idea how to retrieve older logs for the container?
More info, in case it helps:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f08fb6fb0fb kosta709/alpine-plus:0.0.2 "/connectors-restart…" 7 weeks ago Up 14 hours connectors-monitor
7e919a253a29 debezium/connect:1.2.3.Final "/docker-entrypoint.…" 7 weeks ago Up 14 hours pg-connect
>
> docker logs 7e919a253a29 -n 2
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
> docker logs 7e919a253a29 |head
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
at java.base/java.lang.Thread.run(Thread.java:834)
2022-08-24 16:13:06,567 ERROR || WorkerSourceTask{id=session-0} failed to send record to barclays.public.session: [org.apache.kafka.connect.runtime.WorkerSourceTask]
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
>
> ls -lart /var/lib/docker/containers/7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1/
total 90720
drwx------ 2 root root 6 Jul 1 10:39 checkpoints
drwx--x--- 2 root root 6 Jul 1 10:39 mounts
drwx--x--- 4 root root 150 Jul 1 10:40 ..
-rw-r----- 1 root root 10000230 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.9
-rw-r----- 1 root root 10000163 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.8
-rw-r----- 1 root root 10000054 Aug 24 16:16 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.7
-rw-r----- 1 root root 10000147 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.6
-rw-r----- 1 root root 10000123 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.5
-rw-r----- 1 root root 10000019 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.4
-rw-r----- 1 root root 10000159 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.3
-rw-r----- 1 root root 10000045 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.2
-rw-r--r-- 1 root root 199 Aug 25 16:30 hosts
-rw-r--r-- 1 root root 68 Aug 25 16:30 resolv.conf
-rw-r--r-- 1 root root 25 Aug 25 16:30 hostname
-rw------- 1 root root 7205 Aug 25 16:30 config.v2.json
-rw-r--r-- 1 root root 1559 Aug 25 16:30 hostconfig.json
-rw-r----- 1 root root 10000085 Aug 25 16:31 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.1
drwx--x--- 4 root root 4096 Aug 25 16:31 .
-rw-r----- 1 root root 2843232 Aug 26 06:38 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log
As stated by [the official guide][1]:
The docker logs command batch-retrieves logs present at the time of execution.
To solve this issue, you should instrument the container software to log its output to a persistent (and, if you want, rotated) log file.
[1]: https://docs.docker.com/engine/reference/commandline/logs/
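As a side note, if the underlying goal is simply to keep more history, the json-file logging driver's rotation limits can be raised when the container is started. A sketch (the image name is a placeholder and the sizes are arbitrary):

docker run -d \
  --log-driver json-file \
  --log-opt max-size=50m \
  --log-opt max-file=10 \
  some-image:tag

The same limits can also be set daemon-wide for new containers via "log-driver" and "log-opts" in /etc/docker/daemon.json.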
I created a user whose shell is /bin/rbash, so it cannot execute some commands, like 'cd' and 'ls'. But it can still browse other directories: when the user enters a path like '/bin/' and presses Tab, the shell shows the files under /bin. This user is only allowed to log in through the serial port. How can I restrict the user to working in its home directory, so that it cannot read other directories?
A quick search turned up a couple of questions that I think may fit your requirements:
Create ssh user which can only access home directory
Give user read/write access to only one directory
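For reference, the recipe in those answers combines rbash with a locked-down PATH; a rough sketch (the user name and the whitelisted command are illustrative):

# create the user with the restricted shell as login shell
useradd -m -s /bin/rbash limited
# give it a private bin directory holding only the allowed commands
mkdir /home/limited/bin
ln -s /bin/less /home/limited/bin/less
# point PATH at that directory; rbash refuses to change PATH afterwards
echo 'export PATH=$HOME/bin' > /home/limited/.bash_profile
chown root:root /home/limited/.bash_profile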
Put the line
set disable-completion on
in ~/.inputrc and restart your shell. It will disable completion entirely.
This solved my problem.
It is possible to use chroot to implement a user that does not see other directories.
This might be quite a crazy solution, and not the recommended way to do it.
Create a script that makes chroot
#!/bin/sh
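# jail the session into /home/test and replace this script with a shell inside the jail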
exec /usr/sbin/chroot /home/test /bin/sh
Use the script as the login shell (/etc/passwd):
test:x:0:0:Linux User,,,:/:/usr/sbin/chrootsh.sh
Copy all needed files into the user's home directory. You need at least the shell and the libraries the shell requires:
~ # ls -lR /home/test/
/home/test/:
total 2
drwxr-xr-x 2 root test 1024 Aug 21 13:54 bin
drwxr-xr-x 2 root test 1024 Aug 21 13:54 lib
/home/test/bin:
total 1776
-rwxr-xr-x 1 root test 908672 Aug 21 13:54 ls
-rwxr-xr-x 1 root test 908672 Aug 21 13:54 sh
/home/test/lib:
total 1972
-rwxr-xr-x 1 root test 134316 Aug 21 13:54 ld-linux.so.3
-rwxr-xr-x 1 root test 1242640 Aug 21 13:54 libc.so.6
-rwxr-xr-x 1 root test 640480 Aug 21 13:54 libm.so.6
~ #
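To find out which libraries the shell binary actually needs, ldd is handy; a minimal sketch of populating the jail (paths and library names vary by distro and architecture):

mkdir -p /home/test/bin /home/test/lib
cp /bin/sh /bin/ls /home/test/bin/
ldd /bin/sh        # prints e.g. libc.so.6 and the ld-linux loader it requires
cp /lib/ld-linux.so.3 /lib/libc.so.6 /lib/libm.so.6 /home/test/lib/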
Once everything is in place, log in as the user:
~ # su - test
/ # pwd
/
/ # ls -lR /
/:
total 2
drwxr-xr-x 2 0 1000 1024 Aug 21 13:54 bin
drwxr-xr-x 2 0 1000 1024 Aug 21 13:54 lib
/bin:
total 1776
-rwxr-xr-x 1 0 1000 908672 Aug 21 13:54 ls
-rwxr-xr-x 1 0 1000 908672 Aug 21 13:54 sh
/lib:
total 1972
-rwxr-xr-x 1 0 1000 134316 Aug 21 13:54 ld-linux.so.3
-rwxr-xr-x 1 0 1000 1242640 Aug 21 13:54 libc.so.6
-rwxr-xr-x 1 0 1000 640480 Aug 21 13:54 libm.so.6
/ #
I have a test application that runs every hour and uses a unique log file for each execution. To clean up the logs, the following logrotate configuration has been set:
{
# Daily rotation with 1 week of backlog
daily
rotate 7
maxage 7
dateext
compress
}
The first day, the log file is compressed (which is OK), but an empty file is left behind, and on every following day that file is "emptied" and compressed again. That creates six files for every log file, which fills the inode table of the filesystem. Here are two examples:
-rw-r--r-- 1 root root 1752 Feb 11 01:36 J20190211013601_Status.txt-20190212.gz
-rw------- 1 root root 20 Feb 12 03:33 J20190211013601_Status.txt-20190213.gz
-rw------- 1 root root 20 Feb 13 03:37 J20190211013601_Status.txt-20190214.gz
-rw------- 1 root root 20 Feb 14 03:10 J20190211013601_Status.txt-20190215.gz
-rw------- 1 root root 20 Feb 15 03:12 J20190211013601_Status.txt-20190216.gz
-rw------- 1 root root 20 Feb 16 03:36 J20190211013601_Status.txt-20190217.gz
-rw------- 1 root root 20 Feb 17 03:44 J20190211013601_Status.txt-20190218.gz
-rw------- 1 root root 0 Feb 18 03:24 J20190211013601_Status.txt
-rw-r--r-- 1 root root 1752 Feb 11 02:36 J20190211023601_Status.txt-20190212.gz
-rw------- 1 root root 20 Feb 12 03:33 J20190211023601_Status.txt-20190213.gz
-rw------- 1 root root 20 Feb 13 03:37 J20190211023601_Status.txt-20190214.gz
-rw------- 1 root root 20 Feb 14 03:10 J20190211023601_Status.txt-20190215.gz
-rw------- 1 root root 20 Feb 15 03:12 J20190211023601_Status.txt-20190216.gz
-rw------- 1 root root 20 Feb 16 03:36 J20190211023601_Status.txt-20190217.gz
-rw------- 1 root root 20 Feb 17 03:44 J20190211023601_Status.txt-20190218.gz
-rw------- 1 root root 0 Feb 18 03:24 J20190211023601_Status.txt
How can I correct this, so that the files are deleted after being compressed?
Thanks for your time and help.
This is how logrotate is supposed to function; your issue stems from the fact that you're using a unique filename every time your application runs.
When logrotate runs for the first time on each log, it moves the log file from "J20190211023601_Status.txt" to "J20190211023601_Status.txt-20190212.gz" and then creates a new, empty file named J20190211023601_Status.txt.
Logrotate has no inherent idea that those filenames are unique and thus will never be populated again; all it sees is a log it's rotated in the past, so it must be rotated again as per your configuration.
Your easiest solution here is to add the nocreate directive to this rotation; it prevents that new log file from being created and subsequently rotated, while still respecting the 7-day age limit on previously rotated files:
{
daily
maxage 7
dateext
compress
nocreate
}
I'm trying to run a bash script which should go into a specific directory.
The problem is that the script won't go into the newest folder.
The folder looks like this:
root#raspberry ~/jdownloader/logs # ls -lha
total 104K
drwxr-xr-x 9 root root 4.0K Nov 30 11:52 .
drwxr-xr-x 14 root root 4.0K Nov 30 11:52 ..
drwxr-xr-x 2 root root 4.0K Nov 30 11:18 1479843940152_Tue, Nov 22, 2016 20.45 +0100
drwxr-xr-x 2 root root 4.0K Nov 30 11:21 1480501204839_Wed, Nov 30, 2016 11.20 +0100
drwxr-xr-x 2 root root 4.0K Nov 30 11:22 1480501242752_Wed, Nov 30, 2016 11.20 +0100
drwxr-xr-x 2 root root 4.0K Nov 30 11:30 1480501308071_Wed, Nov 30, 2016 11.21 +0100
drwxr-xr-x 2 root root 4.0K Nov 30 11:56 1480503116574_Wed, Nov 30, 2016 11.51 +0100
drwxr-xr-x 3 root root 12K Nov 23 11:25 extracting
drwxr-xr-x 2 root root 64K Nov 30 11:22 updatehistory
The important snippet from my script is:
#!/bin/bash
declare dir=/var/log/scriptlog/jdstate
declare dir2=~/jdownloader/logs
NewFolder=`ls -rt1 ~/jdownloader/logs -I extracting -I updatehistory | tail -1 > /var/log/scriptlog/jdstate/newfolder.log`
OutputNewFolder=`head $dir/newfolder.log -n 1`
cd\ "\"$dir2/$OutputeNewFolder\""
When I try to run the script, it shows me an error saying it can't find the directory.
But when I copy/paste the path, it will go to the directory.
Any idea how to make the script go to the directory?
For everyone who is searching for an answer:
This is my latest code snippet, which worked for me:
#!/bin/bash
declare dir=/var/log/scriptlog/jdstate
declare dir2=~/jdownloader/logs
NewFolder=`ls -rt1 ~/jdownloader/logs -I extracting -I updatehistory | tail -1 > /var/log/scriptlog/jdstate/newfolder.log`
OutputNewFolder=`head $dir/newfolder.log -n 1`
cd "$dir2/$OutputNewFolder"
I'm sure there are possible improvements for finding the newest folder, but this works just fine for me.
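A slightly shorter variant is sketched below; it assumes GNU ls (for -I) as in the original, avoids the intermediate newfolder.log file, and quotes the expansion since the folder names contain spaces:

#!/bin/bash
dir2=~/jdownloader/logs
# sort by modification time, newest first, skipping the two non-log folders
NewFolder=$(ls -t1 "$dir2" -I extracting -I updatehistory | head -n 1)
cd "$dir2/$NewFolder"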
I need to write a file-rotation shell script. I have data in the following layout in a target directory (/backup/store_id/date_folders),
like:
cd /backup/
drwxr-xr-x 5 root root 4096 Mar 25 12:30 44
drwxr-xr-x 3 root root 4096 Mar 25 12:30 45
drwxr-xr-x 4 root root 4096 Mar 25 12:30 48
drwxr-xr-x 3 root root 4096 Mar 25 12:30 49
cd /backup/44/
drwxr-xr-x 2 root root 4096 Mar 25 12:30 22032014
drwxr-xr-x 2 root root 4096 Mar 25 12:30 23032014
drwxr-xr-x 2 root root 4096 Mar 25 12:30 24032014
drwxr-xr-x 2 root root 4096 Mar 25 12:30 25032014
Now 44 (a store_id) contains four date folders. I want each store_id (like the 44 folder) to contain only the three most recent date folders (23, 24, 25), and 22 should be deleted. Please give me a hint on how to write this as a shell script.
This should work; it keeps the three most recent date folders in each store directory and removes the rest:
cd /backup && for storeId in */; do (cd "$storeId" && ls -r | tail -n +4 | xargs -r rm -r); done
I assume here that the directory names are more important than their timestamps...
If that is not the case, use ls -t instead of ls -r, to make ls sort on modification timestamps, newest first...
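Before letting rm loose, a dry run that only prints what would be removed is cheap insurance; it is the same pipeline with rm -r swapped for echo:

cd /backup && for storeId in */; do (cd "$storeId" && ls -r | tail -n +4 | xargs -r echo "would delete in $storeId:"); done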