I am using the following recipe to perform log rotation:
bash 'adding_logrotate_for_consul' do
code <<-EOH
echo "" >> /etc/logrotate.conf
echo "/tmp/output.log {" >> /etc/logrotate.conf
echo -e "\tsize 20M" >> /etc/logrotate.conf
echo -e "\tcreate 700 root root" >> /etc/logrotate.conf
echo -e "\trotate 3" >> /etc/logrotate.conf
echo "}" >> /etc/logrotate.conf
EOH
end
The above code runs completely fine, and it adds the following entry to /etc/logrotate.conf:
/tmp/output.log {
    size 20M
    create 700 root root
    rotate 3
}
However, after the above entry is added using Chef, I have to manually run the following command on the node every time:
logrotate -s /var/log/logstatus /etc/logrotate.conf
How can I include the above command in the Chef recipe so that log rotation can be performed by the recipe after the file size reaches 20M?
I think there are a few things here that you are doing less than ideally. Let me go through them one by one:
I am using the following recipe to perform log rotation:
bash 'adding_logrotate_for_consul' do
code ...
This is not a good way of creating a logrotate entry. Chef (and other orchestration tools) have a very nice feature called idempotency. Its basic meaning is that you can run the same recipe many times, and it will only converge, or "apply" itself, if it is needed. A problem you will have with the way you are doing this is that your block of code will run EVERY time you run your cookbook - so after 5 runs, you will have 5 identical entries in /etc/logrotate.conf. That wouldn't be very good...
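You can see this for yourself on a node that has converged a few times; the command below is just a quick check, not part of the fix:

# Each chef-client run appends another copy of the same block,
# so this count keeps growing with every run:
grep -c '/tmp/output.log {' /etc/logrotate.conf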
Thankfully there are much better ways of doing what you want to do. Are you familiar with the Chef Supermarket? This is a site where you can find many pre-made cookbooks to extend the functionality of your cookbook. So, for your current problem, you could for example use the cookbook called logrotate. How can you use it in your own cookbook? You need to include it by adding the following to these files:
Berksfile:
source 'https://supermarket.chef.io'
metadata
metadata.rb:
depends 'logrotate'
Now your cookbook is aware of the 'logrotate' cookbook, and you can use the Chef resources it provides. So you could create the following recipe:
logrotate_app 'app_with_logs' do
  path '/tmp/output.log'
  options ['size 20M']
  rotate 3
  create '700 root adm'
end
Now, when you run your recipe, this will create the logrotate entry, but only if it doesn't already exist. Handy! (Note: this might create the entry in /etc/logrotate.d/ instead of /etc/logrotate.conf. That is the preferred way of adding a logrotate entry.)
On to the next part.
How can I include the above command in the Chef recipe so that log rotation can be performed by the recipe after the file size reaches 20M?
Logrotate as a program runs automatically, once a day. When it runs, it will check all entries in /etc/logrotate.conf and /etc/logrotate.d/*, and run them if they fulfil the requirements (in this case, a size of 20M). However, since it only runs once a day, depending on how fast your log grows, it could be much bigger than 20M by the time it gets evaluated and rotated!
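If you want to see what logrotate would decide right now, without waiting for the daily run, you can do a dry run on the node (and, for testing, force a rotation):

# Dry run: show what logrotate would decide right now, without rotating anything
logrotate -d /etc/logrotate.conf
# Force a rotation immediately, ignoring the size/time conditions (useful for testing)
logrotate -f /etc/logrotate.conf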
So now you have two options. One, let logrotate work as it is expected to, and rotate your log once a day if, when it looks at it, it's over 20M in size. Or two, do what you asked and run that command from a Chef recipe, although this would not be a good way of doing it. But, for completeness, I will tell you how you can run a command with Chef. Remember, though: this will, again, NOT be idempotent, which is why it is not a good way of doing this!
To run a command from a Chef recipe, use the execute resource. For example:
execute 'rotate_my_log_because_I_cant_wait_24h' do
  command 'logrotate -s /var/log/logstatus /etc/logrotate.conf'
end
That will run that command on your node. But again, this is not the recommended way of doing this.
Related
How would I force my server to download a specific file every time the server is started up? For example:
I have a plugin coded, and I want to make it so it will automatically reinstall that plugin to the server every time it is started up, but I want it to check if the server has the plugin already.
So, for example the plugin would be called Test.jar
I want to check to see if the server contains the file "Test.jar", if it does, do nothing, else install the plugin.
Also, if the above is possible, how would I check to see that it is the correct file, rather than just a random file named "Test.jar" to get around that check?
If it helps, I use the Pterodactyl panel, so maybe a script can be added to the Startup Command?
I also have all the information necessary to hook a discord bot up to the panel, which I've started doing, but I can't find a good API for javascript.
I tried using websockets, but I cannot seem to find any documentation to assist me with this. I also tried asking for support on the Pterodactyl support Discord and searching the API documentation, but I can't figure it out.
For the main part of your question, the native solution is to use the cron daemon (crond) by configuring /etc/crontab.
With this, you can schedule the desired commands/scripts to run regularly, or once per reboot.
So, to keep /etc/crontab configuration simple and clear, I'd suggest creating a proper bash script (make it executable) and then configuring it in crontab to be run on server boot.
Example: Add the following line to your /etc/crontab file:
Note: This assumes you have root privileges; otherwise, read more about the crontab command to schedule it for your user.
@reboot root /script.sh
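A couple of quick sanity checks after that (assuming the script lives at /script.sh, as in the line above):

# The script must be executable, or it will silently never run
chmod +x /script.sh
# Confirm the entry made it into the system crontab
grep '@reboot' /etc/crontab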
Since you didn't detail the reproduction steps for all parts of the goal you're trying to reach (as @Fravadona also stated in the comments), please feel free to use the following script as a hint/example and develop it to cover your needs.
#!/bin/bash
## USER DEFINED VARIABLES
# Jar file path
# Relevant path can be passed to this script as the first argument
# Default is: '/Test.jar'
_jarFilePath="${1:-/Test.jar}"
# Jar file sha256sum hash
# Intended to be provided manually for now; the hash of the file can be obtained by
# running the 'sha256sum' command, passing the file path to it:
# sha256sum <PATH_TO_FILE>
# but this may be automated with some relevant logic.
_jarFileHash="<PUT_FILE_HASH_HERE>"
## SCRIPT BODY
# Check if the Jar file is available and has the correct hash; if so, do nothing (exit),
# otherwise continue running some commands to download/install it.
if [ -f "${_jarFilePath}" ] && [ "$(sha256sum "${_jarFilePath}" | awk '{print $1}')" == "${_jarFileHash}" ]; then
    exit 0
fi
# Put the commands that work for you to install the file after this line.
# Example: wget -O ${_jarFilePath} <URL_TO_DOWNLOAD_FILE_FROM>
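For example, assuming the script is saved as check_plugin.sh and the plugin lives at /home/container/plugins/Test.jar (both names are hypothetical; adjust to your setup), you would first record the expected hash and then wire the script into your startup routine:

# Compute the expected hash once and paste it into _jarFileHash
sha256sum /home/container/plugins/Test.jar
# Then run the check, passing the plugin path as the first argument
./check_plugin.sh /home/container/plugins/Test.jar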
Multiple scripts are running on my Linux server and they are generating huge amounts of data. I realise that they will eat all of my 500GB of storage in the next 2-5 days, and the scripts require 10 more days to finish the process, which means they need more space. So most likely I am going to have a space issue and I will have to restart the entire process again.
Process is like this -
script1.sh content is like below
"calling an api" > /tmp/output1.txt
script2.sh content is like below
"calling an api" > /tmp/output2.txt
Executed like this -
nohup ./script1.sh & ### this create file in /tmp/output1.txt
nohup ./script2.sh & ### this create file in /tmp/output2.txt
My initial understanding was that if I followed the steps below, it should work:
While the scripts are running with nohup in the background, execute this command:
mv /tmp/output1.txt /tmp/output1.txt_bkp; touch /tmp/output1.txt
Then I would transfer /tmp/output1.txt_bkp to another server via FTP and remove it afterwards to free space on the server, and the script would keep on writing to the /tmp/output1.txt file.
But this assumption was wrong, and the script keeps on writing to the /tmp/output1.txt_bkp file. I think the script writes based on the inode number, which is why it keeps on writing to the old file.
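A quick way to confirm that is to compare inode numbers; a rename on the same filesystem keeps the same inode, so the open file descriptor still points at it:

ls -i /tmp/output1.txt            # note the inode number
mv /tmp/output1.txt /tmp/output1.txt_bkp
ls -i /tmp/output1.txt_bkp        # same inode number as before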
Now the question is: how do I avoid the space issue without killing/restarting the scripts?
Essentially what you're trying to do is pull a file out from under a script that's actively writing into it. I'm not sure how nohup would let you do that.
May I suggest a different approach?
Why don't you move an x number of lines from your /tmp/output[x].txt to /tmp/output[x].txt_bkp? You can do so without much trouble while your script is running and dumping stuff into /tmp/output[x].txt. That way you can free up space by shrinking your output[x] files.
Try this as a test. Open 2 terminals (or use screen) to your Linux box. Make sure both are in the same directory. Run this command in one of your terminals:
for line in `seq 1 2000000`; do echo $line >> output1.txt; done
And then run this command in the other before the first one finishes:
head -1000 output1.txt > output1.txt_bkp && sed -i '1,+999d' output1.txt
Here is what's going to happen. The first command will start producing a file that looks like this:
1
2
3
...
2000000
The second command will chop off the first 1000 lines of output1.txt and put them into output1.txt_bkp and it will do so WHILE the file is being generated.
Afterwards, look inside output1.txt and output1.txt_bkp, you will see that the former looks like this:
1001
1002
1003
1004
...
2000000
While the latter will have the first 1000 lines. You can do the same exact thing with your logs.
A word of caution: Based on your description, your box is under a heavy load from all that dumping. This may negatively impact the process outlined above.
I'm trying to make logrotate execute daily. So far, I have tried putting this inside cron.daily:
/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
And my logrotate.conf contains:
/var/lib/mysql/queries.log {
    size 1k
    copytruncate
    rotate 4
}
When I try to execute logrotate -f /etc/logrotatetest.conf, it works, but the daily cron does not execute. So I created an SH file containing the above code, which is then executed by a cronjob: * * * * * /home/rotate.sh 2>/home/rotate.log 1>&2
I used * * * * * for testing, but it does not work.
rotate.sh contains: logrotate -f /etc/logrotate.conf
I don't know why it isn't executed by the cronjob :(
PS. The log file, the SH file, and logrotate.conf all have '777' access rights.
How you asked this question is probably why you haven't received more of a response to it.
First, the /etc/cron.daily/logrotate script you described is the default logrotate cron script; it should have already been there when logrotate was installed. Why you had to put it there, I do not know, but that already sounds like either your logrotate or your cron is not set up properly. Is this file executable? Executable bits on the logs and the conf don't really matter; they don't have executable code inside them.
Second, /etc/logrotate.conf is supposed to be the base schema for logrotate, the 'default' to be parsed before any other directories specified by an "include" parameter inside that file (there may also be schemas for other specific logs here, too). If you do not have this file set up properly, there will be no base schema for logrotate to utilize. You need to show us the complete contents of that file. Have you tried debugging logrotate's execution with the -d flag?
Third, what is /etc/logrotatetest.conf? You mention it once without saying what it is and say that it works. How are we supposed to know anything about it? Either post or describe the contents in relation to the original .conf file.
Fourth, how are you employing cronjobs? Is anacron involved? You directly relate the issue to cron but then give us no information on how you have cron set up.
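For instance, you can usually see whether the daily job fired at all by checking the cron log (the exact file varies by distribution):

# Debian/Ubuntu: cron messages usually end up in syslog
grep -i cron /var/log/syslog
# RHEL/CentOS: cron has its own log
grep -i logrotate /var/log/cron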
If you want a serious answer to this question, I'd go back and make some edits because at this point people that could help have very little to build on here.
I have created a list of cron jobs (see below) using sudo crontab -e in the root crontab file. When I run the commands individually on the command line, they work fine; however, none of the jobs are run by cron. Any help would be appreciated. Do I need to add something else to the crontab file?
48 * * * * sudo gzip -k /calcservergc.log.*
49 * * * * for file in /calcservergc.log.*.gz; do sudo mv $file $(hostname).${file:1}; done
50 * * * * sudo rm $(hostname).*.log.*.gz
sudo
The sudo command may not work in a crontab. Generally you need a password to run sudo; there might be a way to have it run without a password in a cron job, but attempting that is not recommended.
cron
You'll need to run the cron job as a user that has access to do what you need to accomplish. Cron runs with a short list of specific paths; by default that list is pretty short. On a Linux box I use, the path is /sbin:/usr/sbin:/bin:/usr/bin.
Also, the paths in your commands need to be more specific. Cron doesn't run as a normal user, so you have to be explicit about the paths and the output of those commands.
For instance, on the first command, where will the gzip file be placed?
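For instance, something along these lines in the crontab makes the environment and the output explicit (the output path below is just an example; gzip -k itself writes the .gz next to the source file):

# Give cron a usable PATH (or use full paths, found with 'command -v gzip')
PATH=/sbin:/usr/sbin:/bin:/usr/bin
# Send stdout/stderr somewhere you can inspect later
48 * * * * gzip -k /calcservergc.log.* > /tmp/gzip_cron.out 2>&1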
logrotate
It looks like you're trying to zip a log file, then move log files, then remove old log files - this is exactly what logrotate accomplishes, so it would be worth installing. Logrotate solves problems like the log file being open when you run these commands - generally the process that has the log file open doesn't lose the file handle even if you rename the file, so the log continues to be written to even after you move it. It also handles keeping an archive of recent log files, like syslog.1.gz, syslog.2.gz, syslog.x.gz, or as many back as you have storage space for or want to keep for posterity.
Summary
Don't use sudo in cron
Be specific in paths when running commands in cron
Use logrotate to accomplish this specific task in your question
I don't have 50 points of reputation, so I can't comment on your question; I'll try to say it all in one shot.
I see a possible problem with your 3 commands, each called one minute apart. Let's say the first operation takes more than one minute to run (it shouldn't happen, but in theory it could): your second call won't work, or worse, it could work on half the data. You also don't want to put, let's say, a 5-minute delay between your commands; that would be a waste of time.
What you could do is create a shell script in which you put the 3 commands. This way your operations won't collide: they will be executed one after the other, as sketched below.
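A minimal sketch of such a wrapper, reusing the commands from the question (dropping sudo, since the root crontab already runs as root; the paths are the question's own and may need adjusting):

#!/bin/bash
# Give the script a sane PATH, since cron's environment is minimal
PATH=/sbin:/usr/sbin:/bin:/usr/bin

# The three steps from the question, run in order so each finishes before the next starts
gzip -k /calcservergc.log.*

# Note: ${file:1} strips the leading '/', so the renamed files land in the current directory
for file in /calcservergc.log.*.gz; do
    mv "$file" "$(hostname).${file:1}"
done

rm "$(hostname)".*.log.*.gz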
Then put your file in a place like /bin (you can also create a symbolic link with ln -s) and call your script with cron. (Be careful with the paths in the shell script.)
Now, for the sudo problem... well, even if you put it in a shell script, you would still need to enter your sudo password, and cron runs in the background, so you won't be able to enter it.
You could try two solutions: change the rights on the folder containing your files (by using chmod -R 777 or chmod 755 on the folder), or move/copy your files into a directory where you have read and write access.
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
Put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail. So if it cannot cd to the directory, it will not run svn update or restart apache. You can do this programmatically by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
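Putting those points together, a minimal version might look like this (the project path is the one from the question; adjust it for your layout):

#!/bin/bash
set -e   # stop at the first command that fails

cd "$HOME/django_src/django_apps/team_proj"
svn update
sudo /etc/init.d/apache2 restart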
I'd setup a cron job doing that automatically.
Since you're using Python, check out fabric - you can use it to automate these kinds of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools), or with programming languages such as Python, Perl (or Ruby, PHP, Java), etc. - basically any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why ?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I choose Perl particularly because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file which you can later analyze for failures of the script.
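A simple way to get that logging is a small wrapper function (the log file location here is just an example):

#!/bin/bash
LOG="$HOME/deploy.log"   # example location for the log file

run_logged() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') START: $*" >> "$LOG"
    "$@"
    local status=$?
    echo "$(date '+%Y-%m-%d %H:%M:%S') EXIT $status: $*" >> "$LOG"
    return "$status"
}

cd "$HOME/django_src/django_apps/team_proj"
run_logged svn update
run_logged sudo /etc/init.d/apache2 restart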
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
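For example (the repository URL and release name below are placeholders):

# Export a clean copy of the code (no .svn metadata) into a fresh directory
svn export http://svn.example.com/myproject/trunk releases/build-42

# Option 1: swap directories
mv current old && mv releases/build-42 current

# Option 2 (instead of the mv above): keep 'current' as a symlink and repoint it in one step
# ln -sfn releases/build-42 current

sudo /etc/init.d/apache2 restart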