Hi guys, I have a problem. I had configured a basic script on Windows, run every 8 hours with Task Scheduler, to back up files to AWS, but now I have to do it on Linux (CentOS and Ubuntu). The script is basically this: `aws sync "PC folder" "AWS bucket"` and it is launched with crontab. But how can I run this script for only 8 hours and then have it stop automatically? How can I do this? Please help.
If it's the only aws process running on the machine, it could be as simple as writing a script that runs
pkill aws
and schedule it 8 hours after the start time.
Note that this searches for any process named aws and kills it, so if other aws commands are running they would be killed too. Killing by process name isn't the most reliable approach; killing by PID is usually better, but I would need more details about your script to know how to get the PID and where to store it.
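Alternatively, if the sync runs in its own script, GNU coreutils `timeout` can enforce the 8-hour limit from the same crontab line, with no second scheduled job. A minimal sketch of the pattern — here a 2-second limit and `sleep` stand in for the real 8-hour run, and `backup.sh` is a hypothetical script name:

```shell
#!/bin/bash
# timeout kills the command once the duration elapses and exits with 124.
# The real crontab entry would look like: timeout 8h /path/to/backup.sh
timeout 2s sleep 10
status=$?
echo "exit=$status"   # 124 indicates the time limit was hit
```

An exit status of 124 from `timeout` is how you can tell (e.g. in a log) that the job was stopped by the limit rather than finishing on its own.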
Related
I have a Vagrant script that pulls down an Ubuntu 18.04 (bionic) box, which then runs an Ansible playbook that generates two Ubuntu 20.04 EC2s, and these successfully run through every "task:" I assign. I am able to run everything I want in a largely automated download-and-execution flow for a publisher/subscriber setup. Here is the issue: I can run my .sh and .py scripts manually and the system works, but when I use the Ansible methods I must be doing something wrong, much like these solutions point to:
Shell command works on direct hosts but fail on Ansible
ansible run command on remote host in background
https://superuser.com/questions/870871/run-a-remote-script-application-in-detached-mode-in-ansible
What I want to do is simply correct the issue with this, and run it in the background.
- name: Start Zookeeper
  shell: sudo /usr/local/kafka-server/bin/zookeeper-server-start.sh /usr/local/kafka-server/config/zookeeper.properties </dev/null >/dev/null 2>&1 &

- name: Sleep for 15 seconds and continue with play
  wait_for:
    timeout: 15

- name: Start Kafka broker
  shell: sudo /usr/local/kafka-server/bin/kafka-server-start.sh /home/ubuntu/usr/local/kafka-server/config/server.properties </dev/null >/dev/null 2>&1 &
I have tried it with just a single "&" at the end, as well as passing in explicit calls to my user account "ubuntu". I've used "become: yes". I really don't want to use a daemon, especially since others seem to have used this approach successfully before.
I do want to note one glaring sign that I can't seem to think through: the play hangs when I don't include the &, but if I do include the & it outright fails. That made me think it was running, but the play won't proceed because these are listener processes.
# - name: Start Zookeeper
#   become: yes
#   script: /usr/local/kafka-server/bin/zookeeper-server-start.sh
#   args:
#     chdir: /usr/local/kafka-server/config/zookeeper.properties
This failed, and I'd rather not create another script to copy over the directories and localize it if there is a simple solution to the first block of code.
Multiple ways to skin this cat, but I'd rather just have my mistake on the shell ansible command fixed, and I don't see it.
As explained in the answers to your third link:
https://superuser.com/questions/870871/run-a-remote-script-application-in-detached-mode-in-ansible
This happens because the script process is a child process of the shell spawned by Ansible. To keep the process running after Ansible has finished, you would need to disown this child process.
The proper way to do this is to configure the software (ZooKeeper in your case) as a service. There are plenty of examples of this, such as:
https://askubuntu.com/questions/979498/how-to-start-a-zookeeper-daemon-after-booting-under-specific-user-in-ubuntu-serv
Once you have configured it as a service, you can start and stop it using the Ansible service module.
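As a sketch of that approach (the unit file contents, service name, and restart policy are assumptions based on the paths in the question, not a tested configuration), you would install a systemd unit and then drive it from the playbook:

```yaml
# /etc/systemd/system/zookeeper.service (hypothetical unit file):
#   [Unit]
#   Description=Apache ZooKeeper
#   [Service]
#   ExecStart=/usr/local/kafka-server/bin/zookeeper-server-start.sh /usr/local/kafka-server/config/zookeeper.properties
#   ExecStop=/usr/local/kafka-server/bin/zookeeper-server-stop.sh
#   Restart=on-abnormal
#   [Install]
#   WantedBy=multi-user.target

- name: Start Zookeeper as a service
  become: yes
  systemd:
    name: zookeeper
    state: started
    enabled: yes
    daemon_reload: yes
```

Because systemd tracks the process itself, the task returns as soon as the service is up, and there is no child-of-Ansible shell to keep alive.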
I have a Node server that I want to restart whenever it stops. For this, I set up a system cron job on an Ubuntu server to execute a simple bash script that checks the Node server every minute and logs its status. The cron job triggers the bash script and logs the relevant status every minute, but the Node server doesn't start (using simple Linux commands I can check whether the Node server is running or not). When I run the bash script manually, the Node server starts, so something is happening when cron executes the script. I am trying to fix this; meanwhile, any help will be appreciated.
Thanks
Instead of doing this with cron, I think you should use Supervisor to keep the process running. Check the Supervisor website.
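For reference, a minimal Supervisor program section might look like the following (the program name, file path, and log paths are made up for illustration). Supervisor starts the process itself and restarts it whenever it exits, which removes the need for the per-minute cron check:

```ini
; /etc/supervisor/conf.d/node-server.conf (hypothetical path and name)
[program:node-server]
command=/usr/bin/node /home/ubuntu/app/server.js
autostart=true
autorestart=true
stdout_logfile=/var/log/node-server.out.log
stderr_logfile=/var/log/node-server.err.log
```

After adding the file, reload Supervisor (supervisorctl reread && supervisorctl update) and the process is managed from then on.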
So I am relatively new to CentOS, version 6.2. I have a service that needs to be monitored as a cron job, and if it stops it needs to be restarted. I have a few ideas on how to monitor it, but when it comes to getting it restarted, that's where I get stuck. I also know the PID of the service I want to monitor.
You can use supervise for this: http://cr.yp.to/daemontools/supervise.html
Put it in your crontab to launch on system start:
@reboot supervise foo
I have used various snippets of code to build a system which:
1. listens to a port for incoming TCP data (using a Perl script) and writes this data to a log file
2. calls and runs a PHP script to consume the log file and write it to an RDS MySQL DB
I have a GPS device configured to send the data to the elastic IP of my AWS EC2 Server
It works fine, and when I run it via SSH with
perl portlistener.pl
it does its job fine, happily working away.
The only way I can stop the script is by closing the terminal window, ending my SSH session. What I need is to keep it running at all times, and to implement start, stop, and restart facilities. Do I need to create a daemon?
I know PHP, but until now have never worked with Perl. I'm also not that familiar with command line, other than installing updates, navigating and editing single files etc.
Thanks in advance for any help, or for pointing me in the right direction.
Solved it, I think!
Installed CPAN: http://www.thegeekstuff.com/2008/09/how-to-install-perl-modules-manually-and-using-cpan-command/
Using CPAN, installed Daemon::Control.
Then created a new program as below (portlistener_launcher.pl), and ran it as the superuser.
#!/usr/bin/perl
use strict;
use warnings;
use Daemon::Control;

$ENV{PHP_FCGI_CHILDREN} = 10;
$ENV{PHP_FCGI_MAX_REQUESTS} = 1000;

Daemon::Control->new({
    name        => 'portlistener',
    program     => 'perl /home/ec2-user/portlistener/portlistener.pl',
    fork        => 2,
    pid_file    => '/var/run/portlistener.pid',
    stdout_file => '/var/log/portlistener.log',
    stderr_file => '/var/log/portlistener.log',
})->run;
There's probably a neater way of doing it, but it seems to work, and I can stop/start it like so:
perl portlistener_launcher.pl start
If the terminal session is the only problem, you can use the nohup command, e.g.
http://linux.101hacks.com/unix/nohup-command/
To terminate the listener you can kill an appropriate running process or processes.
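A sketch of that approach, with `sleep` standing in for the listener so it can be run anywhere (the .pid file convention is an assumption — nohup does not create it for you):

```shell
#!/bin/bash
# Detach the long-running command from the terminal and remember its PID.
# The real invocation would be: nohup perl portlistener.pl >> listener.log 2>&1 &
nohup sleep 30 >/dev/null 2>&1 &
echo $! > /tmp/listener.pid   # $! is the PID of the backgrounded command

# Later, to stop it without hunting through ps output:
kill "$(cat /tmp/listener.pid)"
```

Storing the PID at launch avoids the process-name matching problem mentioned earlier, since you kill exactly the process you started.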
An implementation of a daemon does not ensure it runs permanently. It can crash or be killed by someone. To guarantee that a daemon runs permanently, you must implement 24x7 monitoring of the daemon with automatic restarting.
I need to have some processes start when the computer boots and run forever. These are not actually daemons, i.e. they do not fork or daemonize, but they do not exit either. I am currently using cron to start them with the @reboot directive like this:
@reboot /path/to/myProcess >>/logs/myProcess.log
Could this cause any problems with the cron daemon? I thought I could try nohup ... & to detach the new process from cron, like this:
@reboot nohup /path/to/myProcess >>/logs/myProcess.log &
Is this required at all?
Is there some other, preferred method to start processes at system boot? I know all Linux distributions provide config files and means to run a program as a service but I am looking for a method that is not Linux distribution specific.
http://www.somacon.com/p38.php
This article answers my question. It suggests that running daemons this way spawns two extra processes, a cron and a shell process, that live for as long as your daemon.
I tested this on Linux and, following the instructions, I was able to get rid of the cron process but not the zombie shell process.
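One way to drop that leftover shell is to exec the process directly from the crontab line, so the shell cron spawns is replaced by your process instead of lingering as its parent (a sketch, using the path from the question; whether the extra shell appears at all can depend on which shell cron uses):

```shell
# crontab entry sketch: exec replaces cron's intermediate /bin/sh with
# myProcess itself, so no extra shell lives for the life of the process
@reboot exec /path/to/myProcess >>/logs/myProcess.log 2>&1
```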