Using flock to lock file for X time - linux

I am a Linux novice and currently developing a security system using Raspberry Pi 3 and MotionEye. To get notifications via e-mail, I am attempting to create a custom shell script that will send an e-mail if there is motion, lock for X minutes, then send another e-mail if there is still motion. However, I am having some difficulties.
I created a simple Python script named "send_email.py" using SMTP that works perfectly fine for sending e-mails when I execute it via command line.
The shell script (named "flock_email.sh") is where I run into trouble, in a few respects:
Whenever I run flock_email.sh, it completely overwrites send_email.py. I have tried to change the file permissions so it is only executable by the user, but it still gets overwritten.
The flock command/function does not work as I intended, or at all. I have looked all over the internet and tried several different versions, but none have worked. I have attached the various flock_email.sh scripts I have tried.
Not necessarily a problem, but I am a bit confused about what my "shebang" line should be. For flock_email.sh I have it as "!#/bin/bash", which I believe makes the script executable, at least according to this. Do I still need to change the permissions via the command "chmod +x flock_email.sh"? The path is /home/pi, which is essentially the home directory of my Pi.
The different solutions I have tried:
In flock_email.sh, I have tried to directly change the file permissions to read-only instead of using flock, having it sleep, then changing the permissions back to allow execution of the file.
Multiple flock_email.sh implementations, as attached.
To summarize:
I need to execute send_email.py before locking the file flock_email.sh.
Once locked, it needs to stay locked for X time.
Does anyone have any pointers or suggestions? I have spent well over 15 hours tinkering with this and feel like I have gotten nowhere!
send_email.py:
#!/usr/bin/env python
import smtplib

def send_email():
    content = "Message I want to send to specified e-mail."
    sender = "e-mail account that will send message"
    pword = "password of sender"
    receiver = "e-mail account that will receive message"
    mail = smtplib.SMTP("smtp.gmail.com", 587)
    mail.ehlo()
    mail.starttls()
    mail.login(sender, pword)
    mail.sendmail(sender, receiver, content)
    mail.close()

send_email()
flock_email.sh (1):
#!/bin/bash
(
    python /home/pi/send_email.py
    flock -e 200
    sleep [time in seconds]
)
flock_email.sh (2):
#!/bin/bash
(
    python /home/pi/send_email.py
    exec 3>/home/pi/send_email.py
    flock -x 3
    sleep [time in seconds]
    exec 3>&-
)
flock_email.sh (3):
#!/bin/bash
python /home/pi/send_email.py
chmod 444 /home/pi/send_email.py # modify to read only for all
sleep [time in seconds]
chmod 755 /home/pi/send_email.py # modify to rwx for owner, r-x for others

The reason man flock and all the posts say to use > is that you're supposed to use a dedicated lock file, typically in /var/lock:
#!/bin/bash
exec 3> /var/lock/motionmail
flock -ne 3 || exit
python /home/pi/send_email.py
sleep 3600
This additionally fixes the fact that you were sending your email unconditionally, before ever checking the lock, and it aborts new emails instead of queueing them all up.
You choose the lock file name based on the scope you want your lock to have:
If you only want one email per hour, you can use something like /var/lock/motionmail because there's just one per system.
If you want one email for each user per hour, you can use $HOME/.motionmail.lock because there's just one per user (a sketch of this variant follows below).
You can use /home/pi/send_email.py if you want, with <, but this implies that you want one email per hour not only for each user, programming language and copy of the script, but also for every version of the file each time you hit save and your editor replaces it*
* Editors differ in whether they replace or overwrite a file
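For the per-user case, a minimal sketch along the same lines (the lock file name and the one-hour sleep are placeholders to adapt):
#!/bin/bash
# Same pattern as above, but with a per-user lock file.
exec 3> "$HOME/.motionmail.lock"
flock -ne 3 || exit            # another copy already holds the lock: skip this email
python /home/pi/send_email.py
sleep 3600                     # hold the lock for an hour; it is released when the script exits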

Related

Suppressing output after SSH to another server

When I SSH to another server there are some blurbs of text that always output when you log in (whether it's SSH or just logging in to its own session).
"Authentication banner" is what it prints out every time I either scp a file over or SSH into it.
My code iterates through a list of servers and sends a file; each time it does that it outputs a lot of text I'd like to suppress.
This code loops through each server, printing out what it's doing.
for(my $j=0; $j < $#servName+1; $j++)
{
    print "\n\nSending file: $fileToTransfer to \n$servName[$j]:$targetLocation\n\n";
    my $sendCommand = `scp $fileToTransfer $servName[$j]:$targetLocation`;
    print $sendCommand;
}
But then it comes out like this:
Sending file: /JacobsScripts/AddAlias.pl to
denamap2:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
Sending file: /JacobsScripts/AddAlias.pl to
denfpev1:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
I haven't tried much. I saw a few forums that mention sending the output to a file and then deleting it, but I don't know if that'll work for my situation.
NOTE   This answer assumes that on the system in question the ssh/scp messages go to STDERR stream (or perhaps even directly to /dev/tty)†, like they do on some systems I test with -- thus the question.
If not, then ikegami's answer of course takes care of it: just don't print the captured STDOUT. But even in that case, I also think that all the ways shown here are better for capturing output (except for the one involving the shell), especially when both streams are needed.
These prints can be suppressed by configuring the server, or perhaps via a .hushlogin file, but then that clearly depends on the server management.
Otherwise, yes, you can redirect standard streams to files or, better yet, to variables, which makes the overall management easier.
Using IPC::Run
use IPC::Run qw(run);
my ($file, $servName, $targetLocation) = ...
my @cmd = ("scp", $file, $servName, $targetLocation);
run \@cmd, '1>', \my $out, '2>', \my $err;
# Or redirect both to one variable
# run \@cmd, '>&', \my $out_err;
This powerful and well-rounded library allows great control over the external processes it runs; it provides almost a mini shell.
Or using the far simpler, and very handy Capture::Tiny
use Capture::Tiny qw(capture);
...
my ($out, $err, $exit) = capture { system @cmd };
Here output can be merged using capture_merged. Working with this library is also clearly superior to builtins (qx, system, pipe-open).
In both cases, then, inspect the $out and $err variables, which is far less cut-and-dried since the error messages depend on your system. For some errors the library routines die/croak, but for others they don't and merely print to STDERR. It is probably more reliable to use the other tools these libraries provide for detecting errors.
The ssh/scp "normal" (non-error) messages may print to either the STDERR or the STDOUT stream, or may even go directly to /dev/tty,† so they can be mixed with error messages.
Given that the intent seems to be to intersperse these scp commands with other prints, I'd recommend either of these two ways over the others below.
Another option, which I consider least satisfactory overall, is to use the shell to redirect output in the command itself, either to separate files
my ($out_file, $err_file) = ...
system("@cmd 2> $err_file 1> $out_file") == 0
    or die "system(@cmd...) error: $?";  # see "system" in perldoc
or, perhaps for convenience, both streams can go to one file
system("@cmd > $out_err_file 2>&1") == 0 or die $?;
Then inspect the files for errors and remove them if there is nothing remarkable. Or, shell redirection can be used as in the question, but capturing all output
my $out_and_err = qx(@cmd 2>&1);
Then examine the (possibly multiline) variable for errors.
Or, instead of dealing with individual commands, we can redirect the streams themselves to files for the duration of a larger part of the program
use warnings;
use strict;
use feature 'say';
# Save filehandles ('dup' them) so to be able to reopen later
open my $saveout, ">&STDOUT" or die "Can't dup STDOUT: $!";
open my $saveerr, ">&STDERR" or die "Can't dup STDERR: $!";
my ($outf, $errf) = qw(stdout.txt stderr.txt);
open *STDOUT, ">", $outf or die "Can't redirect STDOUT to $outf: $!";
open *STDERR, ">", $errf or die "Can't redirect STDERR to $errf: $!";
my ($file, $servName, $targetLocation) = ...
my @cmd = ("scp", $file, $servName, $targetLocation);
system(@cmd) == 0
    or die "system(@cmd) error: $?";  # see "system" in perldoc
# Restore standard streams when needed for normal output
open STDOUT, '>&', $saveout or die "Can't reopen STDOUT: $!";
open STDERR, '>&', $saveerr or die "Can't reopen STDERR: $!";
# Examine what's in the files (errors?)
I use system instead of qx (the operator form of backticks) since there is no need for the output from scp. Most of this is covered in the open documentation; search SO for specifics.
It'd be nice to be able to reopen streams to variables but that doesn't work here
† This is even prescribed ("allowed") by POSIX
/dev/tty
In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected. It can also be used for applications that demand the name of a file for output, when typed output is desired and it is tiresome to find out what terminal is currently in use.
Courtesy of this superuser post, which has a substantial discussion.
You are capturing the text, then printing it out using print $sendCommand;. You could simply remove that statement.

Trying to use SCP to copy multiple files from remote to local using script

So I'll start with the fact that I'm relatively new to linux scripting, so if I am going about it the wrong way, let me know.
I am creating a script that is meant to copy logs from many different hosts onto the local machine depending on user input.
One of the functions I am writing requires the use of scp. Each time you use the scp command against a particular remote host, you have to enter your password. So to save the user time, I want to copy, in a single scp call, every file on that host that the user wants.
I know I can do this using scp user@Remoteipaddress:'directory/file1 directory/file2' local/machine/directory
I have it running a bunch of loops (more than I feel it needs, so if there is a better way, let me know).
The portion with the scp command is my main issue. The code looks fine if I quote it and echo it. I can even copy and paste the echoed result and it will work, but if I let the script do it, I receive bash: -c: line 0: unexpected EOF while looking for matching `''
edit: $app is a static number created in another portion of the program
I added a couple of things that seemed to be missing. I'm trying to piece this together from multiple areas of the program without making it messier than it already is.
# assigns different remote host paths to array variable
until [ $scriptCounter == $app ]
do
    scpScript[$scriptCounter]="user@${ipAddress[$ipCounter]}:'"
    ((++ipCounter))
    ((++scriptCounter))
done

# $app value gets set by another function - typically 3 if that matters
scpCount=0
DayCounter=0
ipScriptCounter=0

until [ $Count == $app ]
do
    ((++scpCount))
    mkdir ~/MyDocuments/Logs/$3/app$scpCount
    echo "Creating ~/MyDocuments/Logs/${3}/app${scpCount}"

    # there is one log for each day, $totalDiffDays is the total amount of days
    # $DayCounter is set and gets marked up every time it goes through the loop
    # until it matches the total days
    until [ $DayCounter == $totalDiffDays ]
    do
        scpPath[$DayCounter]="/var/log/docker/theLog*${datePath[$DayCounter]}*"
        noSpaceSCP[$DayCounter]=${scpPath[$DayCounter]//[[:blank:]]/}
        ((++DayCounter))
    done

    fullSCPscript[$scpCount]="${scpScript[$ipScriptCounter]}${noSpaceSCP[*]}'"

    # this portion I have an issue with
    scp ${fullSCPscript[$scpCount]} ~/MyDocuments/Logs/$3/app$scpCount

    # this ups the array counter for my ipaddress array
    ((++ipScriptCounter))

    # how I'm zeroing out the $DayCounter so it will run through again for other
    # nodes but with a different IP address
    until [ $DayCounter == "0" ]
    do
        ((--DayCounter))
    done
done
Example output I get when I echo the line with the scp command:
scp user@10.10.200.100:'/var/log/docker/theLog*2018-07-26* /var/log/docker/theLog*2018-07-27*' /home/mobaxterm/MyDocuments/Logs/care3/app1
I'm sorry that this looks messy, but overall I'm trying to build the directory that it's grabbing the log from, and if there are multiple days, just add onto the scp command. I'm doing this as opposed to running a whole separate command, to save the user from entering their password 5 times if they need 5 files. Instead they would only have to enter it once.
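As an aside, here is a minimal sketch (with made-up paths) of why the echoed command works when pasted but fails from the script: quote characters stored inside a variable are passed along as literal data, never re-parsed as quoting, so the remote shell ends up seeing an unmatched quote.
# Hypothetical reproduction of the quoting problem described above.
files="'/var/log/docker/a.log /var/log/docker/b.log'"
echo scp user@host:$files .    # looks correct when echoed
scp user@host:$files .         # word-splits into user@host:'/var/log/docker/a.log and /var/log/docker/b.log' ,
                               # so the remote shell reports: unexpected EOF while looking for matching `''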

Linux Read - Timeout after x seconds *idle*

I have a (bash) script on a server whose administration I have inherited, and I have recently discovered a flaw in the script that nobody has brought to my attention.
After I discovered the issue, others told me that it has been irritating them, but they had never reported it (great...).
So, the script follows this concept
#!/bin/bash
function refreshscreen(){
    # This function refreshes a "statistics screen"
    ...
    echo "Enter command to override update"
    read -t 10 variable
}
This script refreshes a statistics screen, and allows the user to stall the update by entering one of the commands built into a case statement. However, the read times out (read -t 10) after 10 seconds, regardless of whether the user is typing.
Long story short, is there a way to prevent read from timing out if the user is actively typing a command? The best case scenario would be a "time out after SEC idle/inactive seconds" as opposed to just timing out after X seconds.
I have thought about running a background script at the end of the cycle before the read command pauses the screen to check for inactivity, but have not found a way to make that command work.
You can use read in a loop, reading one character at a time, and adding it to a final read string. This would then give the user some timeout amount of time per character rather than per command. Here's a sample function you might be able to incorporate into your script that shows what I'm talking about:
read_with_idle_timeout() {
    local input=""
    read -t 10 -N 1 variable
    while [ ! -z "$variable" ]
    do
        input+=$variable
        read -t 10 -N 1 variable
    done
    echo "Read: $input"
}
This will give the user 10 seconds to type each character. If they stop typing, you'll get as much of the command as they had started typing before the timeout occurred, and then your case statement can handle it. Perhaps you can store the final string in a global variable, or just put this code directly into your other function.
If you need more than one word, since read breaks on $IFS, you could call this function multiple times until you get all the input you're expecting.
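A hypothetical usage sketch along those lines (the "pause" label and the prefix stripping are illustrative, not part of the original script): capture the function's output, then hand it to the existing case statement.
input=$(read_with_idle_timeout)
input=${input#"Read: "}                      # strip the "Read: " prefix the function prints
case "$input" in
    pause) echo "screen update stalled" ;;   # example command label
    "")    refreshscreen ;;                  # nothing typed before the idle timeout
esac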
I have searched for a simple solution that will do the following:
timeout after 10 seconds, if there is no user input at all
the user has infinite time to finish his answer if the first character was typed within the first 10 sec.
This can be implemented in two lines as follows:
read -N 1 -t 10 -p "What is your name? > " a
[ "$a" != "" ] && read b && echo "Your name is $a$b" || echo "(timeout)"
In case the user waits 10 sec before he enters the first character, the output will be:
What is your name? > (timeout)
If the user types the first character within 10 sec, he has unlimited time to finish the task. The output will look as follows:
What is your name? > Oliver
Your name is Oliver
Caveat: the first character is not editable once it has been typed, while all other characters can be edited (backspace and re-type). Any ideas for a simple solution?

Find out ID of 'at' job from within it

When I schedule a job with 'at' it is assigned an id, viz:
job 44 at 2014-01-28 17:30
When that job runs I would like to get at that id from within it. This is on CentOS, FWIW. I have established that no environment variable contains the ID. When the Perl code in that job runs, I would like it to be able to print the job ID (44 in this example).
Yes, I know that atq shows an = next to jobs that are executing, but there might be more than one of those at a time.
I could do something like pass a unique argument to the job when scheduling it, capture the ID, save that and the argument to a file somewhere, read that from the job. That's a lot of work I'd rather not go to if I don't have to, and it seems like this should be simple but I'm drawing a blank.
What follows was figured out by reading the sources of at-3.14. The way at puts the job id and the run time into the file name should be similar for any version, but I haven't checked this.
To begin with, at encodes the job id and the time when a particular job should be run into the name of the file describing the job. The file name has the format aJJJJJTTTTTTTT, where JJJJJ is a 5-character hexadecimal string, the job id, and TTTTTTTT is an 8-character hexadecimal string, the time when the job should be run. The time is stored as seconds from the epoch.
At jobs are run by feeding a job description file as the standard input to sh -c. Fortunately the Linux kernel provides a symbolic link, /proc/self/fd/0, which will point to the standard input of the process currently being executed (play with ls -l /proc/self/fd/0 in case you need to assure yourself that this indeed is so).
A file describing a job has been deleted by the time the job is run. However, the file is still available to the kernel because it was duplicated with dup(2) before being used as the standard input for the job. So, actually, we are resolving a symbolic link to a file name which is not visible any more. In the Perl script at the end we need to take this into account, as readlink will return something like /foo/bar/baz (deleted) instead of /foo/bar/baz. And we're interested in just the file name, which has all the information we need.
The reason the symbolic link points to a deleted file is that the at daemon unlinks the original before executing the job. Unlinking is done only after creating a copy, a hard link, whose name begins with = instead of a. With this the at daemon tries to ensure there will be only one copy of a job running: the daemon will not execle(2), i.e. it will bail out, should the link(2) fail. Because the original file has been subject to open(2) and dup(2), the inode is still there for the kernel to use, since it still has hard links pointing to it.
After a fairly long and possibly confusing introduction, here is how to put it all together:
#!/usr/bin/perl
use strict;
use warnings;

my $job_file = readlink("/proc/self/fd/0");
if (index($job_file, " ") > 0) {
    $job_file = substr($job_file, 0, index($job_file, " ") - 1);
}
my $tmp = substr($job_file, rindex($job_file, "/") + 1);
$tmp =~ s/^a([0-9a-f]{5})[0-9a-f]+/$1/;
my $job_id = hex($tmp);
if ($job_id > 0) {
    printf("My AT job id is %d.\n", $job_id);
}
# end of file.
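For reference, a rough bash equivalent of the same extraction, assuming the aJJJJJTTTTTTTT naming described above (an untested sketch, to be run from inside an at job):
#!/bin/bash
# Resolve the job file that at fed us on stdin, then pull out the 5-character hex job id.
job_file=$(readlink /proc/self/fd/0)
job_file=${job_file%% (deleted)*}   # drop the " (deleted)" suffix readlink may report
name=${job_file##*/}                # basename, e.g. a00044530c7408
hex_id=${name:1:5}                  # the JJJJJ part, right after the leading 'a'
echo "My AT job id is $((16#$hex_id))."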

Passing key material to openssl commands

Is it safe to pass a key to the openssl command via the command line parameters in Linux? I know it nulls out the actual parameter, so it can't be viewed via /proc, but, even with that, is there some way to exploit that?
I have a Python app in which I want to use OpenSSL to do the encryption/decryption through stdin/stdout streaming in a subprocess, but I want to know that my keys are safe.
Passing the credentials on the command line is not safe. It will result in your password being visible in the system's process listing - even if openssl erases it from the process listing as soon as it can, it'll be there for an instant.
openssl gives you a few ways to pass credentials in - the man page has a section called "PASS PHRASE ARGUMENTS", which documents all the ways you can pass credentials into openssl. I'll explain the relevant ones:
env:var
Lets you pass the credentials in an environment variable. This is better than using the process listing, because on Linux your process's environment isn't readable by other users by default - but this isn't necessarily true on other platforms.
The downside is that other processes running as the same user, or as root, will be able to easily view the password via /proc.
It's pretty easy to use with python's subprocess:
import copy, os, subprocess

new_env = copy.deepcopy(os.environ)
new_env["MY_PASSWORD_VAR"] = "my key data"
p = subprocess.Popen(["openssl", ..., "-passin", "env:MY_PASSWORD_VAR"], env=new_env)
fd:number
This lets you tell openssl to read the credentials from a file descriptor, which it will assume is already open for reading. By using this you can write the key data directly from your process to openssl, with something like this:
r, w = os.pipe()
p = subprocess.Popen(["openssl", ..., "-passin", "fd:%i" % r], preexec_fn=lambda:os.close(w))
os.write(w, "my key data\n")
os.close(w)
This will keep your password secure from other users on the same system, assuming that they are logged in with a different account.
With the code above, you may run into issues with the os.write call blocking. This can happen if openssl waits for something else to happen before reading the key in. This can be addressed with asynchronous i/o (e.g. a select loop) or an extra thread to do the write()&close().
One drawback of this is that it doesn't work if you pass close_fds=True to subprocess.Popen. Subprocess has no way to say "don't close one specific fd", so if you need to use close_fds=True, then I'd suggest using the file: syntax (below) with a named pipe.
file:pathname
Don't use this with an actual file to store passwords! That should be avoided for many reasons, e.g. your program may be killed before it can erase the file, and with most journalling file systems it's almost impossible to truly erase the data from a disk.
However, if used with a named pipe with restrictive permissions, this can be as good as using the fd option above. The code to do this will be similar to the previous snippet, except that you'll need to create a fifo instead of using os.pipe():
pathToFifo = my_function_that_securely_makes_a_fifo()
p = subprocess.Popen(["openssl", ..., "-passin", "file:%s" % pathToFifo])
fifo = open(pathToFifo, 'w')
print >> fifo, "my key data"
fifo.close()
The print here can have the same blocking I/O problems as the os.write call above; the resolutions are also the same.
No, it is not safe. No matter what openssl does with its command line after it has started running, there is still a window of time during which the information is visible in the process' command line: after the process has been launched and before it has had a chance to null it out.
Plus, there are many ways for an accident to happen: for example, the command line gets logged by sudo before it is executed, or it ends up in a shell history file.
Openssl supports plenty of methods of passing sensitive information so that you don't have to put it in the clear on the command line. From the manpage:
pass:password
the actual password is password. Since the password is visible to utilities (like 'ps' under Unix) this form should only be used where security is not important.
env:var
obtain the password from the environment variable var. Since the environment of other processes is visible on certain platforms (e.g. ps under certain Unix OSes) this option should be used with caution.
file:pathname
the first line of pathname is the password. If the same pathname argument is supplied to -passin and -passout arguments then the first line will be used for the input password and the next line for the output password. pathname need not refer to a regular file: it could for example refer to a device or named pipe.
fd:number
read the password from the file descriptor number. This can be used to send the data via a pipe for example.
stdin
read the password from standard input.
All but the first two options are good.
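For reference, the same idea from a plain shell script; the enc command and file names here are made up for illustration, but the -pass fd: and -pass stdin forms are the documented ones:
#!/bin/bash
# Feed the passphrase to openssl on an extra file descriptor (a here-string on fd 3),
# so it never appears on the command line or in the process listing.
openssl enc -aes-256-cbc -salt -in secret.txt -out secret.enc -pass fd:3 3<<<"my key data"

# Or read it from standard input, piped in from another process.
printf '%s\n' "my key data" | openssl enc -aes-256-cbc -salt -in secret.txt -out secret.enc -pass stdin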
