Go mail sender inside a bash script, triggered by cron (Linux)

I have a Go program that sends mail to me.
package main

import (
    "log"
    "net/smtp"
    "os"
)

func main() {
    send(os.Args[2]+" program completed.", os.Args[1], os.Args[2], os.Args[3])
}

func send(body string, to string, s string, date string) {
    from := "foo@gmail.com"
    pass := "bar"
    msg := "From: " + from + "\n" +
        "To: " + to + "\n" +
        "Subject: " + s + " main\n\n" +
        body + "\n" + date

    err := smtp.SendMail("smtp.gmail.com:587",
        smtp.PlainAuth("", from, pass, "smtp.gmail.com"),
        from, []string{to}, []byte(msg))
    if err != nil {
        log.Printf("smtp error: %s", err)
        return
    }
    log.Print("sent, visit mail address: " + to)
}
And a bash script that runs it, reading addresses from a mailing-list file (prepared for future use):
Do things.....
.
.
filename='list'
while read line; do
    # reading each line of list
    echo "$(date '+%d-%m-%Y-%T') Mail sent to address : $line" >> ${now}-log.log
    ./mailsend ${line} foo ${date}
done < $filename
Do final things .....
As you can see, there are simple log statements to check that the program worked.
And there are no errors.
When I trigger the program by hand it works perfectly, but from the bash script triggered by a cron job it does not work.
Any suggestions?
edit1: Variables are solid. Manual triggering works as intended. When triggered by cron, I do not receive mails.

Maybe you need to cd to your directory before calling ./mailsend?
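Cron is the usual suspect here: jobs start in a different working directory (usually $HOME) with a minimal environment, and anything written to stdout/stderr is discarded unless cron mails it to you. A minimal sketch of the working-directory failure mode (the /tmp path is made up for the demo):

```shell
#!/bin/sh
# Simulate cron's different working directory: a relative path like
# ./mailsend resolves only when run from the project directory.
mkdir -p /tmp/mailer_demo
printf '#!/bin/sh\necho sent\n' > /tmp/mailer_demo/mailsend
chmod +x /tmp/mailer_demo/mailsend

cd /tmp/mailer_demo && ./mailsend    # prints "sent"
cd / && ./mailsend 2>/dev/null \
  || echo "not found: this is what happens under cron"
```

In the crontab entry, either cd to the script's directory first or use absolute paths throughout, and append something like >> /path/to/cron.log 2>&1 so failures are no longer silent.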

Related

Bash output limited to echo only

I am writing a bash script to handle my backups. I have created a message controller function that uses helper functions to handle email, logging and output.
So the structure is as:
message_call(i, "This is the output")
Message Function
-> Pass to email function
--> Build email file
-> Pass to log function
--> Build log file
-> Pass to echo function (custom)
--> Format and echo input dependent on $1 as a switch and $2 as the output message
When I echo, I want clean output consisting only of the messages passed to the echo function. I can point all output to /dev/null, but I am struggling to suppress everything except the echo command itself.
Current output sample:
craig@ubuntu:~/backup/functions$ sudo ./echo_function.sh i test
+ SWITCH=i
+ INPUT=test
+ echo_function
+ echo_main
+ echo_controller i test
+ '[' i == i ']'
+ echo_info test
+ echo -e '\e[32m\e[1m[INFO]\e[0m test'
[INFO] test
+ echo test
test
+ '[' i == w ']'
+ '[' i == e ']'
Above I ran the echo function alone; the only output I want is the [INFO] test line, and none of the other trace output in the sample.
If you have the line set -x in your script, comment it out. If not, try adding set +x at the top of your script.
If you want to hide all the output from everything except what you're explicitly doing in your echo function you could do something like this:
exec 7>&1 # save a copy of current stdout
exec >/dev/null # redirect everyone else's stdout to /dev/null
ls # output goes to /dev/null
echo My Message >&7 # output goes to "old" stdout
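To undo the redirection later in the same script, the saved descriptor can be copied back; a small self-contained sketch of the save/restore round trip:

```shell
#!/bin/sh
exec 7>&1              # save a copy of the current stdout on fd 7
exec >/dev/null        # from here on, plain output is discarded
echo "hidden"          # goes to /dev/null
echo "visible" >&7     # explicitly routed to the real stdout
exec 1>&7 7>&-         # restore stdout and close the spare descriptor
echo "back to normal"  # goes to the real stdout again
```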

I have a linux script that runs flawlessly from the command line but refuses to work as a cron

This script runs when I am logged in, but I cannot get it to work under a cron job. Any ideas?
The script monitors a web page for changes; if it detects any graphic/visual differences, it has to send an email.
I've read other posts about such problems and tried to implement some of the suggestions, but I still get errors, and even when there are changes I don't get the email. (FYI: the email addresses have been changed for confidentiality.)
#!/usr/local/cpanel/3rdparty/bin/perl
`SHELL=/bin/bash`;
`PATH=/usr/local/jdk/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:/root/bin`;
`cd /root/pagechange`;
`rm -f null`;
`wkhtmltoimage --no-images --height 3000 --javascript-delay 7500 http://www.google.com /root/pagechange/sys.jpg`;
`$dif=/usr/bin/compare -metric AE /root/pagechange/sys.jpg /root/pagechange/sys1.jpg null: 2>&1`;
print "1";
if ( $dif == 0 ) {
    print "They're equal\n";
} else {
    $to = 'me@domain.com';
    $from = 'you@domain.com';
    $subject = 'Page changes detected ';
    $message = "Get to work";
    print "2";
    open(MAIL, "|/usr/sbin/sendmail -t");
    # Email Header
    print MAIL "To: $to\n";
    print MAIL "From: $from\n";
    print MAIL "Subject: $subject\n\n";
    # Email Body
    print MAIL $message;
    print "3";
    close(MAIL);
    print "Email Sent Successfully\n";
}
print "4";
`cp /root/pagechange/sys.jpg /root/pagechange/sys1.jpg`;
print "5";
#`rm /root/pagechange/index.html`;
exit
Here's what that might look like in Perl. I only modified the most obvious problems:
#!/usr/bin/perl
use strict;
use warnings;
# Set the environment with the %ENV hash
# environment variables set in a subshell will not persist
$ENV{SHELL} = '/bin/bash'; # although you don't need this
$ENV{PATH} = ...;
# change directory
chdir '/root/pagechange' or die "Could not change directory: $!";
# remove a file with unlink
unlink 'null';
# list argument form of system
# this prevents arguments from being treated as special by the shell
# use full path to executable so you know which one you use
system '/path/to/wkhtmltoimage', qw(
    --no-images --height 3000 --javascript-delay 7500
    http://www.google.com /root/pagechange/sys.jpg
);
# save the result of the backticks to get the program output
my $dif = `/usr/bin/compare -metric AE /root/pagechange/sys.jpg /root/pagechange/sys1.jpg null: 2>&1`;
print "1";
if ( $dif == 0 ) {
    print "They're equal\n";
}
else {
    my $to = 'me@domain.com';
    my $from = 'you@domain.com';
    my $subject = 'Page changes detected';
    my $message = "Get to work";
    print "2";
    open(MAIL, "|/usr/sbin/sendmail -t");
    # Email Header
    print MAIL "To: $to\n";
    print MAIL "From: $from\n";
    print MAIL "Subject: $subject\n\n";
    # Email Body
    print MAIL $message;
    print "3";
    if( close(MAIL) ){
        print "Email Sent Successfully\n";
    }
    else { # close puts the error in $? instead of $! (until 5.22!)
        my $error = $? >> 8;
        print "Problem sending mail: $error";
    }
}
print "4";
# list argument form of system, again
system '/bin/cp', qw(/root/pagechange/sys.jpg /root/pagechange/sys1.jpg);
print "5";
# unlink '/root/pagechange/index.html';
Besides anything else, it has to be:
$dif=`/usr/bin/compare -metric AE /root/pagechange/sys.jpg /root/pagechange/sys1.jpg null: 2>&1`;
You're not assigning anything to $dif within your Perl script; the assignment happens inside the backtick shell, and whatever you do in that shell stays in that shell.
edit: That alone surely will not solve your problem; check the other comments.
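The same rule holds for any child process, not just Perl backticks: state changed in a subshell dies with it, as this plain-shell sketch shows:

```shell
#!/bin/sh
cd /
(cd /tmp)              # a subshell, like a Perl backtick
pwd                    # still prints /
sh -c 'X=changed'      # the assignment happens in the child only
echo "X=${X:-unset}"   # prints X=unset; the parent never sees it
```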

Get process executed by MONO on GNU/Linux

I am using Mono to execute an application. The ps command shows either the process name mono or cli.
How can I get the name of the application executed by Mono?
Example: mono myApp.exe
I want to know whether myApp.exe is currently executing. Ultimately I want to do this check programmatically.
Cheers.
You will usually run your program from a shell script, and there you can use the -a flag of exec:
#!/bin/bash
exec -a VisibleName mono program.exe
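To check that the renaming worked, the child's argv can be read back from /proc (Linux only; sleep stands in for mono in this sketch):

```shell
#!/bin/sh
# exec -a replaces argv[0], which is what ps and /proc/<pid>/cmdline
# report, so the process shows up under the chosen label
bash -c 'exec -a VisibleName sleep 5' &
pid=$!
sleep 1
tr '\0' ' ' < "/proc/$pid/cmdline"; echo   # VisibleName 5
kill "$pid" 2>/dev/null
```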
Here is a solution that uses .NET/Mono functions (no need to invoke native DLLs or pipe shell output):
List all processes.
If a process name contains mono or cli, read the command line of that process.
The command line contains all the information needed to identify your application.
public static int process_count(string application_name)
{
    int rc = 0;
    string cmdline = "";
    Process[] processlist = Process.GetProcesses();
    foreach (Process p in processlist)
    {
        cmdline = "";
        //Console.WriteLine("PID : " + p.Id + " " + p.ProcessName);
        if (p.ProcessName.Contains("mono"))
        {
            Console.WriteLine("PID : " + p.Id + " " + p.ProcessName + " " + p.MainModule.FileName);
            cmdline = File.ReadAllText("/proc/" + p.Id.ToString() + "/cmdline");
            Console.WriteLine("CMDLINE : " + cmdline);
        }
        if (p.ProcessName.Contains("cli"))
        {
            Console.WriteLine("PID : " + p.Id + " " + p.ProcessName + " " + p.MainModule.FileName);
            cmdline = File.ReadAllText("/proc/" + p.Id.ToString() + "/cmdline");
            Console.WriteLine("CMDLINE : " + cmdline);
        }
        if (cmdline.Contains(application_name))
        {
            Console.WriteLine("Found existing process: {0} ID: {1}", p.ProcessName, p.Id);
            rc++;
        }
    }
    return (rc);
}
How to do it manually:
Invoke ps -e to list all processes and find the mono or cli ones.
Look up the PID, e.g. 2845.
Display the command line: cat /proc/2845/cmdline
Note for newbies: this approach does not carry over to Windows, which has no /proc filesystem.
Cheers
Have a look at
getting-mono-process-info
The solution uses the same approach of reading /proc but has more options.
Cheers

Running sh/bash/python scripts with arguments using Go

I've been stuck on this one for a few days. I'm trying to run a bash script which runs off of its first argument (maybe I should give up all hope, haha).
Syntax for running the script can be assumed to be:
sudo bash script argument, or since it has og+x permissions it can be run as just sudo script argument
In go I'm running it using the following:
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    c := exec.Command("/bin/bash", "script "+argument)
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err)
    }
    os.Exit(0)
}
I have had absolutely no luck, I've tried loads of other variations as well for this...
exec.Command("/bin/sh", "-c", "sudo script", argument)
exec.Command("/bin/sh", "-c", "sudo script " + argument) (my first try)
exec.Command("/bin/bash", "-c", "sudo script" + argument)
exec.Command("/bin/bash", "sudo script", argument)
exec.Command("/bin/bash sudo script" + argument)
Most of these are met with a '/bin/bash sudo etc.: no such file or directory' error, or Error: exit status 1. I have even gone as far as writing a Python wrapper that looks for an argument and executes the bash script with subprocess. To rule out the script's path being the problem, I have tried all of the above with a direct path to the script rather than just the script name.
For the sake of my remaining hair, what am I doing wrong here? How can I better diagnose this problem so that I get more information than exit status 1?
You don't need to call bash/sh at all; simply pass each argument on its own. Also, to see the underlying error you have to capture the command's stderr. Here's a working example:
package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    c := exec.Command("sudo", "ls", "/tmp")
    stderr := &bytes.Buffer{}
    stdout := &bytes.Buffer{}
    c.Stderr = stderr
    c.Stdout = stdout
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err, "|", stderr.String())
    } else {
        fmt.Println(stdout.String())
    }
    os.Exit(0)
}
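The recurring 'no such file or directory' errors all come from the same mistake: each element passed to exec.Command becomes exactly one argv word, so "script " + argument is looked up as a single file name containing a space. A plain-shell sketch of the difference (the /tmp script is made up for the demo):

```shell
#!/bin/sh
# A throwaway script that reports the arguments it received
cat > /tmp/argdemo <<'EOF'
#!/bin/sh
echo "got $# argument(s): $*"
EOF
chmod +x /tmp/argdemo

/bin/sh /tmp/argdemo hello       # two words: prints "got 1 argument(s): hello"
/bin/sh "/tmp/argdemo hello" 2>/dev/null \
  || echo "one fused word is treated as a single file name"
```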

wget: reading from a list with id numbers and urls

In a .txt file, I have 500 lines containing an id number and a website homepage URL, in the following way
id_345 http://www.example1.com
id_367 http://www.example2.org
...
id_10452 http://www.example3.net
Using wget and the -i option, I am trying to download part of these websites recursively, but I would like to store the files in a way that is linked to the id number (storing the files in a directory named after the id number, or, the best option but I think the hardest to achieve, storing the html content in a single txt file named after the id number).
Unfortunately, the -i option cannot read a file like the one I am using.
How can I link the websites' content with their associated id numbers?
Thanks
P.S.: I imagine that to do this I have to go beyond plain wget and call it from a script. If so, please take into account that I am a newbie in this area (just some Python experience), and that in particular I am not yet able to follow the logic and code of bash scripts: step-by-step explanations for dummies are therefore very welcome.
Get site recursively with wget -P ... -r -l ... in Python, with parallel processing (gist is here):
import multiprocessing, subprocess, re

def getSiteRecursive(id, url, depth=2):
    cmd = "wget -P " + id + " -r -l " + str(depth) + " " + url
    subprocess.call(cmd, shell=True)

input_file = "site_list.txt"
jobs = []
max_jobs = multiprocessing.cpu_count() * 2 + 1
with open(input_file) as f:
    for line in f:
        id_url = re.compile("\s+").split(line)
        if len(id_url) >= 2:
            try:
                print "Grabbing " + id_url[1] + " into " + id_url[0] + " recursively..."
                if len(jobs) >= max_jobs:
                    jobs[0].join()
                    del jobs[0]
                p = multiprocessing.Process(target=getSiteRecursive, args=(id_url[0], id_url[1], 2,))
                jobs.append(p)
                p.start()
            except Exception, e:
                print "Error for " + id_url[1] + ": " + str(e)
                pass
for j in jobs:
    j.join()
Get single page into named file with Python:
import urllib2, re

input_file = "site_list.txt"
# open the site list file
with open(input_file) as f:
    # loop through lines
    for line in f:
        # split out the id and url
        id_url = re.compile("\s+").split(line)
        print "Grabbing " + id_url[1] + " into " + id_url[0] + ".html..."
        try:
            # try to get the web page
            u = urllib2.urlopen(id_url[1])
            # save the GET response data to the id file (appended with ".html")
            localFile = open(id_url[0] + ".html", 'wb+')
            localFile.write(u.read())
            localFile.close()
            print "got " + id_url[0] + "!"
        except:
            print "Could not get " + id_url[0] + "!"
            pass
Example site_list.txt:
id_345 http://www.stackoverflow.com
id_367 http://stats.stackexchange.com
Output:
Grabbing http://www.stackoverflow.com into id_345.html...
got id_345!
Grabbing http://stats.stackexchange.com into id_367.html...
got id_367!
Directory listing:
get_urls.py
id_345.html
id_367.html
site_list.txt
And if you prefer the command line or shell scripting, you can use awk to read each line (splitting at spaces by default), pipe the result to a loop, and execute each line with backticks:
awk '{print "wget -O " $1 ".html " $2}' site_list.txt | while read line ; do `$line` ; done
Breakdown...
awk '{print "wget -O " $1 ".html " $2}' site_list.txt |
Use the awk tool to read each line of the site_list.txt file, splitting each line at spaces (the default) into variables ($1, $2, $3, etc.), so that your id is in $1 and your url is in $2.
Use awk's print command to construct the wget call.
Add the pipe operator | to send the output to the next command.
Next we do the wget call:
while read line ; do `$line` ; done
Loop through the prior command's output line by line, storing each line in the $line variable, and execute it with the backtick operator, which interprets the text and runs it as a command.
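For completeness, the shell can do the splitting itself: read puts the first whitespace-separated field in one variable and the rest in another, with no awk or backticks needed (echo stands in for the real wget -O call in this sketch):

```shell
#!/bin/sh
printf '%s\n' \
  'id_345 http://www.example1.com' \
  'id_367 http://www.example2.org' > /tmp/site_list_demo.txt

while read -r id url; do
  # real use: wget -O "$id.html" "$url"
  echo "would fetch $url into $id.html"
done < /tmp/site_list_demo.txt
```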
