I am using this code and it works fine when I run it as root, but when I set root privileges on it (setuid) it throws up an error saying "insecure $ENV{PATH}" at the line system "perl $qtool -d $mqueue_directory*$queue_id";
My script is at /scripts/deferred.pl:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Untaint;

my $qtool = "/usr/local/bin/qtool.pl";
my $mqueue_directory = "/var/spool/mqueue";
my $messages_removed = 0;
my @write_array;
my ($to, $from, $subject);

# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);

sub wanted {
    # Is this a qf* file?
    if ( /^qf(\w{14})/ ) {
        my $qf_file = $_;
        my $queue_id = $1;
        my $deferred = 0;
        my $from_postmaster = 0;
        my $delivery_failure = 0;
        my $junk_mail = 0;
        my $temp;
        open (QF_FILE, $qf_file);
        while (<QF_FILE>) {
            $deferred = 1 if ( /^MTemporarily/ || /^Mhost map: lookup/ || /^MUser unknown/ );
            $from_postmaster = 1 if ( /^S<>$/ );    # restored from the working script further down
            $delivery_failure = 1 if ( /^H\?\?Subject: DELIVERY FAILURE: (User|Recipient)/ );
            if ( $deferred && $from_postmaster && $delivery_failure ) {
                $junk_mail = 1;
            }
            $temp = $qf_file . ':';
            if ($junk_mail) {
                while (<QF_FILE>) {
                    chomp;
                    if (/^rRFC822;/) {
                        $temp .= substr($_, 9);
                    }
                    if (/^H\?D\?Date:/) {
                        $temp .= ':' . substr($_, 10);
                        push @write_array, $temp . "\n";
                    }
                }
            }
        }
        close (QF_FILE);
        my $subqueue_id = substr($queue_id, 9);
        if ($junk_mail) {
            print "Removing $queue_id...\n";
            system "perl $qtool -d $mqueue_directory*$queue_id";
            $messages_removed++;
        }
    }
}

open (MYFILE, ">/scripts/mail.txt");
print MYFILE @write_array;
close (MYFILE);

$to      = 'yagya@mydomain.in';
$from    = 'system@mydomain.in';
$subject = 'deleted mails';

open (MAIL, "|/usr/sbin/sendmail -t");
print MAIL "To: $to\n";
print MAIL "From: $from\n";
print MAIL "Subject: $subject\n\n";
print MAIL @write_array;
close (MAIL);

print "\n$messages_removed total \"double bounce\" message(s) removed from ";
print "mail queue.\n";
Setuid programs automatically run in taint mode. It's all explained in perlsec, including the text in your error message. Often, if you paste the error message into a search engine, you'll quickly find out what to do about it. You might also see Insecure $ENV{ENV} while running with the -T switch.
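In practice the fix is a couple of lines near the top of the script, before any call to system(). A minimal sketch along the lines of the perlsec recipe (the exact PATH value is an assumption; use whichever directories your script actually needs):

$ENV{PATH} = '/bin:/usr/bin';               # a known-safe PATH satisfies taint mode
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};   # the other variables perlsec says to clear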
We use Excel as a configuration file for clients. However, our processes only run on Linux servers. We need to take a master file, update all the client workbooks with the new information, and commit to GitLab. The users then check it out, add their own changes, commit back to GitLab, and a process promotes the workbook to Server A.
This process works great using nodeJS (exceljs).
Another process on a different server uses Perl to pick up the workbook and then saves each sheet as a csv file.
The problem is, what gets written out is the data from the ORIGINAL worksheet and not the updated changes. This is true of both Perl and nodejs. Code for the Perl and nodejs xlsx-to-csv conversion is at the end of the post.
Modules Tried:
perl : Spreadsheet::ParseExcel; Spreadsheet::XLSX;
nodejs: node-xlsx, exceljs
I assume it has to do with Microsoft using XML inside the Excel wrapper: it keeps the old version as history, and since it has the original sheet name, it gets pulled instead of the updated latest version.
When I manually open in Excel, everything is correct with the new info as expected.
When I use "Save as..." instead of "Save" then the perl process is able to correctly write out the updated worksheet as csv. So our workaround is having the users always "Save as.." before committing their extra changes to GitLab. We'd like to rely on training, but the sheer number of users and clients makes trusting that the user will "Save AS..." is not practical.
Is there a way to replicate a "Save As..." during my promotion to Server A or at least be able to tell if the file had been saved correctly? I'd like to stick with excelJS, but I'll use whatever is necessary to replicate the "Save as..." which seems to recompile the workbook.
In addition to nodejs, I can use perl, python, ruby - whatever it takes - to make sure the csv creation process picks up the new changes.
Thanks for your time and help.
#!/usr/bin/env perl
use strict;
use warnings;
use Carp;
use Getopt::Long;
use Pod::Usage;
use File::Basename qw/fileparse/;
use File::Spec;
use Spreadsheet::ParseExcel;
use Spreadsheet::XLSX;

my %args = ();
my $help = undef;
GetOptions(
    \%args,
    'excel=s',
    'sheet=s',
    'man|help' => \$help,
) or die pod2usage(1);
pod2usage(1) if $help;
pod2usage(-verbose => 2, -exitval => 3, -output => \*STDOUT) unless $args{excel};

if (_getSuffix($args{excel}) eq ".xls") {
    my $file = File::Spec->rel2abs($args{excel});
    if (-e $file) {
        print _XLS(file => $file, sheet => $args{sheet});
    }
    else {
        print STDERR "Error: Can not find excel file. Please check for exact excel file name and location.\nError: This program is CASE SENSITIVE.\n";
        exit 1;
    }
}
elsif (_getSuffix($args{excel}) eq ".xlsx") {
    my $file = File::Spec->rel2abs($args{excel});
    if (-e $file) {
        print _XLSX(file => $file, sheet => $args{sheet});
    }
    else {
        print STDERR "Error: Can not find excel file. Please check for exact excel file name and location.\nError: This program is CASE SENSITIVE.\n";
        exit 1;
    }
}
else {
    exit 5;
}

sub _XLS {
    my %opts = (
        file  => undef,
        sheet => undef,
        @_,
    );
    my $aggregated = '';
    my $parser   = Spreadsheet::ParseExcel->new();
    my $workbook = $parser->parse($opts{file});
    if (!defined $workbook) {
        carp "Error: Workbook not found";
        exit 3;
    }
    foreach my $worksheet ($workbook->worksheet($opts{sheet})) {
        if (!defined $worksheet) {
            carp "\nError: Worksheet name doesn't exist in the Excel file. Please check the worksheet name. This program is CASE SENSITIVE.\n";
            exit 2;
        }
        my ($row_min, $row_max) = $worksheet->row_range();
        my ($col_min, $col_max) = $worksheet->col_range();
        foreach my $row ($row_min .. $row_max) {
            foreach my $col ($col_min .. $col_max) {
                my $cell = $worksheet->get_cell($row, $col);
                $aggregated .= $cell ? $cell->value() . ',' : ',';
            }
            $aggregated .= "\n";
        }
    }
    return $aggregated;
}

sub _XLSX {
    my %opts = (
        file  => undef,
        sheet => undef,
        @_,
    );
    my $aggregated_x = '';
    eval {
        my $excel = Spreadsheet::XLSX->new($opts{file});
        foreach my $sheet ($excel->worksheet($opts{sheet})) {
            if (!defined $sheet) {
                carp "Error: Worksheet not found";
                exit 2;
            }
            $sheet->{MaxRow} ||= $sheet->{MinRow};
            foreach my $row ($sheet->{MinRow} .. $sheet->{MaxRow}) {
                $sheet->{MaxCol} ||= $sheet->{MinCol};
                foreach my $col ($sheet->{MinCol} .. $sheet->{MaxCol}) {
                    my $cell = $sheet->{Cells}[$row][$col];
                    $aggregated_x .= $cell ? $cell->{Val} . ',' : ',';
                }
                $aggregated_x .= "\n";
            }
        }
    };
    if ($@) {
        exit 3;
    }
    return $aggregated_x;
}

sub _getSuffix {
    my $f = shift;
    my ($basename, $dirname, $ext) = fileparse($f, qr/\.[^\.]*$/);
    return $ext;
}
var xlsx = require('node-xlsx')
var fs = require('fs')
var obj = xlsx.parse(__dirname + '/test2.xlsx') // parses a file
var rows = []
var writeStr = ""
//looping through all sheets
for(var i = 0; i < obj.length; i++)
{
var sheet = obj[i]
//loop through all rows in the sheet
for(var j = 0; j < sheet['data'].length; j++)
{
//add the row to the rows array
rows.push(sheet['data'][j])
}
}
//creates the csv string to write it to a file
for(var i = 0; i < rows.length; i++)
{
writeStr += rows[i].join(",") + "\n"
}
//writes to a file, but you will presumably send the csv as a
//response instead
fs.writeFile(__dirname + "/test2.csv", writeStr, function(err) {
if(err) {
return console.log(err)
}
console.log("test.csv was saved in the current directory!")
The answer is: it's impossible. In order to update data inside a workbook that uses Excel functions, you must open it in Excel for the formulas to recalculate. It's that simple.
You could pull the workbook apart, create your own javascript functions, run the data through them, and then write it out, but there are so many possible issues that it is not recommended.
Perhaps one day Microsoft will release an Excel engine API for Linux. But it's still unlikely that such a thing would work via the command line without invoking the GUI.
I am developing a Perl script to query the PasteBin API using threads and DBD::SQLite to store information for later.
Upon running my script I get the following error:
DBD::SQLite::db do failed: near "day": syntax error at getpaste.pl line 113.
Thread 3 terminated abnormally: DBD::SQLite::db do failed: near "day": syntax error at getpaste.pl line 113.
Using my code to debug, here's what I see in thread 3:
enum _Days {
Monday,
Tuesday,
Wednesday,
Thursday,
Friday,
Saturday,
Sunday
}
class HeadingItem implements ListItem {
String _weekday;
final int time;
final DocumentReference reference;
set day(String weekday) {
var value = _Days.values[int.parse(weekday) - 1].toString();
var idx = value.indexOf(".") + 1;
var result = value.substring(idx, value.length);
_weekday = result;
}
String get day {
return _weekday;
}
HeadingItem.fromMap(Map<String, dynamic> map, {this.reference})
: assert(map['day'] != null),
assert(map['time'] != null),
day = map['day'], // 'day' isn't a field in the enclosing class <--- this is the error that im stuck on...
time = map['time'];
HeadingItem.fromSnapshot(DocumentSnapshot snapshot) : this.fromMap(snapshot.data, reference: snapshot.reference);
}
If I had to make an educated guess, it bombs out at String get day {
Here's a chunk of my code where this is relevant:
sub threadCheckKey {
    my ($url, $key) = @_;
    my $fullURL = $url . $key;
    my @flaggedRegex = ();
    my $date = strftime "%D", localtime;
    my @data = ();
    my $thread = threads->create(sub {
        my $dbConnection = openDB();
        open(GET_DATA, "curl -s " . $fullURL . " -k 2>&1 |") or die("$!");
        open(WRITE_FILE, ">", $key . ".txt") or die("$!");
        while (my $line = <GET_DATA>) {
            print WRITE_FILE $line;
            foreach my $regex (@regexs) {
                if ($line =~ m/$regex/) {
                    if (!grep { $_ eq $regex } @flaggedRegex) {
                        push(@flaggedRegex, $regex);
                    }
                }
            }
        }
        close(WRITE_FILE);
        close(GET_DATA);
        open(READ_FILE, $key . ".txt") or die("$!");
        while (my $line = <READ_FILE>) {
            push(@data, $line);
        }
        close(READ_FILE);
        my $updateRow = qq(UPDATE $tables[0] set data = \'@data\', date = \'$date\', regex = \'@flaggedRegex\' where pastekey = \'$key\');
        my $executeRowUpdate = $dbConnection->do($updateRow);
        if ($executeRowUpdate < 0) {
            print $DBI::errstr;
        }
Line 113 in this case is my $executeRowUpdate = $dbConnection->do($updateRow); knowing Perl, it's really complaining about the UPDATE statement just above it.
Where am I going wrong with this? I am a novice when it comes to interacting with anything SQL related.
You need to log the $updateRow that is generated, then look at it and see what is wrong with it. Without that, nobody knows.
The other issues ikegami notes in a comment above probably deserve new questions focused on their individual aspects. As you've discovered, https://codereview.stackexchange.com/ is not for code with errors. But given all of the injection issues, it might be time to try https://security.stackexchange.com/
If you fix those problems, maybe your error will disappear too. Or not, but it is worth trying.
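Logging aside, the usual way to make this kind of UPDATE robust is to let DBI quote the values for you with placeholders, which fixes both the syntax error (paste content full of quotes, like that Dart snippet) and the injection risk. A minimal sketch against the variables in the question, reusing $dbConnection, $tables[0], @data, $date, @flaggedRegex and $key as above (only values can be placeholders, so the table name is still interpolated):

my $updateRow = qq(UPDATE $tables[0] SET data = ?, date = ?, regex = ? WHERE pastekey = ?);
my $executeRowUpdate = $dbConnection->do($updateRow, undef, "@data", $date, "@flaggedRegex", $key);
print $DBI::errstr unless defined $executeRowUpdate;   # do() returns undef on failure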
I currently have a script that kicks off threads to perform various actions on several directories. A snippet of my script is:
#main
sub BuildInit {
    my $actionStr = "";
    my $compStr   = "";
    my @component_dirs;
    my @compToBeBuilt;
    foreach my $comp (@compList) {
        @component_dirs = GetDirs($comp);    #populates @component_dirs
    }
    print "Printing Action List: @actionList\n";

    #---------------------------------------
    #---- Setup Worker Threads ----------
    for ( 1 .. NUM_WORKERS ) {
        async {
            while ( defined( my $job = $q->dequeue() ) ) {
                worker($job);
            }
        };
    }

    #-----------------------------------
    #---- Enqueue The Work ----------
    for my $action (@actionList) {
        my $sem = Thread::Semaphore->new(0);
        $q->enqueue( [ $_, $action, $sem ] ) for @component_dirs;
        $sem->down( scalar @component_dirs );
        print "\n------>> Waiting for prior actions to finish up... <<------\n";
    }

    # Nothing more to do - notify the Queue that we're not adding anything else
    $q->end();
    $_->join() for threads->list();
    return 0;
}

#worker
sub worker {
    my ($job) = @_;
    my ( $component, $action, $sem ) = @$job;
    Build( $component, $action );
    $sem->up();
}

#builder method
sub Build {
    my ( $comp, $action ) = @_;
    my $cmd     = "$MAKE $MAKE_INVOCATION_PATH/$comp ";
    my $retCode = -1;
    given ($action) {
        when ("depend") { $cmd .= "$action >nul 2>&1" }    #suppress output
        when ("clean")  { $cmd .= $action }
        when ("build")  { $cmd .= 'l1' }
        when ("link")   { $cmd .= '' }    #add nothing; default is to link
        default         { die "Action: $action is unknown to me." }
    }
    print "\n\t\t*** Performing Action: \'$cmd\' on $comp ***" if $verbose;
    if ( $action eq "link" ) {
        # hack around potential race conditions -- will only be an issue during linking
        my $tries = 1;
        until ( $retCode == 0 or $tries == 0 ) {
            last if ( $retCode = system($cmd) ) == 2;    #compile error; stop trying
            $tries--;
        }
    }
    else {
        $retCode = system($cmd);
    }
    push( @retCodes, ( $retCode >> 8 ) );

    #testing
    if ( $retCode != 0 ) {
        print "\n\t\t*** ERROR IN $comp: $@ !! ***\n";
        print "\t\t*** Action: $cmd -->> Error Level: " . ( $retCode >> 8 ) . "\n";
        #exit(-1);
    }
    return $retCode;
}
The print statement I'd like to be thread-safe is: print "\n\t\t*** Performing Action: \'$cmd\' on $comp ***" if $verbose; Ideally, I would like to have this output first, and then each component that is having the $action performed on it would output in related chunks. However, this obviously doesn't work right now: the output is interleaved for the most part, with each thread spitting out its own information.
E.g.,:
ComponentAFile1.cpp
ComponentAFile2.cpp
ComponentAFile3.cpp
ComponentBFile1.cpp
ComponentCFile1.cpp
ComponentBFile2.cpp
ComponentCFile2.cpp
ComponentCFile3.cpp
... etc.
I considered executing the system commands using backticks, and capturing all of the output in a big string or something, then output it all at once, when the thread terminates. But the issue with this is (a) it seems super inefficient, and (b) I need to capture stderr.
Can anyone see a way to keep my output for each thread separate?
clarification:
My desired output would be:
ComponentAFile1.cpp
ComponentAFile2.cpp
ComponentAFile3.cpp
------------------- #some separator
ComponentBFile1.cpp
ComponentBFile2.cpp
------------------- #some separator
ComponentCFile1.cpp
ComponentCFile2.cpp
ComponentCFile3.cpp
... etc.
To ensure your output isn't interrupted, access to STDOUT and STDERR must be mutually exclusive. That means that between the time a thread starts printing and finishes printing, no other thread can be allowed to print. This can be done using Thread::Semaphore.
Capturing the output and printing it all at once allows you to reduce the amount of time a thread holds the lock. If you don't do that, you'll effectively make your system single-threaded, as each thread attempts to lock STDOUT and STDERR while one thread runs.
Other options include:
Using a different output file for each thread.
Prepending a job id to each line of output so the output can be sorted later.
In both of those cases, you only need to lock it for a very short time span.
# Once
my $mutex = Thread::Semaphore->new(); # Shared by all threads.
# When you want to print.
$mutex->down();
print ...;
STDOUT->flush();
STDERR->flush();
$mutex->up();
or
# Once
my $mutex = Thread::Semaphore->new(); # Shared by all threads.
STDOUT->autoflush();
STDERR->autoflush();
# When you want to print.
$mutex->down();
print ...;
$mutex->up();
You can utilize the blocking behavior of $sem->down if it attempts to decrease the semaphore counter below zero, as mentioned in perldoc perlthrtut:
If down() attempts to decrement the counter below zero, it blocks
until the counter is large enough.
So here's what one could do:
Initialize a semaphore with counter 1 that is shared across all threads
my $sem = Thread::Semaphore->new( 1 );
Pass a thread counter to worker and Build
for my $thr_counter ( 1 .. NUM_WORKERS ) {
    async {
        while ( defined( my $job = $q->dequeue() ) ) {
            worker( $job, $thr_counter );
        }
    };
}

sub worker {
    my ( $job, $counter ) = @_;
    my ( $component, $action ) = @$job;
    Build( $component, $action, $counter );
}
Go ->down and ->up inside Build (and nowhere else)
sub Build {
    my ( $comp, $action, $counter ) = @_;
    ...    # Execute all concurrently-executed code here
    $sem->down( 1 << ( $counter - 1 ) );
    print "\n\t\t*** Performing Action: \'$cmd\' on $comp ***" if $verbose;
    # Execute all sequential 'chunks' here
    $sem->up( 1 << ( $counter - 1 ) );
}
By using the thread counter to left-shift the semaphore counter, it guarantees that the threads won't trample on one another:
+-----------+---+---+---+---+
| Thread | 1 | 2 | 3 | 4 |
+-----------+---+---+---+---+
| Semaphore | 1 | 2 | 4 | 8 |
+-----------+---+---+---+---+
I've approached this problem differently in the past, by creating an IO thread, and using that to serialise the file access.
E.g.
my $output_q = Thread::Queue->new();

sub writer {
    open( my $output_fh, ">", $output_filename ) or die "Can't open $output_filename: $!";
    while ( my $line = $output_q->dequeue() ) {
        print {$output_fh} $line;
    }
    close($output_fh);
}
And within threads, 'print' by:
$output_q->enqueue("text_to_print\n");
Either with or without a wrapper, e.g. for timestamping statements if they're going to a log. (You probably want to timestamp when the line is queued, rather than when it is actually printed.)
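For completeness, a minimal sketch of the lifecycle under the same assumptions ($output_filename is whatever log file you choose): start the IO thread before the workers, and shut it down by ending the queue once the workers are joined. With a reasonably recent Thread::Queue, end() makes dequeue() return undef, so the writer loop falls through:

my $writer = threads->create(\&writer);
# ... create worker threads; they call $output_q->enqueue(...) ...
$_->join() for grep { $_ != $writer } threads->list();
$output_q->end();      # dequeue() now returns undef
$writer->join();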
I have a Perl program on serverA that needs to process data for around 500 DOM IPs. The DOM files are on serverB. For each DOM I need to download 6 files, apply some formulas, and insert the results into a MySQL DB. Each DOM takes approximately 2 minutes to download. I need to do this in the lowest time possible, because I have to run the process approximately every two hours.
Right now I am using multithreading:
my @threads;
for my $key (keys %dom)    ### Have all DOM IPs
{
    print "El key es $key\n";
    my %data = %{ $dom{$key} };
    my $t = threads->new( \&sub1, $postD, $preD, $key, $counter, %data );
    push( @threads, $t );
    if ( $counter == 40 )
    {
        foreach (@threads) {
            my $num = $_->join;
            print "done with $num\n";
        }
        $counter = 1;
        @threads = ();
    }
    $counter++;
}
foreach (@threads) {
    my $num = $_->join;
    print "done with $num\n";
}

sub sub1
{
    my ( $postD, $preD, $key, $num, %data ) = @_;
    my $status;
    $status = GetRelevantFiles( substr( $postD, 0, 8 ), substr( $preD, 0, 8 ), %data )
        if ( !defined($opt_f) );
    if ( ref($status) eq 'ERROR' )
    {
        warnNotify( $status->{'message'} );
    }
    return $num;
}
Sometimes it does not bring back all of the files.
Am I doing this right, or is there a better way to do it?
Thanks a lot for your help!
You might consider replacing your threads with Parallel::ForkManager
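For example, a minimal sketch of the same loop with forks instead of threads, reusing %dom, $postD, $preD and GetRelevantFiles() from the question (the 40 mirrors your batch size, but here it is a cap on concurrency rather than a fixed batch):

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(40);   # at most 40 downloads at once
for my $key (keys %dom) {
    $pm->start and next;                   # parent: move on to the next key
    my %data = %{ $dom{$key} };
    GetRelevantFiles(substr($postD, 0, 8), substr($preD, 0, 8), %data);
    $pm->finish;                           # child: exit
}
$pm->wait_all_children;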
As for not downloading all of the files:
Is there a consistency to the number not downloaded?
Are there any error messages?
I am using a Perl script which deletes the data from the mqueue folder for sendmail.
When I setuid that Perl script and try to run it as an ordinary user, it throws this message:
Insecure dependency in chdir while running setuid at /file/find
How can I solve this and successfully run the script with root privileges?
#!/usr/bin/perl
use strict;
use File::Find;

my $qtool = "/usr/local/bin/qtool.pl";
my $mqueue_directory = "/var/spool/mqueue";
my $messages_removed = 0;

# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);

sub wanted {
    # Is this a qf* file?
    if ( /^qf(\w{14})/ ) {
        my $qf_file = $_;
        my $queue_id = $1;
        my $deferred = 0;
        my $from_postmaster = 0;
        my $delivery_failure = 0;
        my $double_bounce = 0;
        open (QF_FILE, $_);
        while (<QF_FILE>) {
            $deferred = 1 if ( /^MDeferred/ );
            $from_postmaster = 1 if ( /^S<>$/ );
            $delivery_failure = 1 if ( /^H\?\?Subject: DELIVERY FAILURE: (User|Recipient)/ );
            if ( $deferred && $from_postmaster && $delivery_failure ) {
                $double_bounce = 1;
                last;
            }
        }
        close (QF_FILE);
        if ($double_bounce) {
            print "Removing $queue_id...\n";
            system "$qtool", "-d", $qf_file;
            $messages_removed++;
        }
    }
}

print "\n$messages_removed total \"double bounce\" message(s) removed from ";
print "mail queue.\n";
"Insecure dependency" is a Taint thing: http://perldoc.perl.org/perlsec.html.
Taint is being enforced because you have run the script setuid. You need to specify untaint as an %option key to File::Find:
http://metacpan.org/pod/File::Find
my %options = (
    wanted  => \&wanted,
    untaint => 1,
);
find(\%options, $mqueue_directory);
You should also have a look at the untaint_pattern in the POD for File::Find.
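For reference, the default is qr|^([-+@\w./]+)$| per the File::Find POD; if your queue file names contain characters outside that set, pass your own pattern alongside untaint, e.g.:

my %options = (
    wanted          => \&wanted,
    untaint         => 1,
    untaint_pattern => qr|^([-+@\w./]+)$|,   # File::Find's documented default
);
find(\%options, $mqueue_directory);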
You should build a program wrapper. On almost any Unix system, a script can never gain root privileges via the setuid bit alone. You can find a useful example here: http://www.tuxation.com/setuid-on-shell-scripts.html