When I try to copy a directory from my Linux home directory to a USB drive (pen drive), the following command works fine: cp -r /home/directoryname /media/usbname(pendrivename)
But I am looking for a command that copies the directory without my having to give the "usbname(pendrivename)".
Not sure I have understood your request, but a script like this should do the trick if your USB drive is always mounted under the same name:
#!/bin/bash
cp -r "$1" /media/usbname    # replace /media/usbname with your pen drive's mount point
If you save the script as ~/cpusb.sh, you can do:
chmod +x ~/cpusb.sh
echo "alias cpusb=~/cpusb.sh" >> ~/.bash_aliases
source ~/.bash_aliases
and then use cpusb when you want.
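For example, to copy the directory from the question (assuming the pen drive really is mounted at the path hard-coded in the script):
cpusb /home/directoryname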
I think I'd use mount to textually infer the connected USB drive's directory using sed or awk.
Maybe even save mount's result when nothing is connected and 'subtract' it from mount's result after connecting a new usb device.
Or even better, run your script before you connect the device:
- the script will run mount every second and will wait for a change in the result.
- when a change is detected, the newly added device is your usb.
Something like:
#!/bin/bash
mount_old="$(mount)"
mount_new="${mount_old}"
while [[ "${mount_new}" == "${mount_old}" ]]; do
sleep 1
mount_new="$(mount)"
done
# getting added line using sort & uniq
sort <(echo "${mount_old}") <(echo "${mount_new}") | uniq -u | awk '{ print $3 }'
# another way to achieve this using diff & grep
# diff <(echo "${mount_old}") <(echo "${mount_new}") | grep ">" | awk '{ print $4 }'
It's merely a sketch, you might need/want to refine it.
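To tie this back to the original question, once the loop above detects the new mount you could feed the mount point straight into cp. A minimal sketch of the lines to add after the loop (assuming the directory to copy is passed as $1):
# extract the newly added mount point and copy the directory onto it
usb_dir="$(sort <(echo "${mount_old}") <(echo "${mount_new}") | uniq -u | awk '{ print $3 }')"
cp -r "$1" "${usb_dir}"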
I was watching the infamous beginners' network pentesting video by Heath Adams and was attempting to make an nmap staging script.
Can someone explain why I am getting this irksome "permission denied" error on the line where I define the ports variable, even though my script has been running without a hitch up until that point?
Here is the staging script I am attempting:
#!/bin/bash
#creating a temp directory to store the output of initial scan
mkdir tempStager
#scanning with the given flags and storing the results
echo beginning nmap scan
nmap $* > tempStager/scan.txt
echo basic nmap scan complete
#retrieving open ports
cat tempStager/scan.txt |grep tcp |cut -d " " -f 1| tr -d "/tcp" > tempStager/ports.txt
sleep 2
ports=cat tempStager/ports.txt| awk '{printf "%s,",$0}' tempStager/ports.txt
ip=echo $* | awk 'NF{ print $NF }'
#scanning with -A
#echo ""
#echo starting nmap scan with -A
#nmap -A -p$ports $ip
#removing temp directory
#rm -r tempStager
ports=cat tempStager/ports.txt| awk '{printf "%s,",$0}' tempStager/ports.txt
assigns the variable the value "cat" and then tries to execute tempStager/ports.txt as a command. But the file is not executable (it doesn't have the x bit set), so it cannot be run; that is where the "permission denied" comes from.
ports only exists for the runtime of the (would-be) program; it is not available after the program has terminated (which it does immediately, because your shell fails to run it).
You are also giving awk both stdin (via the pipe) and a file name argument.
If you want to assign the output of awk to a variable, you must use command substitution:
ports="$(awk '{printf "%s,",$0}' tempStager/ports.txt)"
I have the following script in bash:
ssh user#1.1.1.1 "echo 'start'
mkdir -p /home/user/out
cp /tmp/big_file /home/user/out
echo 'syncing flash'
sync
while [[ $(cat /proc/meminfo | grep Dirty | awk '{print $2}') -ne 0 ]] ; do
echo \"$(cat /proc/meminfo)\"
sleep 1
sync
done
echo 'done'"
I have my host PC and a target PC which I am copying to. Before I run this script I have already scp'd a big file into /tmp on the target.
When I run this script it copies the file /tmp/big_file fine, but when it enters the loop to sync the flash and wait for Dirty in meminfo to reach zero, what I see is always Dirty: 74224 kB repeated in the loop.
However in a different ssh session logged in to the target I have it running:
watch -n1 "cat /proc/meminfo | grep Drity"
And I see this count down from ~74000kb to 0kB.
The difference is that the ssh session doing the watch is logged in as root and the ssh is logged in a user.
So I did the same test with the ssh shell logged in as user and I saw always 0kb in Drity...
Does this imply that the user can't read meminfo relating to the whole system? - how can I tell when the flash has sync'd as a non-root user?
Since the argument to ssh is in double quotes, variables and command substitutions are expanded locally on the client before sending the command, they're not done on the remote machine. Since they're substituted on the client, you'll obviously get the same result each time through the loop (because the client isn't looping).
You should either escape the $ characters so they're sent to the server, or put the command inside single quotes (but the latter makes it difficult to include single quotes in the command).
ssh user#1.1.1.1 "echo 'start'
mkdir -p /home/user/out
cp /tmp/big_file /home/user/out
echo 'syncing flash'
sync
while [[ \$(awk '/Dirty/ {print \$2}' /proc/meminfo) -ne 0 ]] ; do
cat /proc/meminfo
sleep 1
sync
done
echo 'done'"
There's also no need for cat /proc/meminfo and grep Dirty in the command substitution. awk can do pattern matching and take a filename argument.
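A quick way to see the difference in quoting behaviour (hostname here is just an illustrative command, not part of the original script):
# double quotes: $(hostname) is expanded locally, so the remote shell only echoes the client's hostname
ssh user@1.1.1.1 "echo $(hostname)"
# single quotes: the command substitution is sent verbatim and runs on the remote machine
ssh user@1.1.1.1 'echo $(hostname)'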
I have been monitoring the performance of my Linux server with ioping (had some performance degradation last year). For this purpose I created a simple script:
echo $(date) | tee -a ../sb-output.log | tee -a ../iotest.txt
./ioping -c 10 . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing I\/O\: /" | tee -a ../iotest.txt
./ioping -RD . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing seek rate\: /" | tee -a ../iotest.txt
etc
The script calls ioping in the folder /home/bench/ioping-0.6. Then it saves the output in readable form in /home/bench/iotest.txt. It also adds the date so I can compare points in time.
Unfortunately I am not an experienced programmer, and this version of the script only works if you first enter the right directory (/home/bench/ioping-0.6).
I would like to call this script from anywhere. For example by calling
sh /home/bench/ioping.sh
Googling this and reading about path variables was a bit over my head. I kept ending up with different versions of
line 3: ./ioping: No such file or directory
Any thoughts on how to upgrade my script so that it works from anywhere?
The trick is the shell's $0 variable. This is set to the path of the script.
#!/bin/sh
set -x
cd "$(dirname "$0")"
pwd
cd "${0%/*}"
pwd
If dirname isn't available for some reason, like some limited busybox distributions, you can try using shell parameter expansion tricks like the second one in my example.
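Applied to the script from the question, it might look something like this (a sketch using the paths and ioping flags from the question, and assuming ioping.sh is saved inside /home/bench/ioping-0.6 next to the ioping binary):
#!/bin/sh
# change to the directory the script lives in, so ./ioping and ../iotest.txt resolve correctly
cd "$(dirname "$0")" || exit 1
echo "$(date)" | tee -a ../sb-output.log | tee -a ../iotest.txt
./ioping -c 10 . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing I\/O\: /" | tee -a ../iotest.txt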
Isn't it obvious? ioping is not in . so you can't use ./ioping.
The easiest solution is to set PATH to include the directory where ioping is. Perhaps more robust: figure out the path to $0 and use that path as the location for ioping (assuming your script sits next to ioping).
If ioping itself depends on being run in a certain directory, you might have to make your script cd to the ioping directory before running it.
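A minimal sketch of both suggestions (the /home/bench/ioping-0.6 path is taken from the question; treat it as an example):
#!/bin/sh
# option 1: put the directory containing ioping on PATH
PATH="/home/bench/ioping-0.6:$PATH"
ioping -c 10 .
# option 2: derive the ioping location from the script's own path (works if the script sits next to ioping)
"$(dirname "$0")/ioping" -c 10 .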
There are other threads with this same topic but my issue is unique. I am running a bash script that has a function that sshes to a remote server and runs a sudo command on the remote server. I'm using the ssh -t option to avoid the requiretty issue. The offending line of code works fine as long as it's NOT being called from within the while loop. The while loop basically reads from a csv file on the local server and calls the checkAuthType function:
while read inputline
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done < configfile.csv
This is the function that sits at the top of the script (outside of any while loops):
function checkAuthType()
{
if [ $2 == linux ]; then
LINE=`ssh -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
fi
if [ $2 == unix ]; then
LINE=`ssh -n $1 'grep "PasswordAuthentication" /usr/local/etc/sshd_config | grep -v "yes\|Yes\|#"'`
fi
<more irrelevant code>
}
So, the offending line is the line that has the sudo command within the function. I can change the command to something simple like "sudo ls -l" and I will still get the "stdin is not a terminal" error. I've also tried "ssh -t -t" but to no avail. But if I call the checkAuthType function from outside of the while loop, it works fine. What is it about the while loop that changes the terminal and how do I fix it? Thank you one thousand times in advance.
Another option to try to get around the problem would be to redirect the file to a different file descriptor and force read to read from it instead.
while read inputline <&3
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done 3< configfile.csv
I am guessing you are testing against linux hosts. You should try adding the -n flag to your (linux) ssh command to avoid having ssh read from stdin; since ssh normally reads from stdin, the while loop is feeding it your csv.
UPDATE
You should (usually) use the -n flag when scripting with SSH, and the flag is typically needed for 'expected behavior' when using a while read-loop. It does not seem to be the main issue here, though.
There are probably other solutions to this, but you could try adding another -t flag to force pseudo-tty allocation when stdin is not a terminal:
ssh -n -t -t
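Applied to the linux branch of checkAuthType from the question, that would look something like this (a sketch; whether sudo can still prompt for a password this way depends on your setup):
LINE=$(ssh -n -t -t "$1" 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"')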
BroSlow's approach with a different file descriptor seems to work! Since the read command reads from fd 3 and not stdin, ssh (and hence sudo) still has a tty/pty as stdin.
# simple test case
while read line <&3; do
sudo -k
echo "$line"
ssh -t localhost 'sudo ls -ld /'
done 3<&- 3< <(echo 1; sleep 3; echo 2; sleep 3)
I'm writing a script that reads from an input file containing ~1000 lines of host info. The script sshes to each host, cds to the remote host's log directory, and cats the latest daily log file. The cat output is then piped back locally to do some pattern matching and statistics.
The simplified structure of my program is a while loop looks like this:
while read host
do
ssh -n name@$host "cd TO LOG DIR AND cat THE LATEST LOGFILE" | matchPattern
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA
where matchPattern is a function to match pattern and do statistics.
Right now I got 2 questions for this:
1) How do I find the latest daily log file remotely? The latest log file's name matches xxxx2012-05-02.log and is the most recently created one. Is it possible to do ls remotely and find the file matching that name? (I can do this locally but get jammed when appending it to the ssh command.) Another way I could come up with is to do
cat `ls -t | head -1` or
cat $(ls -t | head -1)
However, if I append this to ssh, it lists my local newest created file name. Can this expansion be made to happen remotely so that cat finds the correct file?
2) As there are nearly 1000 hosts, I'm wondering whether I can do this in parallel (say 20 ssh sessions at a time, starting the next 20 after the first 20 finish); appending & to each ssh does not seem to suffice.
Any ideas would be greatly appreciated!
Follow up:
Hi everyone, I finally found a crappy way to solve the first problem by doing this:
ssh -n name@$host "cd $logDir; cat *$logName" | matchPattern
Where $logName is "today's date.log" (2012-05-02.log). The catch is that I can only use local variables within the double quotes. Since my log file ends with 2012-05-02.log, and no other file ends with this suffix, I just blindly cat *2012-05-02.log on the remote machine and it cats the desired file for me.
For your first question,
ssh -n name@$host 'cat $(ls -t /path/to/log/dir/*.log | head -n 1)'
should work. Note single quotes around the remote command.
For your second question, wrap all the ssh | matchPattern | analyse stuff into its own function, then iterate over it by
outstanding=0
while read host
do
sshMatchPatternStuff &
outstanding=$((outstanding + 1))
if [ $outstanding -ge 20 ] ; then
wait    # a bare wait waits for all outstanding background jobs
outstanding=0
fi
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA
# wait for whatever is still running
wait
(I assume you're using bash.)
It may be better to separate the ssh | matchPattern | analyse stuff into its own script, and then use a parallel variant of xargs to call it.
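A sketch of that xargs variant (per_host.sh is a hypothetical wrapper holding the ssh | matchPattern | analyse pipeline for a single host; -P is a GNU xargs option):
awk -F, '{print $7}' "$FILEIN" | xargs -n 1 -P 20 ./per_host.sh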
For your second question, take a look at pdsh, the parallel distributed shell:
http://sourceforge.net/projects/pdsh/
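A possible invocation (hosts.txt holds one host per line; check the -w syntax for your pdsh version):
awk -F, '{print $7}' "$FILEIN" > hosts.txt
pdsh -w ^hosts.txt 'cd logdir; cat $(ls -t | head -1)' | grep pattern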
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1` | grep pattern'
This way you get the matching done on the remote server. If you prefer to transfer the full log file and do the matching locally, simply move the grep outside:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1`' | grep pattern
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1