I am trying to submit 10 jobs with the bsub command to a specific queue.
$ bsub -q alloc -P acc_CLASSNAME\
> -J "Array_#4[1-10]"\
> -o "Output.%I" -n 1\
> -W 2:00 $HOME/bash/count.sh 1
When I run this, I keep getting an error:
Run limit must be specified using bsub -W.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Request aborted by esub. Job not submitted.
I am not sure why I am getting this error, because I clearly set the run limit with the -W option on the last line.
Can someone please help me fix this problem?
Thank you
Update - output with bash -x prepended to the command:
$ bash -x bsub -q alloc -P acc_BSR1015
+ '[' -z '' ']'
+ case "$-" in
+ __lmod_vx=x
+ '[' -n x ']'
+ set +x
bash -x -W 120 $HOME/bash/count.sh 1
bash: -W: invalid option
Usage: bash [GNU long option] [option] ...
bash [GNU long option] [option] script-file ...
Talk to your cluster administrator. esub is an LSF feature that lets the cluster admin inspect each incoming submission and reject it if it doesn't meet the site's policies.
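For illustration only, a minimal esub sketch under the usual LSF conventions as I remember them: esub reads the submission options from the file named by $LSB_SUB_PARM_FILE and rejects the job by exiting with $LSB_SUB_ABORT_VALUE. The parameter name LSB_SUB_RLIMIT_RUN is an assumption on my part, and your site's real esub is almost certainly doing more than this:
#!/bin/sh
# Hypothetical esub: refuse any job submitted without a run limit (-W).
# LSB_SUB_PARM_FILE lists the submission options, one VAR=value per line;
# exiting with LSB_SUB_ABORT_VALUE tells LSF to reject the submission.
if ! grep -q '^LSB_SUB_RLIMIT_RUN=' "$LSB_SUB_PARM_FILE"; then
    echo "Run limit must be specified using bsub -W." >&2
    exit "$LSB_SUB_ABORT_VALUE"
fi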
Related
I am running a process, process_a, in a loop.
I want the top -H logs stored for every process_a run in the loop.
top -b -H -p `pgrep -d, -f process_a`
The above command gave logs for process_a for the first iteration only.
Is there a way to get top logs for the upcoming iterations as well?
This script will repeat forever. I added the -n 1 option, which makes top take a single snapshot and lets the pgrep be rerun for each iteration. Note: I used init for the name of the process; change that to process_a for yours.
#!/bin/sh
while true; do
    top -b -H -n 1 -p "$(pgrep -d, -f init)"
    sleep 1
done
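To actually store the logs, you could save the loop as a script (toplog.sh is just a name I'm picking here) and run it in the background with its output redirected:
./toplog.sh >> process_a_top.log 2>&1 &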
I'm trying to collect some data every second into a different file (preferably one named with a timestamp). I'm trying to use the watch command, but it's not behaving as expected.
watch -p -n 1 "curl -s http://127.0.0.1:9273/metrics > `date +'%H-%M-%S'`.txt"
Only one file is created and all the data is directed to it; I was expecting it to write to different files. I'm not looking for alternative methods. Can the command be modified to achieve this?
Quote it with single quotes, or wrap the command line passed to watch with bash -c. Pay attention to the quotes I used; they cannot be swapped. In your original command the double quotes let your interactive shell expand the backticks once, before watch ever runs, so the filename is fixed for every iteration. Both of the following commands work, writing one file per second:
watch -p -n 1 'curl -s http://127.0.0.1:9273/metrics > `date +'%H-%M-%S'`.txt'
watch -p -n 1 'bash -c "curl -s http://127.0.0.1:9273/metrics > `date +'%H-%M-%S'`.txt"'
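For what it's worth, under the same assumption that watch hands the string to sh -c, using $(...) inside the outer single quotes behaves the same way and is a little easier to read:
watch -p -n 1 'curl -s http://127.0.0.1:9273/metrics > "$(date +%H-%M-%S).txt"'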
I've been trying to run cURL in a huge loop, launching each cURL as a background process with bash; there are about 904 domains to be cURLed.
The problem is that not all 904 domains get processed, which I believe is because of the PID limit in the Linux kernel. I tried raising pid_max to 4194303 (as I read in this discussion: Maximum PID in Linux), but after checking, only 901 domains had run as background processes; before I raised pid_max it was only around 704.
Here is my loop code:
count=0
while IFS= read -r line || [[ -n "$line" ]];
do
(curl -s -L -w "\\n\\nNo:$count\\nHEADER CODE:%{http_code}\\nWebsite : $line\\nExecuted at :$(date)\\n==================================================\\n\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line >> output.log) &
(( count++ ))
done < $FILE_NAME
Does anyone have another solution, or a fix, for handling a huge loop that runs cURL in background processes?
A script example.sh can be created:
#!/bin/bash
# $count is set by xargs --process-slot-var=count (it identifies the parallel slot);
# SECRET_KEY must be exported by the caller.
line=$1
curl -s -L -w "\\n\\nNo:$count\\nHEADER CODE:%{http_code}\\nWebsite : $line\\nExecuted at :$(date)\\n==================================================\\n\\n" -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" --insecure "$line" >> output.log
Then the command could be (limiting the number of processes running at a time to 50):
xargs -n1 -P50 --process-slot-var=count ./example.sh < "$FILE_NAME"
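A couple of assumptions worth noting: --process-slot-var is a GNU xargs extension (so this needs GNU findutils), the variable it sets identifies the parallel slot rather than a running count, and the script has to be executable first:
chmod +x example.sh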
Even if you could run that many processes in parallel, it's pointless - starting that many DNS queries to resolve 900+ domain names in a short span of time will probably overwhelm your DNS server, and having that many concurrent outgoing HTTP requests at the same time will clog your network. A much better approach is to trickle the processes so that you run a limited number (say, 100) at any given time, but start a new one every time one of the previously started ones finishes. This is easy enough with xargs -P.
xargs -I {} -P 100 \
curl -s -L \
-w "\\n\\nHEADER CODE:%{http_code}\\nWebsite : {}\\nExecuted at :$(date)\\n==================================================\\n\\n" \
-H "X-Gitlab-Event: Push Hook" \
-H "X-Gitlab-Token: $SECRET_KEY" \
--insecure {} <"$FILE_NAME" >output.log
The $(date) result will be interpolated at the time the shell evaluates the xargs command line, and there is no simple way to get the count with this mechanism. Refactoring this to put the curl command and some scaffolding into a separate script could solve these issues, and should be trivial enough if it's really important to you. (Rough sketch:
xargs -P 100 bash -c 'count=0; for url; do
curl --options --headers "X-Notice: use double quotes throughout" \
"$url"
((count++))
done' _ <"$FILE_NAME" >output.log
... though this will restart numbering if xargs receives more URLs than will fit on a single command line.)
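If the sequential numbering matters, one untested way to keep it would be to number the input lines with nl and hand each number/URL pair to a small wrapper script (fetch.sh is a made-up name; this assumes the URLs contain no whitespace or quote characters, since xargs splits on blanks):
#!/bin/bash
# fetch.sh: $1 is the line number produced by nl, $2 is the URL.
count=$1 url=$2
curl -s -L -w "\\n\\nNo:$count\\nHEADER CODE:%{http_code}\\nWebsite : $url\\nExecuted at :$(date)\\n\\n" \
  -H "X-Gitlab-Event: Push Hook" -H "X-Gitlab-Token: $SECRET_KEY" --insecure "$url"
Invoked as:
nl -ba -w1 "$FILE_NAME" | xargs -n 2 -P 100 ./fetch.sh >>output.log
As with the original loop, output from parallel runs may interleave in output.log.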
TASK - SSH to 650 servers, fetch a few details from each, and then write each completed server name to a different file. How can I do this faster? A normal serial ssh run takes 7 minutes, so I read about awk and wrote the following two pieces of code.
Could you please explain the difference between them?
Code 1 -
awk 'BEGIN{done_file="/home/sarafa/AWK_FASTER/done_status.txt"}
{
print "blah"|"ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=1 -o ConnectionAttempts=1 "$0" uname >/dev/null 2>&1";
print "$0" >> done_file
}' /tmp/linux
Code 2 -
awk 'BEGIN{done_file="/home/sarafa/AWK_FASTER/done_status.txt"}
{
"ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=1 -o ConnectionAttempts=1 "$0" uname 2>/dev/null"|getline output;
print output >> done_file
}' /tmp/linux
When I run these for the 650 servers, Code 1 takes 30 seconds and Code 2 takes 7 minutes.
Why is there such a large time difference?
The file /tmp/linux is a list of the 650 servers.
Updated Answer - with thanks to @OleTange
This form is preferable to my suggestion:
parallel -j 0 --tag --slf /tmp/linux --nonall 'hostname;ls'
--tag Tag lines with arguments. Each output line will be prepended
with the arguments and TAB (\t). When combined with --onall or
--nonall the lines will be prepended with the sshlogin
instead.
--nonall --onall with no arguments. Run the command on all computers
given with --sshlogin but take no arguments. GNU parallel will
log into --jobs number of computers in parallel and run the
job on the computer. -j adjusts how many computers to log into
in parallel.
This is useful for running the same command (e.g. uptime) on a
list of servers.
Original Answer
I would recommend using GNU Parallel for this task, like this:
parallel -j 64 -k -a /tmp/linux 'echo ssh user@{} "hostname; ls"'
which will ssh into 64 hosts in parallel (you can change the number), run hostname and ls on each and then give you all the results in order (-k switch).
Obviously remove the echo when you see how it works.
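For the task as described (fetch details from each server and record which ones completed), a sketch along the same lines, assuming passwordless ssh and reusing the question's ssh options, might look like:
parallel -j 64 -k -a /tmp/linux 'ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=1 {} uname && echo {} >> /home/sarafa/AWK_FASTER/done_status.txt'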
I have an old Syno NAS and wish to use the "shred" command to wipe the disks inside. The idea is to let the command run to completion on the box itself, without needing a computer attached.
So far I have managed...
1) to get the right parameters for 'shred'
* it runs in the background using &
2) to get that command to output its progress (-v option) to a file shred.txt
* so I can see from the file how far it has got
shred -v -f -z -n 2 /dev/hdd 2>&1 | tee /volume1/backup/shred.txt &
3) to run the command over ssh so I can turn off my laptop while it's running
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd > /dev/null 2>&1 &'"
The problem is that I can't combine 2) and 3).
I tried to combine them like this, but the resulting file remained empty:
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd 2>&1 | tee /volume1/backup/shred.txt > /dev/null &'"
It might be a case of me being a noob, but I can't figure out how to get this done.
Any suggestions?
Thanks. Vince
The sh and tee commands are not needed here:
ssh -n root@host 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd 2>&1 >/volume1/backup/shred.txt &' >/dev/null
The final >/dev/null is optional; it just discards any greeting messages the remote host prints.
I tried the following command (based on Grzegorz's suggestion) and included the opening date stamp and the aforementioned - stupidly forgotten - verbose switch. Latest version of the command string:
ssh -n root@host 'date > /volume1/backup/shred_sda.txt; nohup /opt/bin/shred -v -f -z -n 4 /dev/sda 2>&1 >> /volume1/backup/shred_sda.txt # >/dev/null'
The last thing to figure out is how to include the date stamp when the shred command has completed.
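One untested idea for the closing date stamp, assuming the NAS's shell accepts it: run the whole sequence under a single nohup'ed sh -c, so the trailing date only executes once shred has finished:
ssh -n root@host 'nohup sh -c "date > /volume1/backup/shred_sda.txt; /opt/bin/shred -v -f -z -n 4 /dev/sda >> /volume1/backup/shred_sda.txt 2>&1; date >> /volume1/backup/shred_sda.txt" >/dev/null 2>&1 &'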