How can I show the wget progress bar only? [closed] - linux

For example:
wget http://somesite.com/TheFile.jpeg
downloading: TheFile.jpeg ...
--09:30:42-- http://somesite.com/TheFile.jpeg
=> `/home/me/Downloads/TheFile.jpeg'
Resolving somesite.com... xxx.xxx.xxx.xxx.
Connecting to somesite.com|xxx.xxx.xxx.xxx|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1,614,820 (1.5M) [image/jpeg]
25% [======> ] 614,424 173.62K/s ETA 00:14
How can I get it to look like the following?
downloading: TheFile.jpeg ...
25% [======> ] 614,424 173.62K/s ETA 00:14
I know curl can do that. However, I need to get wget to do that job.

Use:
wget http://somesite.com/TheFile.jpeg -q --show-progress
-q: Turn off wget's output
--show-progress: Force wget to display the progress bar no matter what its verbosity level is set to
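For reuse in scripts, the two flags can be bundled in a wrapper. This is a sketch, not part of the original answer: the `fetch` name is hypothetical, and the extra `:noscroll` modifier (which keeps the bar on a single line instead of filling scrollback) is my own addition.

```shell
# Hypothetical wrapper: quiet logging plus a forced progress bar
fetch() {
    wget -q --show-progress --progress=bar:force:noscroll "$@"
}

# Usage (not run here):
#   fetch http://somesite.com/TheFile.jpeg
```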

You can use the following filter:
progressfilt ()
{
    local flag=false c count cr=$'\r' nl=$'\n'
    while IFS='' read -d '' -rn 1 c
    do
        if $flag
        then
            printf '%s' "$c"
        else
            if [[ $c != $cr && $c != $nl ]]
            then
                count=0
            else
                ((count++))
                if ((count > 1))
                then
                    flag=true
                fi
            fi
        fi
    done
}
Usage:
$ wget --progress=bar:force http://somesite.com/TheFile.jpeg 2>&1 | progressfilt
100%[======================================>] 15,790 48.8K/s in 0.3s
2011-01-13 22:09:59 (48.8 KB/s) - 'TheFile.jpeg' saved [15790/15790]
This function depends on a sequence of 0x0d 0x0a 0x0d 0x0a 0x0d (CR LF CR LF CR) being sent right before the progress bar is started. This behavior may be implementation-dependent.

Run wget with these flags:
wget -q --show-progress --progress=bar:force http://somesite.com/TheFile.jpeg 2>&1

You can use the follow (-f) option of tail:
wget somesite.com/TheFile.jpeg --progress=bar:force 2>&1 | tail -f -n +6
tail -n +6 starts printing at line 6, skipping wget's five header lines. The number may be different on your version of wget or in your language.
You need to use --progress=bar:force, otherwise wget switches to the dot type.
The downside is that the refreshing is less frequent than wget's own (it looks like every 2 seconds). The --sleep-interval option of tail seems to be meant just for that, but it didn't change anything for me.

The option --show-progress, as pointed out by others, is the best option, but it is available only since GNU wget 1.16, see Noteworthy changes in wget 1.16.
To be safe, we can first check if --show-progress is supported:
# set progress option accordingly
wget --help | grep -q '\--show-progress' && \
    _PROGRESS_OPT="-q --show-progress" || _PROGRESS_OPT=""
wget $_PROGRESS_OPT ...
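If grepping the help text feels fragile (its wording can change between releases), comparing the reported version number is an alternative. This is a sketch assuming GNU sort with -V support; version_ge is a hypothetical helper name, not something from the answer above.

```shell
# version_ge A B: succeed when version A >= version B (needs GNU sort -V)
version_ge() {
    printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

# Field 3 of the first line of `wget --version` ("GNU Wget 1.21.2 ...")
# is the version number; if wget is missing, the empty string fails the
# comparison and we fall back to no progress options.
if version_ge "$(wget --version 2>/dev/null | awk 'NR==1 {print $3}')" 1.16; then
    _PROGRESS_OPT="-q --show-progress"
else
    _PROGRESS_OPT=""
fi
```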
Maybe it's time to consider just using curl.

You can use standard options:
wget --progress=bar http://somesite.com/TheFile.jpeg

This is another example:
download() {
    local url=$1
    echo -n "    "
    wget --progress=dot "$url" 2>&1 | grep --line-buffered "%" | \
        sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    echo -ne "\b\b\b\b"
    echo " DONE"
}

Here is a solution that will show you a dot for each file (or line, for that matter). It is particularly useful if you are downloading with --recursive. This won't catch errors and may be slightly off if there are extra lines, but for general progress on a lot of files it is helpful:
wget -r -nv https://example.com/files/ 2>&1 | \
    awk -v "ORS=" '{ print "."; fflush(); } END { print "\n" }'

This is not literally an answer, but the following snippet might help those who come here looking for something like a "zenity wget GUI":
LANG=C wget -O /dev/null --progress=bar:force:noscroll --limit-rate 5k \
        http://nightly.altlinux.org/sisyphus/ChangeLog 2>&1 | \
    stdbuf -i0 -o0 -e0 tr '>' '\n' | \
    stdbuf -i0 -o0 -e0 sed -rn 's/^.*\<([0-9]+)%\[.*$/\1/p' | \
    zenity --progress --auto-close
What was crucial for me is stdbuf(1), which keeps each stage of the pipeline unbuffered so the progress updates reach zenity immediately.


Passing static variables to GNU Parallel [closed]

In a bash script I am trying to pass multiple distinct fastq files and several user-provided static variables to GNU Parallel. I can't hardcode the static variables because, while they do not change within the script, they are set by the user and vary between uses. I have tried a few different ways, but I get the error: argument -b/--bin: expected one argument
Attempt 1:
binSize="10000"
outputDir="output"
errors="1"
minReads="10"
ls fastq_F* | parallel "python myscript.py -f split_fastq_F{} -b $binSize -o $outputDir -e $errors -p -t $minReads"
Attempt 2:
my_func() {
    python InDevOptimizations/DemultiplexUsingBarcodes_New_V1.py \
        -f split_fastq_F$1 \
        -b $binSize \
        -o $outputDir \
        -e $errors \
        -p \
        -t $minReads
}
export -f my_func
ls fastq_F* | parallel my_func
It seems clear that I am not correctly passing the static variables... but I can't seem to grasp what the correct way to do this is.
Always try --dr (short for --dry-run) when GNU Parallel does not do what you expect.
binSize="10000"
outputDir="output"
errors="1"
minReads="10"
ls fastq_F* | parallel --dr "python myscript.py -f split_fastq_F{} -b $binSize -o $outputDir -e $errors -p -t $minReads"
You are using " and not ' so the variables should be substituted by the shell before GNU Parallel starts.
If the commands are run locally (i.e. not remote) you can use export VARIABLE.
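A minimal sketch of the export route (the GNU Parallel line is left as a comment because it assumes fastq_F* files exist; a plain sh -c child demonstrates the same mechanism):

```shell
binSize="10000"
export binSize    # without export, a child shell would see an empty $binSize

# Single quotes now work: the child shell, not the current one, expands it
sh -c 'echo "bin size: $binSize"'

# GNU Parallel equivalent (not run here):
#   ls fastq_F* | parallel 'python myscript.py -f split_fastq_F{} -b $binSize'
```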
If run on remote servers, use env_parallel:
env_parallel --session
alias myecho='echo aliases'
env_parallel -S server myecho ::: work
myfunc() { echo functions $*; }
env_parallel -S server myfunc ::: work
myvar=variables
env_parallel -S server echo '$myvar' ::: work
myarray=(arrays work, too)
env_parallel -k -S server echo '${myarray[{}]}' ::: 0 1 2
env_parallel --end-session

Automated script install using bash [closed]

I'm trying to script the install below. How can I answer "y" at the prompt within the command?
wget -O - mic.raspiaudio.com | sudo bash
I have tried the usual approach, but this won't work:
echo "y" | wget -O - mic.raspiaudio.com | sudo bash
Disclaimer: The solution below works for scripts that have a non-interactive switch.
I believe the echo won't work here because it's not writing to the /dev/tty that the spawned bash reads from. You can do it using a feature bash provides by default.
From the man page:
-c If the -c option is present, then commands are read from the first
non-option argument command_string. If there are arguments after the
command_string, the first argument is assigned to $0 and any remaining
arguments are assigned to the positional parameters.
If you use the -c option with bash, you can supply arguments to the script that will run, and they are placed as described in the man page, e.g.:
bash -c "script" "arg0" "arg1" ...
Here "arg0" is placed in $0, "arg1" in $1, and so on.
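A standalone sketch of that assignment:

```shell
# The first argument after the command string lands in $0, the next in $1
bash -c 'echo "script name: $0, first arg: $1"' dummy -y
# prints: script name: dummy, first arg: -y
```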
Now, I don't know if this can be generalized, but this solution will only work if there is a non-interactive mode in the script.
If you see the script it has the following function:
FORCE=$1

confirm() {
    if [ "$FORCE" == '-y' ]; then
        true
    else
        read -r -p "$1 [y/N] " response < /dev/tty
        if [[ $response =~ ^(yes|y|Y)$ ]]; then
            true
        else
            false
        fi
    fi
}
And it is used as:
if confirm "Do you wish to continue"
then
    echo "You are good to go"
fi
So, if we can set $1 to "-y", it won't ask for confirmation. We can try to do exactly that with:
$ bash -c "$( wget -qO - mic.raspiaudio.com)" "dummy" "-y"
This should work for the script, provided it does not have any other interactive prompts. I have not tested the original script, but my own minimal script works, e.g.:
$ bash -c "$(wget -qO - localhost:8080/test.sh)" "dummy" -y
You are good to go
$ bash -c "$(wget -qO - localhost:8080/test.sh)"
Do you wish to continue [y/N] y
You are good to go

curl: how to return 0 if status is 200? [closed]

How can I return 0 when the response status is 200?
Right now I'm able to get the status, e.g. 200 with the following command:
curl -LI http://google.com -o /dev/null -w '%{http_code}\n' -s
But what I need to do is turn this 200 into an exit status of 0.
How can I achieve this?
I tried the following command but it doesn't return:
if [$(curl -LI http://google.com -o /dev/null -w '%{http_code}\n' -s) == "200"]; then echo 0
You can also use the -f parameter:
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts.
So:
curl -f -LI http://google.com
will return exit status 0 if the call was successful.
Looks like you need some spaces and a fi. This works for me:
if [ $(curl -LI http://google.com -o /dev/null -w '%{http_code}\n' -s) == "200" ]; then echo 0; fi
The simplest way is to check curl's exit code:
$ curl --fail -LI http://google.com -o /dev/null -w '%{http_code}\n' -s > /dev/null
$ echo $?
0
$ curl --fail -LI http://g234234oogle.com -o /dev/null -w '%{http_code}\n' -s > /dev/null
$ echo $?
6
Please note that --fail is necessary here (details in this answer). Also note, as pointed out by Bob in the comments, that in case of a non-200 success code this will still return 0.
If you don't want to use that for whatever reason, here's the other approach:
http_code=$(curl -LI http://google.com -o /dev/null -w '%{http_code}\n' -s)
if [ ${http_code} -eq 200 ]; then
    echo 0
fi
The reason your code isn't working is because you have to add spaces within the brackets.
(Copied from my answer on SuperUser where OP cross-posted the by now deleted question)
Another way is to use the boolean operator &&:
[ $(curl -LI http://google.com -o /dev/null -w '%{http_code}\n' -s) == "200" ] && echo 0
The second command is executed only if the first test succeeds.
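For repeated checks, the same test can be wrapped in a small reusable function. This is a sketch of the one-liners above; url_ok is a hypothetical name of my own.

```shell
# Succeed (exit status 0) only when the final response code is exactly 200
url_ok() {
    [ "$(curl -LIs -o /dev/null -w '%{http_code}' "$1")" = "200" ]
}

# Usage (not run here):
#   url_ok http://google.com && echo "reachable"
```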

How to use "curl -Is link | head -n 1" with many links in a file

I want to use curl -Is link | head -n 1 to check if a link is alive, but that works for one link only. Can anyone help me check a large number of links stored in a file? I have tried curl -Is Mylink.txt | head -n 1, but it does not achieve my goal. Thanks for any help.
Assuming your lines are well formatted (as curl likely needs them to be anyway), @codeforester's answer should work for you. As a matter of habit I try to avoid using cat and for loops to parse files, though, so I'd do it a little differently:
while read -r line; do
    printf 'URL %s: %s\n' "$line" "$(curl -Is "$line" | head -n 1)"
done < Mylink.txt
This sounds like a job for xargs, depending on what your file looks like. I'm assuming there is one link per line. Why are you using -Is? Especially -s for silent mode, if you want to see errors? You may actually want the -s -S options together (link):
xargs < links.txt -I % sh -c 'curl -Is "$1" | head -n1' _ %
edited to reflect corrections in the comments, thanks!
for link in $(cat Mylink.txt)
do
    echo $link: $(curl -Is $link | head -n 1)
done

Pass awk output as input to another script [closed]

Below I have tried to extract the pid of a running process to check its current ppid
ps -p 1111 -o ppid = $(ps -eo pid,args | awk '/PRD_/ && /startscen\.sh/ && $8 ~ /<string>/' | awk -F" " '{print $1}')
My script is wrong. Is there a better way?
Does
ps -ef | awk '/PRD_/ && /startscen\.sh/ {print $3}'
give you what you want? That searches for a process with "PRD_" and "startscen.sh" in the CMD string and reports the PPID. If you want to run an additional ps to get info on the parent:
ps -p `ps -ef | awk '/PRD_/ && /startscen\.sh/ {print $3}'`
To check the ppid of a particular process, you use notation like this:
$ ps -p 1111 -o ppid=
This is close to what you've got, but note that the equals sign is immediately after ppid, with no space. This equals sign is not part of an expression, its function with the -o option is to specify custom headers. With this example I've included, you're telling ps to include no header at all, so the output will be just the ppid and nothing else. For example, try ps -p 1 -o ppid=Hello.
If I'm reading this correctly, the rest of your example code appears to be unrelated to your actual question, and is an attempt to collect the pid of perhaps a shell script that is running. So, as an addendum to this answer, my advice on this point would be to modify the shell script so that it records its own pid somewhere, so you don't have to rely on parsing the process table to find it. For example, you could add something near the top of startscen.sh that looks like this:
pidfile="/var/run/`basename $0`.pid"

if [ -s "$pidfile" ]; then
    if ps "$(cat $pidfile)" >/dev/null; then
        echo "ERROR: already running, giving up." >&2
        exit 1
    fi
fi

trap "rm -f $pidfile" 0 1 2 3 5 15
echo $$ > $pidfile
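An alternative worth mentioning (my own addition, not part of the answer above) is flock(1), which sidesteps the stale-pidfile race entirely because the kernel releases the lock the moment the holding process dies:

```shell
#!/bin/bash
# Hypothetical flock(1) guard: the lock dies with the process, so a stale
# file can never block a restart.
exec 9>"/tmp/$(basename "$0").lock"
if ! flock -n 9; then
    echo "ERROR: already running, giving up." >&2
    exit 1
fi
echo $$ >&9    # record our pid in the lock file, for convenience only
# ... rest of the script ...
```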
