how to scrape the binance price in bash - linux

I'm trying to scrape the Binance price. I've been playing around with:
price1=$(echo -s https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC | grep -o 'price":"[^"]*' | cut -d\" -f3)
echo $price1
I get the price, but also an error like:
line 15: https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC:
No such file or directory
Can someone explain how to use it correctly?
Ultimately I'd like to have the price in dollars.

echo -s doesn't do anything special on my Linux. It just prints -s.
Use curl to download the data and jq to process it.
It is as simple as:
curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC' | jq -r .price
The arguments of jq:
.price is the price property of the current object (.).
-r tells it to return raw data; the value of .price is a string in the JSON downloaded from the URL.
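Finally, for a price in dollars: Binance also lists ETH against the USDT stablecoin, which tracks the US dollar (assuming the ETHUSDT symbol is the pair you want); the same pattern applies:
price_usd=$(curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHUSDT' | jq -r .price)
echo "$price_usd"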

Related

Download latest version of Slack automatically

I have a script that downloads Slack with the wget command. Since the script runs every time a computer is configured, I always need to download the latest version of Slack.
I work on Debian 9.
Right now I'm doing this:
wget https://downloads.slack-edge.com/linux_releases/slack-desktop-3.3.7-amd64.deb
and I tried this:
curl -s https://slack.com/intl/es/release-notes/linux | grep "<h2>Slack" | head -1 | sed 's/[<h2>/]//g' | sed 's/[a-z A-Z]//g' | sed "s/ //g"
This returns: 3.3.7
Then I add this to: wget https://downloads.slack-edge.com/linux_releases/slack-desktop-$curl-amd64.deb
and it's not working.
Do you know why this doesn't work?
Your script produces a long string with a lot of leading whitespace.
bash$ curl -s https://slack.com/intl/es/release-notes/linux |
> grep "<h2>Slack" | head -1 |
> sed 's/[<h2>/]//g' | sed 's/[a-z A-Z]//g' | sed "s/ //g"
3.3.7
You want the string without spaces, and the fugly long pipeline can be simplified significantly.
bash$ curl -s https://slack.com/intl/es/release-notes/linux |
> sed -n "/^.*<h2>Slack /{;s///;s/[^0-9.].*//p;q;}"
3.3.7
Notice also that the character class [<h2>/] doesn't mean at all what you think. It matches a single character which is < or h or 2 or > or / regardless of context. So for example, if the current version number were to contain the digit 2, you would zap that too.
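For example, with a made-up version number that happens to contain a 2:
bash$ echo '<h2>Slack 2.3.7</h2>' | sed 's/[<h2>/]//g'
Slack .3.7
The 2 inside the version is gone before the rest of the pipeline even runs.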
Scraping like this is very brittle, though. I notice that if I change the /es/ in the URL to /en/ I get no output at all. Perhaps you can find a better way to obtain the newest version (using apt should allow you to install the newest version without any scripting on your side).
echo wget "https://downloads.slack-edge.com/linux_releases/slack-desktop-$(curl -s "https://slack.com/intl/es/release-notes/linux" | xmllint --html --xpath '//h2' - 2>/dev/null | head -n1 | sed 's/<h2>//;s#</h2>##;s/Slack //')-amd64.deb"
will output:
wget https://downloads.slack-edge.com/linux_releases/slack-desktop-3.3.7-amd64.deb
I used xmllint to parse the HTML and extract the contents of the first <h2> tag. Then a bit of trimming with sed gives the newest version.
#edit:
Noticing that you could just grep <h2> from the site to get the version, you can get it with just:
curl -s "https://slack.com/intl/es/release-notes/linux" | grep -m1 "<h2>" | cut -d' ' -f2 | cut -d'<' -f1

Searching for a specific given hex string in multiple images on Linux

I am doing some research on image processing and I wanted to know if it's possible to search for a specific hex string/byte array in various images. It would be great if it gave me a list of images that contain that specific string, basically what grep -r "" does. For some reason grep doesn't do the job. I am not familiar with strings; I did have a look at "man strings" but it didn't help much. Anyway, I would like to look for a specific hex string, say "0002131230443" (or even a specific byte array, i.e. a base64 string), in several images in the same folder. Any help would be highly appreciated.
I found this code which exactly does what I want using xxd and grep. Please find the command below:
xxd -p /your/file | tr -d '\n' | grep -c '22081b00081f091d2733170d123f3114'
FYI:
It'll return 1 if the content matches, 0 otherwise.
xxd -p converts the file to plain hex dump, tr -d '\n' removes the newlines added by xxd, and grep -c counts the number of lines matched.
Does anyone know how to run the code above over a specific directory using a bash script? I have around 400 images and I want it to return just 400 (i.e. a count of matching files) if all 400 images contain that particular string. I found the script below, but it runs the same command over and over 400 times, returning either 0 or 1 each time:
#!/bin/bash
FILES=/FILEPATH/*
for f in $FILES
do
echo "PROCESSING $f FILES"
echo "-------------------"
xxd -u $f | grep ABCD
echo "-------------------"
done
Thanks guys.
Plasma33
With GNU grep:
#!/bin/bash
files=/FILEPATH/*
for f in $files
do
grep -m 1 -P '\x22\x08\x1b\x00' < "$f"
done | wc -l
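If you also want to know which files matched, GNU grep can do the whole job without an explicit loop (a sketch; -l prints each matching filename once):
# list the images that contain the byte sequence
grep -lP '\x22\x08\x1b\x00' /FILEPATH/*
# or just count how many of them match
grep -lP '\x22\x08\x1b\x00' /FILEPATH/* | wc -l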

downloading images from links as column values in a csv file in linux/unix

I have a temp.csv file that has 4 columns and plenty of rows. Column 0 holds links to images on the internet, like 'www.abc.com/one.jpg' and so on. I usually download a single link with the following wget command:
wget http://www.sample.com/temp.jpg -O /home/tempfolder/
Is there any way I can use or extend the wget command to download all of the links listed in column 0 of my CSV file and save them to a folder?
I tried this out - wget is unable to save the files. However, here's a fix:
cut -f1 -d, filename | while read url; do wget ${url} -O /home/tempfolder/$(basename ${url}); done
I hope this helps.
Just make sure you run this script in the same directory as the CSV_FILE or provide a full path to this file.
for link in `cat CSV_FILE | cut -d, -f1`
do
wget $link -O /home/tempfolder/
done
EDIT: You asked me to elaborate. This is a for loop that iterates over each link in that file. The cat CSV | cut -d, -f1 extracts only the column that holds the links. The for loop iterates over all these links and one by one places them in the variable named link. Upon each iteration we perform a wget using that link variable. You can either run this on command line, or create a file, add this line at the top: #!/bin/sh, and run it using ./file_name. I hope this is detailed enough.
cut -f1 -d, filename | while read url; do wget $url -O /home/tempfolder; done
The command:
cut -f1 -d, filename
"Cuts" field 1 (-f1) of lines delimited by commas (-d,) from the specified filename.
We then pipe that to:
while read url
Which reads each line coming from cut into the variable url.
Then we wget the specified url.
Edit: To fix your permissions problems:
pushd /home/tempfolder ; cut -f1 -d, filename | while read url; do wget $url; done; popd
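Putting the pieces together, a minimal sketch (assuming the file is temp.csv, the first column holds bare URLs with no header row, and using wget's -P option to set the target directory):
#!/bin/bash
# Download every URL from column 1 of temp.csv into /home/tempfolder/.
# -P sets wget's directory prefix, so each file keeps its own name.
cut -d, -f1 temp.csv | while read -r url
do
  wget -P /home/tempfolder/ "$url"
done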

Ambiguous Redirection on shell script

I was trying to create a little shell script that allowed me to check the transfer progress when copying large files from my laptop's hdd to an external drive.
From the command line this is a simple feat using pv and simple redirection, although the line is rather long and you must know the file size (which is why I wanted the script):
console: du filename (to get the exact file size)
console: cat filename | pv -s FILE_SIZE -e -r -p > dest_path/filename
In my shell script I added egrep "[0-9]{1,}" -o to strip the filename and keep just the size number from the output of du; the rest should be straightforward.
#!/bin/bash
du $1 | egrep "[0-9]{1,}" -o
sudo cat $1 | pv -s $? -e -r -p > $2/$1
The problem is that when I try to copy file12345.mp3 with this, I get an ambiguous redirection error because egrep also picks up the 12345 from the filename, but I just want the size.
Which means the output of the first line is actually:
FILE_SIZE
12345
which breaks it.
How should I modify this script to keep just the first number, up to the first space?
Thanks in advance.
If I understand you correctly:
To retain only the filesize from the du command output:
du $1 | awk '{print $1}'
(assuming the 1st field is the size of the file)
Add double quotes to your redirection to avoid the error:
sudo cat $1 | pv -s $? -e -r -p > "$2/$1"
This quoting is needed because your $2 contains spaces.
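Note that $? holds the exit status of the previous command, not its output, so it is safer to capture the size in a variable first. A minimal sketch, assuming GNU du (whose -b option reports the size in bytes, which is what pv -s expects) and that $2 is an existing directory:
#!/bin/bash
# Copy "$1" into directory "$2" with a pv progress bar.
size=$(du -b "$1" | awk '{print $1}')   # size in bytes, first field only
pv -s "$size" -e -r -p < "$1" > "$2/$(basename "$1")"
pv reads the file directly here, so the cat is not needed; if you needed sudo to read the source file, put it in front of pv instead.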

Extract multiple substrings in bash

I have a page exported from a wiki and I would like to find all the links on that page using bash. All the links on that page are in the form [wiki:<page_name>]. I have a script that does:
...
# First search for the links to the pages
search=`grep '\[wiki:' pages/*`
# Check is our search turned up anything
if [ -n "$search" ]; then
# Now, we want to cut out the page name and find unique listings
uniquePages=`echo "$search" | cut -d'[' -f 2 | cut -d']' -f 1 | cut -d':' -f2 | cut -d' ' -f 1 | sort -u`
....
However, when a grep result contains multiple [wiki: entries, it only pulls out the last one and not the others. For example, if $search is:
Before starting the configuration, all the required libraries must be installed to be detected by Cmake. If you have missed this step, see the [wiki:CT/Checklist/Libraries "Libr By pressing [t] you can switch to advanced mode screen with more details. The 5 pages are available [wiki:CT/Checklist/Cmake/advanced_mode here]. To obtain information about ea - '''Installation of Cantera''': If Cantera has not been correctly installed or if you do not have sourced the setup file '''~/setup_cantera''' you should receive the following message. Refer to the [wiki:CT/FormulationCantera "Cantera installation"] page to fix this problem. You can set the Cantera options to OFF if you plan to use built-in transport, thermodynamics and chemistry.
then it only returns CT/FormulationCantera and doesn't give me any of the other links. I know this is due to using cut, so I need a replacement for the $uniquePages line.
Does anybody have any suggestions in bash? It can use sed or perl if needed, but I'm hoping for a one-liner to extract a list of page names if at all possible.
egrep -oh '\[wiki:[^]]*]' pages/* | sed 's/\[wiki://;s/]//' | sort -u
Update: to remove everything after the space without cut:
egrep -oh '\[wiki:[^]]*]' pages/* | sed 's/\[wiki://;s/]//;s/ .*//' | sort -u
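If your grep supports PCRE (GNU grep's -P), roughly the same thing works without sed; a sketch, assuming GNU grep:
# -h drops the filename prefix grep adds when searching several files,
# \K discards the [wiki: part from the match, [^] ]* stops at ']' or a space
grep -ohP '\[wiki:\K[^] ]*' pages/* | sort -u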
