Trying to list the songs in a playlist using the Web API, I only get 108 songs (?). I also see a limit of 100 on the playlist.
debian:~/user$ curl -Ss -H "Authorization: Bearer atkn" https://api.spotify.com/v1/users/bugenwilla/playlists|jq -r '.items[]|select(.name=="myplaylist")|[.tracks.total, .id]'
[
1543,
"playlistid"
]
debian:~/user$ curl -Ss -H "Authorization: Bearer atkn" https://api.spotify.com/v1/users/username/playlists/playlistid/tracks | jq -r '.items[] | .track.album.artists[].name' | wc -l
108
debian:~/user$ curl -Ss -H "Authorization: Bearer atkn" https://api.spotify.com/v1/users/username/playlists/playlistid/tracks | jq -r '.limit, .total'
100
1543
1) Why does it show 108 songs, and not the limit of 100?
2) Is there a way to change the limit / show all 1543 songs in the playlist?
With that jq expression you are counting 108 album artists, not tracks. There are probably a few tracks whose album is credited to more than one artist.
The documentation for the API method you are calling, https://developer.spotify.com/web-api/get-playlists-tracks/, mentions a query parameter called offset. It defaults to 0. Setting it to 100 will give you the next hundred tracks: https://api.spotify.com/v1/users/username/playlists/playlistid/tracks?offset=100
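To fetch all 1543 tracks, you can loop over offset in steps of 100 (this endpoint caps limit at 100 per request). A rough sketch, reusing the placeholder token and IDs from the question and printing track names instead of artists:
offset=0
total=1543
while [ "$offset" -lt "$total" ]; do
  curl -Ss -H "Authorization: Bearer atkn" \
    "https://api.spotify.com/v1/users/username/playlists/playlistid/tracks?limit=100&offset=$offset" \
    | jq -r '.items[].track.name'
  offset=$((offset + 100))
done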
My Linux bash skills are rudimentary, so I am seeking some help. I am attempting to insert some CPU temperature data into an InfluxDB database so that it can be displayed on a Grafana dashboard.
So far I have been able to retrieve the CPU temperatures via ipmitool in Linux; the example below shows the command I run and the resulting temperature figures.
ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2
40
47
I want to feed these numbers into variables so I can insert them into the influx database using a command something like below.
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'server_temps,sensor=CPU1, value=23'
The command above works manually and inserts the value 23 into the InfluxDB database. What I'd like to do is automate the data collection and insertion into the database, but for both CPU1 and CPU2.
Initially I am thinking I may need several curl commands, so that the temperatures for CPU1 and CPU2 can be added to the database every 30 seconds or so. If I can get this to work, it is possible I will want to add additional data from ipmitool.
I suspect this may not be the best method, so all ideas and help are very much appreciated.
Would a simple bash loop be sufficient?
for temperature in $(ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2)
do
curl -i -XPOST "http://localhost:8086/write?db=mydb' --data-binary 'server_temps,sensor=inlet, value=$temperature"
done
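If the two readings should be tagged as CPU1 and CPU2 and written every 30 seconds, a slightly longer sketch (assuming ipmitool always reports CPU1 before CPU2, and the same local InfluxDB endpoint as above) could look like this:
while true; do
  i=1
  for temperature in $(ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2)
  do
    # tag each reading with the CPU it came from (CPU1, CPU2, ...)
    curl -i -XPOST 'http://localhost:8086/write?db=mydb' \
      --data-binary "server_temps,sensor=CPU$i value=$temperature"
    i=$((i + 1))
  done
  sleep 30
done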
I'm trying to scrape the Binance price. I've been playing around with:
price1=$(echo -s https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC | grep -o 'price":"[^"]*' | cut -d\" -f3)
echo $price1
I got the price but also an error like:
line 15: https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC:
No such file or directory
Can someone explain how to use this correctly?
Finally, I'd like to have the price in dollars.
echo -s doesn't do anything special on my Linux. It just prints -s.
Use curl to download the data and jq to process it.
It is as simple as:
curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHBTC' | jq -r .price
The arguments to jq:
.price is the price property of the current object (.).
-r tells it to return raw data; the value of .price is a string in the JSON downloaded from the URL.
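The question also asked for the price in dollars. One option (a sketch, not part of the original answer) is to query a USD-quoted pair directly; Binance exposes ETHUSDT, which is quoted in the USDT stablecoin rather than literal USD:
curl -s 'https://api.binance.com/api/v3/ticker/price?symbol=ETHUSDT' | jq -r .price
Alternatively, multiply the ETHBTC price by the BTCUSDT price.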
I want to get the short hash/SHA of a GitHub commit. Is there a way to get the short hash using the GitHub API?
I was not able to find anything about it on the official documentation page.
This trick did it for me:
curl -s -L https://api.github.com/repos/:ORG/:REPO/git/refs/heads/master | grep sha | cut -d '"' -f 4 | cut -c 1-7
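If jq is available, a more targeted variant (a sketch; :ORG/:REPO are placeholders as above) is to take the object.sha field of the ref and truncate it:
curl -s -L https://api.github.com/repos/:ORG/:REPO/git/refs/heads/master | jq -r '.object.sha[0:7]'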
I was looking for a script to create a URL list for a sitemap and found this one:
wget --spider --force-html -r -l1 http://sld.tld 2>&1 \
| grep '^--' | awk '{ print $3 }' \
| grep -v '\.\(css\|js\|png\|gif\|jpg\|ico\|txt\)$' \
> urllist.txt
The result is:
http://sld.tld/
http://sld.tld/
http://sld.tld/home.html
http://sld.tld/home.html
http://sld.tld/news.html
http://sld.tld/news.html
...
Every URL entry is saved twice. How should the script be changed to fix this?
If you look at the output of wget when you use the --spider flag, it'll look something like:
Spider mode enabled. Check if remote file exists.
--2013-04-12 22:01:03-- http://www.google.com/intl/en/about/products/
Connecting to www.google.com|173.194.75.103|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Remote file exists and could contain links to other resources -- retrieving.
--2013-04-12 22:01:03-- http://www.google.com/intl/en/about/products/
Reusing existing connection to www.google.com:80.
HTTP request sent, awaiting response... 200 OK
It first checks whether the link is there (and prints a -- line), then it has to download the page to look for additional links (and prints a second -- line). This is why each URL shows up (at least) twice when you use --spider.
Compare that to without --spider:
Location: http://www.google.com/intl/en/about/products/ [following]
--2013-04-12 22:00:49-- http://www.google.com/intl/en/about/products/
Reusing existing connection to www.google.com:80.
So you only get one line that starts with --.
You can remove the --spider flag, but you could still get duplicates. If you really don't want any duplicates, add | sort | uniq (or sort -u) to your command:
wget --spider --force-html -r -l1 http://sld.tld 2>&1 \
| grep '^--' | awk '{ print $3 }' \
| grep -v '\.\(css\|js\|png\|gif\|jpg\|ico\|txt\)$' \
| sort | uniq > urllist.txt
I am currently writing a bash script and I'm using curl. What I want to do is get one specific header of a response.
Basically I want this command to work:
curl -I -w "%{etag}" "server/some/resource"
Unfortunately it seems as if the -w, --write-out option only supports a fixed set of variables and cannot print arbitrary response headers. Do I need to parse the curl output myself to get the ETag value, or is there a way to make curl print the value of a specific header?
Obviously something like
curl -sSI "server/some/resource" | grep 'ETag:' | sed -r 's/.*"(.*)".*/\1/'
does the trick, but it would be nicer to have curl filter the header.
The variables supported by -w are not directly connected to the HTTP response headers.
So it looks like you have to parse them on your own:
curl -I "server/some/resource" | grep -Fi etag
You can print a specific header with a single sed or awk command, but HTTP headers use CRLF line endings, so strip the carriage returns first:
curl -sI stackoverflow.com | tr -d '\r' | sed -En 's/^Content-Type: (.*)/\1/p'
With awk, set FS=": " so that header values containing spaces are not split:
awk 'BEGIN {FS=": "}/^Content-Type/{print $2}'
The other answers use the -I option and parse the output. It's worth noting that -I changes the HTTP method to HEAD (the long form of -I is --head). Depending on the field you're after and the behaviour of the web server, this may be a distinction without a difference; headers like Content-Length may differ between HEAD and GET. Use the -X option to force the desired HTTP method and still only see the headers in the response.
curl -sI http://ifconfig.co/json | awk -v FS=": " '/^Content-Length/{print $2}'
18
curl -X GET -sI http://ifconfig.co/json | awk -v FS=": " '/^Content-Length/{print $2}'
302
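Header names are case-insensitive (and HTTP/2 servers typically send them all-lowercase), so a case-insensitive match is a bit more robust. A sketch along the same lines:
curl -sI http://ifconfig.co/json | tr -d '\r' | awk -v FS=": " 'tolower($1) == "content-length" {print $2}'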