most popular tracks list using the Spotify API - spotify

How do I get a "global" top tracks list from Spotify using the Spotify API?
What I mean is, for example, a list of the 20 most popular songs on Spotify right now (across all artists and countries).
I have already googled a lot, and the only thing I could find is how to get the top tracks for a specific artist, which is not what I'm looking for.
Could anyone shed some light on this, please?

https://spotifycharts.com/ seems to have the data you are looking for.
At the top right there is a link to download the chart as a CSV. You can point your code at the CSV's URL for easier programmatic access.
The fact that this data is not exposed in the official API is being discussed in https://github.com/spotify/web-api/issues/33
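If you do script against the CSV link, the URL pattern below is an assumption based on the download links the site exposed at the time; verify it still works before relying on it:

```shell
# Build the chart-download URL for a region/period combination.
# The path scheme is an assumption taken from the site's download links.
chart_url() {
  # $1 = region (e.g. "global"), $2 = period ("daily" or "weekly")
  echo "https://spotifycharts.com/regional/$1/$2/latest/download"
}

# Usage: curl -L -o top200.csv "$(chart_url global daily)"
```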

You can also use the get-playlist endpoint to fetch the tracks of a playlist that contains the most popular songs, such as Global Top 50:
curl -X "GET" "https://api.spotify.com/v1/playlists/37i9dQZEVXbMDoHDwVN2tF" -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer XXX"
See the API docs: https://developer.spotify.com/console/get-playlist/
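Building on the curl call above, here is a small sketch that extracts just the track and artist names, assuming jq is installed and a valid OAuth token sits in $SPOTIFY_TOKEN (both assumptions on your environment):

```shell
# Print "track - artist" lines for a playlist's tracks.
# Requires jq and a valid token in $SPOTIFY_TOKEN.
top_tracks() {
  # $1 = playlist ID; the fields parameter trims the response down
  # to just the names we need.
  curl -s "https://api.spotify.com/v1/playlists/$1/tracks?fields=items(track(name,artists(name)))" \
       -H "Authorization: Bearer $SPOTIFY_TOKEN" |
    jq -r '.items[].track | .name + " - " + .artists[0].name'
}

# Usage: top_tracks 37i9dQZEVXbMDoHDwVN2tF
```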

You can use the getCategoryPlaylists('toplists') method to get the top-tracks playlists. You can get the remaining categories by using the getCategories method, which returns the list of categories Spotify offers, such as Pop, Rock, and many more.
Use this library to get all these functions.

Related

Gitlab expand calendar.json

I have been working on a project for a few years. On my profile I only see the heatmap for the last 12 months. Is there an easy way to see earlier years?
The heatmap uses the URL below to read its data. Is it possible to pass parameters?
https://<gitlabURL>/users/<username>/calendar.json
I am not aware of any parameters for retrieving earlier activity from the calendar.json URL.
However, with the Events API you can get all activities from the past three years.
The call below returns all your events since 2018-09-01.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/events?after=2018-09-01&scope=all"
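A sketch that computes the cutoff date instead of hard-coding it (GNU date syntax assumed); gitlab.example.com is the placeholder host from the call above, and note the Events API paginates, hence per_page:

```shell
# Compute the date three years back in the YYYY-MM-DD form the API wants.
three_years_ago() { date -d "3 years ago" +%Y-%m-%d; }

# Fetch all events since that date; $1 is your personal access token.
# The API returns results in pages, so per_page=100 is the max page size
# and you would loop over page=1,2,... for the full history.
fetch_events() {
  curl -s --header "PRIVATE-TOKEN: $1" \
       "https://gitlab.example.com/api/v4/events?after=$(three_years_ago)&scope=all&per_page=100"
}

# Usage: fetch_events <your_access_token>
```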

Hosting an editable single string of text online?

I'm a Twitch streamer and I'm running a bot named "Nightbot", which can interact with users in my stream's chat area. They can type a command such as "!hello" and, in response, I can tell Nightbot to load a URL and post the text from that URL into the chat.
But the text needs to change each time I play a new game, so the text must be editable. And it can't be a file, because Nightbot expects the URL to return just plain text.
So I can't use a file-hosting service. Please don't recommend that I save a text file to some free hosting service and put my text in the file.
What I need is a very simple string of text that is hosted online, which can be edited and accessed via a URL. Why the literal *eck is that so impossible or unreasonable? I thought we live in 2018.
I spent the entire day trying to learn Heroku, and when that turned out to be unreasonably complicated, I spent some hours trying Microsoft's Azure. Holy moly, it turned into connecting storage services, picking price tiers, and do I want this to run on a Windows or Linux server? And how many gigs of space do I need, and will I be paying by the second? Come on, I just need to save an editable string of text online, probably just 100 characters long! Why so difficult?
I guess what I'm looking for is something as easy as tinyurl, but for editable text strings online... just go there and type in the name for my variable, and boom, it gives me a url to update it, and a url to download it. Total time required: less than one minute.
WARNING: both solutions are publicly accessible and thus also publicly editable. You don't want inappropriate text to show up on your stream, so keep the link secret. Still, there are no guarantees it stays secret.
Solution 1 (simple and editable via the web UI if you create an account)
You could just use pastebin.com. There you can create public or unlisted text.
When you use pastebin.com/raw/ + the ID of your paste, you get the plain text back.
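For the Nightbot command, the raw URL can be built from the paste ID like this (aBcDeFgH is a made-up ID for illustration):

```shell
# Turn a paste ID into the raw-text URL Nightbot can consume.
raw_url() { echo "https://pastebin.com/raw/$1"; }

# Usage (aBcDeFgH is a hypothetical paste ID):
#   curl -s "$(raw_url aBcDeFgH)"
```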
Solution 2 (Bit more complicated, but more advanced)
You can use JSON Blob
This website lets you host JSON and create/edit/get a string. The content has to be valid JSON, but wrapping your text in "" makes it valid. (If you use a curl command to change the text, it doesn't even have to be valid JSON; only the website's editor enforces that.)
First off, create your string and save it. Then you can read it back with a GET request to a URL of the form https://jsonblob.com/api/ + blob ID.
Example:
https://jsonblob.com/api/758d88a3-5e59-11e8-a54b-2b3610209abd
To edit your text, do a PUT request to the same URL with the text you want it changed to.
Example command to change text (I used curl, because that's easy for me):
curl -i -X "PUT" -d 'This is new text' -H "Content-Type: application/json" -H "Accept: application/json" https://jsonblob.com/api/jsonBlob/758d88a3-5e59-11e8-a54b-2b3610209abd
You could also use a tool like Postman to do the PUT request.
For more in-depth instructions on how to use JSON Blob, see their website: https://jsonblob.com/api
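The PUT/GET pair above can be wrapped in two tiny shell helpers; blob_api_url just rebuilds the API URL from the blob ID, and the ID shown in the usage comment is the hypothetical one from the examples above:

```shell
# Build the JSON Blob API URL for a given blob ID.
blob_api_url() { echo "https://jsonblob.com/api/jsonBlob/$1"; }

# Replace the hosted string: PUT the new text, wrapped in quotes so
# the body is valid JSON.
set_stream_text() {
  curl -s -X PUT -H "Content-Type: application/json" \
       -d "\"$2\"" "$(blob_api_url "$1")"
}

# Read the hosted string back (this is the URL you'd give Nightbot).
get_stream_text() {
  curl -s "$(blob_api_url "$1")"
}

# Usage (the ID is the one JSON Blob assigned when you saved the string):
#   set_stream_text 758d88a3-5e59-11e8-a54b-2b3610209abd "Now playing: Celeste"
#   get_stream_text 758d88a3-5e59-11e8-a54b-2b3610209abd
```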

How to use curl for non-interactive repetitive task - Downloading a sales CSV file 20 times per day

I see curl examples around the Internet, but I keep landing on beginner tutorials on how to use Linux Bash and curl to post messages to Facebook and other simple starter material; I'm looking for more now.
I work in Operations for a marketing company. One of my jobs is to log into the sales website CRM (Customer Relationship Manager) system and download the orders each morning.
It takes about 25 mouse clicks to download the CSV sales for one product, and there are dozens of products! As you might imagine, half my day is spent clicking through the web-based ordering system for hours. While the job security is nice, I'd like to automate those steps so I can free up time for more server-administration tasks.
Here is the process flow, the exact steps the human operator must do to retrieve these sales orders:
Process flow:
log into https://www.the-sales-crm-example.com/admin/login.php
pass username password information
click Clients and Fulfillment (top nav bar)
dropdown to Prospects
click the 'arrow' for advanced searching
set date from: (yesterday, example 07/10/2014)
set date to: (today, example 07/11/2014)
click the search button
700+ records (sales orders) found
screen only displays 10 at a time
click the 'show' triangle and dropdown to 1000
now all 700 records show
click the 'select all' box at the top
all 700 records now have checks in the boxes
click export CSV
The CSV file contains all 700 sales orders.
Basic things I've tried to get started.
Launch Google chrome, visit the sales website, and hit F12 to see source code.
Example website sales-whatever dot com
Source for login.php - look for Username and Password field in the code.
The user/pass handling looks like JavaScript embedded in the login.php file.
It looks like 'admin_name' and 'admin_pass' are the variables I should be passing data to; am I right?
TRYING THIS
I'm already kinda falling down here, I'm not sure how I'd pass a username/password into the sales ordering website.
I've read about cookie jars, getting lost in YouTube curl tutorials:
curl --cookie-jar cjar --output /dev/null website dot com
curl --cookie cjar --cookie-jar cjar --data 'name=Chucky' --data 'pass=ZzChuckyZZ' --location website dot com
Any front to back examples or help would be appreciated,
Thanks
Assuming you don't care whether someone can sniff your username and password from server logs, try curl <options> "https://websitename.com?username=<username>&password=<password>" for the login part (quote the URL so the shell doesn't treat & as a background operator). You might also want to look into AutoIt to record your keystrokes and automate the process.
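Alternatively, the cookie-jar approach the question already started can be sketched end to end. The field names admin_name/admin_pass come from the login page source discussed above, but the export endpoint and its parameters (export.php, date_from, date_to, limit) are pure placeholders; you would discover the real ones by watching the browser's network tab (F12) while doing the export once by hand:

```shell
BASE="https://www.the-sales-crm-example.com"

# Yesterday's date in the MM/DD/YYYY form the search form uses
# (GNU date syntax assumed).
yesterday() { date -d "yesterday" +%m/%d/%Y; }

download_orders() {
  # 1. Log in, saving the session cookie to ./cjar.
  curl -s -c cjar -o /dev/null \
       --data-urlencode "admin_name=Chucky" \
       --data-urlencode "admin_pass=ZzChuckyZZ" \
       "$BASE/admin/login.php"
  # 2. Replay the export request with the saved cookie.
  #    export.php and its parameters are hypothetical; capture the real
  #    request from the browser's network tab.
  curl -s -b cjar -o orders.csv \
       "$BASE/admin/export.php?date_from=$(yesterday)&date_to=$(date +%m/%d/%Y)&limit=1000"
}

# Usage: download_orders && head orders.csv
```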

Getting xml for a playlist - custom spotify playlist

I am looking to create a custom Spotify playlist rather than use the generator on the website. I need a way of grabbing this XML, rather like the lookup and search facilities that the Web API provides. I have tried using a playlist Spotify URI with the lookup functionality, but it doesn't seem to work.
e.g.
http://ws.spotify.com/lookup/1/?uri=spotify:user:XXX:playlist:YYY
However, using this just gives me the following error:
"You hit the rate limit, wait 10 seconds and try again"
I don't think I have really hit the rate limit; I only tried it a few times.
If this isn't the way to go, what other options are there? libspotify? That seems like rather a big solution for just getting some XML for a playlist.
Any help appreciated.
The web API doesn't support playlist lookup at all. If you want to find playlist data, you'd have to use libspotify.

cURL - scanning a website's source

I was trying to use curl from Bash to download a web page's source code, but I'm having difficulty when the page uses something more complex than simple HTML. For example, I am trying to view the following page's source with this command:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones"
However, the result doesn't match the source code Firefox shows when I click "View source". I believe it's because there are JavaScript elements on the page, but I can't be sure.
For example, I cannot do:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones" | grep "Access to 4G speeds"
even though that phrase clearly appears in the Firefox source. I tried looking through the man pages, but I don't know enough about the problem to figure out a possible solution.
A preferable answer will explain why this isn't working the way I expect and give a solution using curl or another tool executable from a Linux box.
EDIT:
Following a suggestion below, I also tried a user-agent switch, with no success:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones" -A "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3" | grep -i "Sorry"
I don't see the "Access to 4G speed" thing in the first place when I go to that page.
The two most likely culprits for this difference are cookies and your user-agent.
You can specify cookies manually with either curl or wget. Dump your cookies from Firefox using whatever plugin you like, or just put
javascript:prompt('',document.cookie);
in your location bar.
Then read through the man pages for wget or curl to see how to include that cookie.
EDIT:
It appears to be what I thought, a missing cookie.
curl --cookie "INSERT THE COOKIE YOU GOT HERE" http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones | grep "Access to 4G"
As stated above, you can grab whatever your cookie is via javascript:prompt('',document.cookie), then copy the default text that comes up. Make sure you're on the Sprint page when you put that in the location bar (otherwise you'll end up with the wrong website's cookie).
EDIT 2
The reason your browser cookie and your shell cookie were different was the difference in interaction that took place.
The reason I didn't see the Access to 4G speed thing you were talking about in the first place was that I hadn't entered my zip code.
If you want to have a constantly relevant cookie, you can force curl to do whatever is required to obtain that cookie, in this case, entering a zip code.
In curl, you can do this with multiple requests and holding the retrieved cookies in a cookie jar:
[stackoverflow] curl --help | grep cookie
-b/--cookie <name=string/file> Cookie string or file to read cookies from (H)
-c/--cookie-jar <file> Write cookies to this file after operation (H)
-j/--junk-session-cookies Ignore session cookies read from file (H)
So simply specify a cookie jar, send the request that submits the zip code, then work away.
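The two-step flow can be sketched as a single function. The form URL (SetLocation) and the field name "zip" are guesses, so capture the real zip-code request from the browser's network tab first:

```shell
# Submit the zip code to get a session cookie, then refetch the page
# with that cookie so the 4G content is included.
fetch_sprint_page() {
  local jar
  jar=$(mktemp)
  # Step 1: the zip-code submission (endpoint and field name are
  # hypothetical) stores the session cookie in the jar.
  curl -s --max-time 10 -c "$jar" -d "zip=90210" -o /dev/null \
       "http://shop.sprint.com/NASApp/onlinestore/en/Action/SetLocation"
  # Step 2: replay the page request, sending the saved cookie.
  curl -s --max-time 10 -b "$jar" \
       "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones"
  rm -f "$jar"
}

# Usage: fetch_sprint_page | grep "Access to 4G"
```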
If you are getting different source code from the same URL, the server is most likely sniffing your user agent and serving different code.
JavaScript can act on the DOM and do all sorts of things, but if you use 'view source' the code will be exactly what your browser first received (before any DOM manipulation).
