GitLab: expand calendar.json

I have been working on a project for a few years. On my profile I can only see the heatmap of the last 12 months. Is there an easy way to see earlier years?
The heatmap uses the URL below to read the data. Is it possible to pass parameters?
https://<gitlabURL>/users/<username>/calendar.json

I am not aware of any parameters for retrieving prior activity from the calendar.json URL.
But with the Events API you can get all activities of the past three years.
The call below returns all your events since 2018-09-01.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/events?after=2018-09-01&scope=all"
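Note that the Events API is paginated, so a single call will not return everything. Below is a minimal sketch of walking the pages; gitlab.example.com and TOKEN are placeholders, not real values.

```shell
# Sketch only: page through the events API since a given date.
SINCE="2018-09-01"
BASE="https://gitlab.example.com/api/v4/events"
page=1
while [ "$page" -le 3 ]; do   # demo guard; in practice, loop until an empty page
  url="$BASE?after=$SINCE&scope=all&per_page=100&page=$page"
  echo "GET $url"
  # With a real token, fetch each page and stop on an empty JSON array:
  # body=$(curl -s --header "PRIVATE-TOKEN: $TOKEN" "$url"); [ "$body" = "[]" ] && break
  page=$((page + 1))
done
```

per_page is capped at 100, so a few years of activity can still mean many pages.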

Related

GitLab container registry tag expiration policy not working - regex format? schedule?

I'm trying the GitLab "CI/CD > Container Registry tag expiration policy" setting, and so far it's not deleting anything. We use semantic versioning (with a "v" prepended), and my goal is to automatically delete old "patch" releases:
Keep all major & minor tags: vM.0.0 and vM.m.0.
Delete all but a few recent tags matching vM.m.p (where p is not zero).
Given that I'm enabling this new setting on an old project, it's risky to find my answer by experimentation. Even on a new project, experimenting would take a lot of effort and calendar time. A dry-run or preview option would be really nice here, so I could try out the settings without fear of deleting important tags.
I tried the following "expire" regex: v[1-9][0-9]*[.][0-9][0-9]*[.][1-9][0-9]*. So far it hasn't had any effect. Which leaves me wondering:
How often does this run? Do I just need to wait longer?
Am I misunderstanding the way this setting works?
Is my regex bad?
What regex format is expected, even?
A more complex example in the UI would be nice. https://gitlab.com/gitlab-org/gitlab/-/issues/214007#note_322637771 mentioned that, but was closed without addressing that point.
Is there any way to see feedback on this cleanup, like maybe in the project activity log?
My current approach is to tweak this setting once a day, then check my tags list the next day to see if it had any effect.
I'd appreciate general advice for verifying/troubleshooting this setting, and/or specific suggestions for how to match my particular version scheme.
Here's a screenshot of my current settings:
I eventually gave up on this and took a different approach. Probably the most frustrating part was wondering when it runs. Is it once a day at a regular time? A random time each day? Once after every push to the registry? I was never sure how long to wait and see if my settings changes made a difference.
Instead I found an API method that exposes all of the same options. I actually like the API better than a project setting:
I can see more clearly how & when it runs.
I can see error messages and results.
I can track the config in git, in .gitlab-ci.yml, rather than having to document a separate project setting.
https://docs.gitlab.com/ee/api/container_registry.html#delete-registry-repository-tags-in-bulk gives an overview and some example curls. Here's how I added it to my pipeline:
# In before_script:
- apk --update add curl
...
# In the job script:
# Get registry id. Assumptions: valid response, "id" is first field, and project only has one registry.
- REGISTRY_ID=`curl --header "PRIVATE-TOKEN:$API_TOKEN" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/registry/repositories" | cut -d, -f1 | cut -f2 -d:`
- TAGS_URL=https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/registry/repositories/$REGISTRY_ID/tags
- curl --request DELETE --data 'keep_n=10' --data 'older_than=1week' --data "name_regex_delete=v[0-9][.].*" --data "name_regex_keep=.*[.]0" --header "PRIVATE-TOKEN:$API_TOKEN" "$TAGS_URL"
Using the API, I was able to quickly figure out which regex patterns worked. It's not immediate, but it seems to take effect within a minute. I'm assuming I could take those same regexes and use them in the project settings, but I'm happier sticking with the API for now.
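For anyone else fighting the regex part: you can sanity-check candidate patterns locally against your real tag names before handing them to the API. This is only an approximation, since grep -E speaks POSIX ERE while GitLab evaluates the patterns with its own (RE2-style) engine, but it catches most mistakes. The tag names below are made up:

```shell
# Made-up tag list; substitute the names from your own registry.
tags="v1.0.0
v1.2.0
v1.2.3
v1.2.4
latest"
delete_re='^v[0-9]+[.][0-9]+[.][0-9]+$'   # candidate name_regex_delete
keep_re='[.]0$'                           # candidate name_regex_keep
# Tags matching delete_re but not keep_re are the ones cleanup would remove:
echo "$tags" | grep -E "$delete_re" | grep -Ev "$keep_re"
# prints:
# v1.2.3
# v1.2.4
```

Once the filter produces exactly the tags you expect to lose, you can paste the same patterns into the API call with much less fear.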

How to generate chart on serverside with nodejs?

I have a bot for a personality test. The bot collects yes/no answers by asking over 60 questions. After summarizing the results, it produces 6 values for the indicated indexes. I need to generate a radar chart with legends and values based on that summary and have the bot post it back (jpg/png/svg) to the user.
Does anyone know how I can do that? Any guideline would be helpful.
You can use chartjs-node to generate the chart and convert it into an image server-side in Node.js.
You can use the svg-radar-chart package, available on npm.

Cannot use the Academic Knowledge API

I have a problem when I try to use the similarity function provided by the Academic Knowledge API.
I tried the following command to compute the similarity between two strings:
curl -v -X GET "https://api.labs.cognitive.microsoft.com/academic/v1.0/similarity?s1={string}&s2={string}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
The error that I get is:
{"error":{"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."}}
Curl_http_done: called premature == 0
Connection #0 to host (nil) left intact
Can you tell me how to generate the Ocp-Apim-Subscription-Key?
At the moment I am using the key generated automatically when I visit the following URL: https://labs.cognitive.microsoft.com/en-us/subscriptions?productId=/products/5636d970e597ed0690ac1b3f&source=labs
Thank you for your help
Unfortunately, this is primarily not an answer to your question, but rather a warning for everyone with the "same" problem who comes across the original question like I did, as the question helped me solve a very, very similar problem: check whether you are using api.labs.cognitive.microsoft.com instead of westus.api.cognitive.microsoft.com. Though maybe you need the opposite.
It seems the whole project has been moved inside Microsoft (see https://www.microsoft.com/en-us/research/project/academic/articles/sign-academic-knowledge-api/; I would bet that this blog post was at the top of some "entry point" blog even yesterday morning, but now I am not able to find it, so perhaps things are changing right now). The project may be somewhere in the middle of that transition process, and not all documentation corresponds to the new state. For example, https://learn.microsoft.com/en-us/azure/cognitive-services/academic-knowledge/home links, in the Reference submenu, to two "versions" of the API that seem almost identical except for the URLs westus.api... and api.labs..., respectively. But there seems to be no information on what the difference is or which one should be preferred.
My original keys expired yesterday, so I generated new ones and was not able to use them until I changed the URL to api.labs..., thanks to your question. Maybe you have the opposite problem: you still have "old" keys, so you need to use the "old" URL westus.api.... But I am not able to test that, as my original keys, which worked with westus.api..., have expired.
Both your query and your link for where to get keys are OK and work for me. Just one additional detail: did you try the circular arrow next to the key value, which generates a new key? Maybe your key is somehow broken or expired, and this could solve your problem. You can also try to create a completely new account at the MS site.
PS: I have added the microsoft-cognitive tag, as MS refers to https://stackoverflow.com/questions/tagged/microsoft-cognitive from many pages related to Cognitive Services.
I think you need to sign up for a free account; there is a link you can follow from here:
https://westus.dev.cognitive.microsoft.com/docs/services/56332331778daf02acc0a50b/operations/58076bdadcf4c40708f83791
Except for the invalid key, your curl call looks right.
You need a valid subscription key to be able to make API calls.
Production key
Have a look at this page on how to create the needed services in the Azure portal and how to find the endpoint, as well as the key, from there.
Trial key
However, if you just want to try out the service, you can create a temporary key here. This key is very limited in use but it should get you up and running.
Limitations are:
50,000 transactions per month, up to 20 per second.
Trial keys expire after a 90 day period.

most popular tracks list using the Spotify API

How do I get a "global" top tracks list on Spotify using the Spotify API?
What I mean is for example a list of the 20 most popular songs on Spotify now (for any artists/countries)
I already googled a lot, and the only thing I could find is how to get a top-tracks list for a specific artist, which is not what I'm looking for at the moment.
Could anyone shed some light on it, please?
https://spotifycharts.com/ seems to have the data you are looking for.
At the top right there is a link to download the chart as a CSV. You can just point your code at the CSV's URL for easier programmatic access.
The fact that this is not exposed in the official API is being discussed in https://github.com/spotify/web-api/issues/33
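If it helps, at the time of writing the page's download link pointed at a predictable URL. This path is an assumption that may well have changed, so verify it against the page's own download link before relying on it:

```shell
# Assumed URL pattern for the daily global chart CSV; verify before relying on it.
CSV_URL="https://spotifycharts.com/regional/global/daily/latest/download"
echo "$CSV_URL"
# curl -L "$CSV_URL" --output top200.csv
```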
You can also use the get-playlist endpoint to get the tracks of a playlist that has the most popular songs like Global Top 50:
curl -X "GET" "https://api.spotify.com/v1/playlists/37i9dQZEVXbMDoHDwVN2tF" -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer XXX"
See the API docs: https://developer.spotify.com/console/get-playlist/
You can use the getCategoryPlaylists('toplists') method to get the top tracks. You can get the remaining categories by using the getCategories method; it will give you a list of the categories they have, such as pop, rock, and many more.
Use this library to access all these functions.
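Those wrapper methods sit on top of the Web API's browse endpoints. A hedged sketch of the raw calls, where "Bearer XXX" stands in for a token you would obtain via the client-credentials flow:

```shell
# Raw endpoints behind getCategories and getCategoryPlaylists('toplists').
API="https://api.spotify.com/v1/browse"
echo "$API/categories"
echo "$API/categories/toplists/playlists"
# With a real token:
# curl -H "Authorization: Bearer XXX" "$API/categories/toplists/playlists"
```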

How to use curl for non-interactive repetitive task - Downloading a sales CSV file 20 times per day

I see curl examples around the Internet, but I keep landing on beginner tutorials about using curl from the Linux bash to post messages to Facebook and other simple starter material. I'm looking for something more involved now.
I work in Operations for a marketing company. One of my jobs is to log into the sales website CRM (Customer Relationship Manager) system and download the orders each morning.
It takes about 25 mouse clicks to get the CSV sales download for one product, and there are dozens of products! As you might imagine, half my day is spent clicking through the web-based ordering system for hours. While the job security is nice, I'd like to get those steps automated so I can clear time for more server administration tasks.
Here is a process flow, the exact steps the human operator must take to retrieve these sales orders:
Process flow:
log into https://www.the-sales-crm-example.com/admin/login.php
pass username password information
click Clients and Fulfillment (top nav bar)
dropdown to Prospects
click the 'arrow' for advanced searching
set date from: (yesterday, for example 07/10/2014)
set date to: (today, for example 07/11/2014)
click the search button
700+ records (sales orders) found
screen only displays 10 at a time
click the 'show' triangle and dropdown to 1000
now all 700 records show
click the 'select all' box at the top
all 700 records now have checks in the boxes
click export CSV
The CSV file contains all 700 sales orders.
Basic things I've tried to get started.
Launch Google Chrome, visit the sales website, and hit F12 to see the source code.
Example website: sales-whatever dot com
In the source for login.php, look for the username and password fields in the code.
The user/pass handling looks like JavaScript embedded in the login.php file.
It looks like "admin_name" and "admin_pass" are the variables I should be passing data to, am I right?
TRYING THIS
I'm already kind of falling down here; I'm not sure how I'd pass a username/password into the sales ordering website.
I've read about cookie jars, getting lost in YouTube curl tutorials:
curl --cookie-jar cjar --output /dev/null website dot com
curl --cookie cjar --cookie-jar cjar --data 'name=Chucky' --data 'pass=ZzChuckyZZ' --location website dot com
Any front-to-back examples or help would be appreciated.
Thanks
Assuming you don't care if someone can sniff your username and password from the server logs, perhaps try curl <options> "https://websitename.com?username=<username>&password=<password>" for the login part (quote the URL so the shell doesn't interpret the &). You might want to look into AutoIt to capture your keystrokes and automate the process.
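To tie the pieces together, here is a hedged end-to-end sketch built on the cookie-jar approach from the question. Every URL, form-field name (admin_name, admin_pass), and parameter below is a guess based on the described pages; inspect the real login form and watch the browser's Network tab while clicking "export CSV" to find the actual request to replay:

```shell
# All names below are assumptions; verify them against the real site.
SITE="https://www.the-sales-crm-example.com"
FROM="07/10/2014"   # yesterday; compute with date(1) in a real script
TO="07/11/2014"     # today
echo "exporting orders from $FROM to $TO"
# 1) Log in once, saving the session cookie to a file:
# curl --cookie-jar cjar --data "admin_name=$USERNAME" --data "admin_pass=$PASSWORD" \
#      --location "$SITE/admin/login.php"
# 2) Replay the export request with the saved cookie (real URL and parameter
#    names must come from the browser's Network tab):
# curl --cookie cjar --output orders.csv \
#      "$SITE/admin/export.php?date_from=$FROM&date_to=$TO&limit=1000"
```

Once this works for one product, a loop over product IDs replaces the remaining mouse clicks.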
