How to get Google Cloud Talent job list? - node.js

How can I get a list of jobs from Google Talent Solution?
I have also referred to these: https://cloud.google.com/talent-solution/job-search/docs/before-you-begin
https://jobs.googleapis.com/$discovery/rest?version=v3
but I can't understand how to get a JSON response for the job listing.
Please provide the proper steps to access the data, as shown in the image.

The Node.js SDK looks pretty straightforward. Try it out:
https://cloud.google.com/talent-solution/job-search/v2/docs/libraries#client-libraries-resources-nodejs
To get the list of jobs, you can try this:
https://cloud.google.com/talent-solution/job-search/v2/docs/reference/rest/v2/jobs/list#google.jobs.v2.JobService.ListJobs
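If it helps to see the shape of the call, here is a hedged sketch of a v3 jobs list request using the Google API Python discovery client (not verified against the live API; the project ID and company filter are placeholders, and the exact required filter may differ):

# Hedged sketch: assumes the Cloud Talent Solution API is enabled and
# application default credentials are configured.
from googleapiclient.discovery import build

service = build('jobs', 'v3')  # uses application default credentials

response = service.projects().jobs().list(
    parent='projects/my-project',  # placeholder project
    filter='companyName = "projects/my-project/companies/COMPANY_ID"',
).execute()

for job in response.get('jobs', []):
    print(job.get('name'), job.get('title'))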
Thanks

Related

Google Drive API v3 using Python

Can anybody give me sample code in Python to find a folder ID (especially that of the last folder created) in Google Drive? Your help will be immensely appreciated.
Stack Overflow and the Drive API documentation have enough samples of Python code for Google Drive API requests; you just need to define the basic steps and patch the corresponding code parts together.
Any Google Drive API request needs to be based on the Drive API Quickstart for Python, which implements the OAuth2 authorization flow and the creation of an authenticated service.
Once you have this, you can list your files to retrieve their IDs.
In order to narrow down the results, you can define the search parameter q, e.g. specifying mimeType = 'application/vnd.google-apps.folder'.
Use the parameter orderBy to request that the most recently modified folders be shown first.
Use the parameter pageSize to define how many results you want to obtain (if you want to obtain only the newest folder ID, 1 is a valid value).
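Putting those parameters together, a minimal sketch, assuming you already have the authenticated creds object from the Quickstart (createdTime is used here instead of modifiedTime, since the question asks for the last folder created):

from googleapiclient.discovery import build

# `creds` comes from the OAuth2 flow in the Drive API Python Quickstart
service = build('drive', 'v3', credentials=creds)

response = service.files().list(
    q="mimeType = 'application/vnd.google-apps.folder'",  # folders only
    orderBy='createdTime desc',  # newest folder first
    pageSize=1,                  # we only want the newest one
    fields='files(id, name, createdTime)',
).execute()

for folder in response.get('files', []):
    print(folder['id'], folder['name'], folder['createdTime'])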
Stack Overflow is not meant to help you write code from scratch; I recommend searching for similar questions with these specifications and trying to patch together your code yourself.
Then, if necessary, post a new question with your code, explaining where you got stuck and asking for specific help.
Hint: Before implementing your request in Python, test it with the "Try this API" functionality of Files: list to make sure that you have adapted the parameters correctly to your needs.

Pushing documents(blobs) for indexing - Azure Search

I've been working with Azure Search + Azure Blob Storage for a while, and I'm having trouble indexing the incremental changes for newly uploaded files.
How can I refresh the index after uploading a new file into my blob container? These are my steps after uploading a file (I'm using the REST service to perform these actions): I'm using the Microsoft Azure Storage Explorer [link].
Through this app I uploaded my new file to a folder created earlier. After that, I used the HTTP REST API to perform a 'Run' indexer command, as you can see in this [link].
The indexer shows that my new file was successfully added, but when I search, the content of this new file is not found.
Does anybody know how to add this new file to the index, and how to find this new file by searching for its content?
I'm following Microsoft tutorials, but for this issue, I couldn't find a solution.
Thanks, guys!
Assuming everything is set up correctly, you don't need to do anything special - new blobs will be picked up and indexed the next time indexer runs according to its schedule, or you run the indexer on demand.
However, when you run the indexer on demand, successful completion of the Run Indexer API means that the request to run the indexer has been submitted; it does not mean that the indexer has finished running. To determine when the indexer has actually finished running (and observe the errors, if any), you should use Indexer Status API.
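For example, a hedged sketch of such a status check with Python's requests; the service name, indexer name, admin key, and api-version are placeholders:

import requests

resp = requests.get(
    'https://myservice.search.windows.net/indexers/blob-indexer/status',
    params={'api-version': '2017-11-11'},
    headers={'api-key': 'ADMIN_API_KEY'},
)
status = resp.json()
# `lastResult` describes the most recent execution: its status
# ('inProgress', 'success', ...) and any per-document errors.
print(status['lastResult']['status'])
print(status['lastResult'].get('errors', []))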
If you still have questions, please let us know your service name and indexer name and we can take a closer look at the telemetry.
I'll try to describe how I figured out this issue.
Firstly, I created a DataSource through this command:
POST https://[service name].search.windows.net/datasources?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-data-source
Secondly, I created the Index:
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-index
Finally, I created the Indexer. The problem happened at this moment, because this is where all the configuration is set.
POST https://[service name].search.windows.net/indexers?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-indexer
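All three calls follow the same pattern. Here is a hedged sketch of the first one, the data source creation, using Python's requests (service name, admin key, connection string, and api-version are placeholders); the index and indexer are created the same way against /indexes and /indexers:

import requests

SERVICE = 'myservice'          # placeholder service name
API_KEY = 'ADMIN_API_KEY'      # admin key from the portal
API_VERSION = '2017-11-11'     # placeholder api-version

datasource = {
    'name': 'blob-datasource',
    'type': 'azureblob',
    'credentials': {'connectionString': '<storage connection string>'},
    'container': {'name': 'mycontainer'},
}

resp = requests.post(
    f'https://{SERVICE}.search.windows.net/datasources?api-version={API_VERSION}',
    headers={'api-key': API_KEY, 'Content-Type': 'application/json'},
    json=datasource,
)
print(resp.status_code, resp.text)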
After all these things are done, the indexer starts indexing all content automatically (once we have content in the blob storage).
Now comes the crucial part. While your indexer is trying to extract all the 'text' from your files, issues can occur when the file type is not 'indexable'. For example, there are two properties you must pay attention to: excluded extensions and indexed extensions.
If you don't write the types properly, the indexer throws an exception. Then the feedback message (which in my opinion is not good, more of a mislead) says that to avoid this error you should set the indexer to '"dataToExtract" : "storageMetadata"'.
This setting means you index only the metadata and no longer the content of your files, so you cannot search by content and retrieve the file.
After that, the same message at the bottom says that to avoid this issue you should set two properties (which solved the problem):
"failOnUnprocessableDocument" : false, "failOnUnsupportedContentType" : false
Now everything is working properly. I appreciate your help @Eugene Shvets, and I hope this could be useful for someone else.

I need to scrape all the analytics from a Flurry account

Right now, the only project I can see that does this is
https://github.com/lucamartinetti/flurry-scraper
...but it currently is not logging in properly. I suspect that this is due to Flurry having made changes to their API, which results in the login not working anymore...
I tried messing with it, but am unable to get it to work.
Can anyone help me, or point me in the direction of a project that will do this? I want to scrape all the data possible and download it.
Any help would be appreciated.
Thanks,
-Mark
You don't need to scrape the website if all you want is analytics metrics of your app and you have the API key.
You just need to access this data using Flurry's reporting APIs.
For instance, you can make a REST call to the AppMetrics API and it will give you data about your apps' users, sessions, pageviews, etc. in XML or JSON. A simple AppMetrics call would be of the form:
http://api.flurry.com/appMetrics/METRIC_NAME?apiAccessCode=APIACCESSCODE&apiKey=APIKEY&startDate=STARTDATE&endDate=ENDDATE&country=COUNTRY&versionName=VERSIONNAME&groupBy=GROUPBY
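For example, a hedged sketch using Python's requests; ActiveUsers is one of the documented metric names, and the access code, key, and dates are placeholders:

import requests

resp = requests.get(
    'http://api.flurry.com/appMetrics/ActiveUsers',
    params={
        'apiAccessCode': 'APIACCESSCODE',
        'apiKey': 'APIKEY',
        'startDate': '2014-01-01',
        'endDate': '2014-01-07',
    },
)
# The response body is XML or JSON, as described above
print(resp.text)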

How to get a published video URL

I'm trying to build my app based on the Windows Media Services REST API (I'm not using any SDK, just plain REST API requests).
My steps are more or less like:
Create Asset
Assign write Access Policy
Assign upload locator
Upload a file to URL specified by upload locator path
Assign download Access Policy
Assign download locator
All those steps seem to work great, but how can I actually get the video streaming URL? I can't see anything that looks like such a URL (as far as I know, it should look similar to the upload URL from the upload locator). Should I assemble it myself from segments taken from various parts of the API?
Based on this article, I should append the path parameter with the name parameter and /manifest (so it should look like this: <path_param>/<name_param>.ism/manifest), but that gives me a ResourceNotFound error. Anyway, I've seen that other people (like SHIBSANKAR) have found some way to obtain all the asset URLs, so I think there is a way to do it, but they haven't described how they did it.
After reading all the docs and talking with Microsoft support, I figured it out. All the URL parts are returned by the download locator, and the formula looks like this:
#{BaseUri}/video.mp4#{ContentAccessComponent}
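In code, composing that URL might look like this (a hedged sketch; the locator dict stands in for the JSON entity returned when the download locator was created, and video.mp4 is the name of the uploaded file):

# BaseUri and ContentAccessComponent are fields of the download locator entity
locator = {
    'BaseUri': 'https://myaccount.blob.core.windows.net/asset-xxxx',  # placeholder
    'ContentAccessComponent': '?sv=2012-02-12&sr=c&sig=PLACEHOLDER',  # placeholder SAS
}

download_url = locator['BaseUri'] + '/video.mp4' + locator['ContentAccessComponent']
print(download_url)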
I hope someone will find this useful someday.

Download images containing a specific tag with likes from Instagram

I would like to download images with a certain tag from Instagram, along with their likes. With this post I hope to get some advice or tips on how to do this. I have no experience with web scraping or web API usage. One of my questions is: can you create a program like this in Python code, or can you only do this using a webpage?
So far I have understood the following. To get images with a certain tag you have to:
You need a valid access_token to even gain access to images by tag, which can be done like this. However, when I sign in, you need to give a website. Does this indicate that you can only use the APIs on websites rather than, for instance, in a Python program?
You use a media Tag Endpoint to search for tags by name.
I have no idea what the last step will return exactly, but I expect that it will give me the specific IDs of images that contain the tag. Correct? Now I will also need to get the likes belonging to these images. Just like the step before:
You use a likes Tag Endpoint to get a list of the users that liked the image, from which of course you can get the count (see the sketch after these steps).
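A hedged sketch of what those two calls looked like against the (since-deprecated) Instagram v1 API; the access token and tag name are placeholders:

import requests

ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'  # placeholder
TAG = 'sunset'                      # placeholder tag

# 1. Recent media for a tag
media = requests.get(
    f'https://api.instagram.com/v1/tags/{TAG}/media/recent',
    params={'access_token': ACCESS_TOKEN},
).json()

for item in media.get('data', []):
    media_id = item['id']
    image_url = item['images']['standard_resolution']['url']

    # 2. Users who liked this media item
    likes = requests.get(
        f'https://api.instagram.com/v1/media/{media_id}/likes',
        params={'access_token': ACCESS_TOKEN},
    ).json()

    print(image_url, len(likes.get('data', [])))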
If I can accomplish all of these steps, it seems like I can achieve my original goal. I googled whether there was something out there already. The only thing I could find was InstaRaider, but it did not seem to fit my description because it only scrapes the images from a specific user, not by tag or likes. Any suggestions or ideas would be very helpful; I have only programmed in Python and Java before.
I can only tell you that for the URL you can use localhost, like this:
http://127.0.0.1
OR
http://localhost
I have also tried to do exactly the same before, but I could not, so I used a website to search for tags and images:
http://iconosquare.com/search/[HASHTAG]
