How to get a published video URL - azure

I'm trying to build my app on the Windows Azure Media Services REST API (I'm not using any SDK, just plain REST API requests).
My steps are more or less:
Create Asset
Assign write Access Policy
Assign upload locator
Upload a file to URL specified by upload locator path
Assign download Access Policy
Assign download locator
All those steps seem to work fine, but how can I actually get the video streaming URL? I can't see anything that looks like such a URL (as far as I know, it should look similar to the upload URL from the upload locator). Should I assemble it myself from segments taken from various parts of the API?
Based on this article, I should append the name parameter and /manifest to the path parameter (so it should look like this: <path_param>/<name_param>.ism/manifest), but that gives me a ResourceNotFound error. Anyway, I've seen that other people (like SHIBSANKAR) have found some way to obtain all asset URLs, so I think there is a way to do it, but they have not described how they did it.

After reading all the docs and talking with Microsoft support, I have figured it out. All the URL parts are returned by the download locator, and the formula looks like this:
#{BaseUri}/video.mp4#{ContentAccessComponent}
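To make that concrete, here is a rough Python sketch of assembling the URL from the download locator response (a sketch only: it assumes the locator's BaseUri and ContentAccessComponent fields as returned by the REST API, and a file named video.mp4 inside the asset):

# Hypothetical sketch: build the published URL from a SAS (download) locator.
# `locator` stands for the parsed JSON of the download locator created above.
def published_url(locator, file_name="video.mp4"):
    # BaseUri points at the asset container; ContentAccessComponent is the SAS query string.
    return f"{locator['BaseUri']}/{file_name}{locator['ContentAccessComponent']}"

# Example with made-up values:
locator = {
    "BaseUri": "https://myaccount.blob.core.windows.net/asset-d66c43e8",
    "ContentAccessComponent": "?sv=2012-02-12&se=2013-06-23&sig=XXXX",
}
print(published_url(locator))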
I hope someone will find that useful someday.

Related

Google Drive API v3 using Python

Can anybody give me sample code in Python to find a folder ID (especially that of the last folder created) in Google Drive? Your help will be immensely appreciated.
Stack Overflow and the Drive API documentation have enough samples of Python code for Google Drive API requests; you just need to define the basic steps and patch the corresponding code parts together.
Any Google Drive API request needs to be based on the Drive API Quickstart for Python, which implements the OAuth2 authorization flow and the creation of an authenticated service.
Once you have this, you can list your files to retrieve their IDs.
In order to narrow down the results, you can define the search parameter q, e.g. specifying mimeType = 'application/vnd.google-apps.folder'.
Use the parameter orderBy to request that the most recently modified folders be shown first.
Use the parameter pageSize to define how many results you want to obtain (if you want to obtain only the newest folder ID, 1 is a valid value).
Stack Overflow is not meant to help you write code from scratch; I recommend searching for similar questions along the lines of the specifications above and trying to patch your code together yourself.
Then, if necessary, post a new question with your code explaining where you got stuck and asking for specific help.
Hint: Before implementing your request in Python, test it with the "Try this API" functionality of the Files.list reference to make sure that you have adapted the parameters correctly to your needs.
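Putting those pieces together, a minimal sketch might look like the following (assuming you already have an authenticated service object from the Quickstart, and that "last created" maps to ordering by createdTime):

# Hypothetical sketch: fetch the most recently created folder.
# `service` is the authenticated Drive v3 service built in the Quickstart.
results = service.files().list(
    q="mimeType = 'application/vnd.google-apps.folder'",  # folders only
    orderBy="createdTime desc",                           # newest first
    pageSize=1,                                           # only the newest one is needed
    fields="files(id, name, createdTime)",
).execute()

folders = results.get("files", [])
if folders:
    print(folders[0]["id"], folders[0]["name"])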

Is it possible to have a link to raw content of file in Azure DevOps

It's possible to generate a link to the raw content of a file in GitHub; is it possible to do this with VSTS/DevOps?
Even after reading the existing answers, I still struggled with this a bit, so I wanted to leave a bit more of a thorough response.
As others have said, the pattern is (query split onto separate lines for ease of reading):
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/{{providerName}}/filecontents
?repository={{repository}}
&path={{path}}
&commitOrBranch={{commitOrBranch}}
&api-version=5.0-preview.1
But how do you find the values for these variables? If you go into your Azure DevOps, choose Repos > Files from the left navigation, and select a particular file, your current url should look something like this:
https://dev.azure.com/{{organization}}/{{project}}/_git/{{repository}}?path=%2Fpackage.json
You should use those values for organization, project, and repository. For path, you'll see a URL-encoded version of the Unix file path. %2F is the URL encoding for /, so that path is actually just /package.json (a tool like Postman will do that encoding for you).
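If you're scripting the encoding yourself, it's just standard percent-encoding; a tiny Python illustration:

from urllib.parse import quote, unquote

print(quote("/package.json", safe=""))  # -> %2Fpackage.json
print(unquote("%2Fpackage.json"))       # -> /package.json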
Commit or branch is pretty self-explanatory; you either know what you want for this value or you should use master. I have "hard-coded" the API version in the above URL because that's what the documentation currently points to.
For the last variable, you need providerName. In short, you should probably use TfsGit. I got this value from looking through the list of source providers and looking for one with a value of true for supportedCapabilities.queryFileContents.
However, if you just request this URL you'll get a "203 Non-Authoritative Information" response back because you still need to authenticate yourself. Referring again to the same documentation, it says to use Basic auth with any value for the username and a personal access token for the password. You can create a personal access token at https://dev.azure.com/{{organization}}/_usersSettings/tokens; ensure that it has the Token Administration - Read & Manage permission.
If you're unfamiliar with this sort of thing, again Postman is super helpful with getting these requests working before you get into the code.
So if you have a repository with a src directory at the root, and you're trying to get the file contents of src/package.json, your URL should look something like:
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/TfsGit/filecontents?repository={{repository}}&commitOrBranch=master&api-version={{api-version}}&path=src%2Fpackage.json
And don't forget the basic auth!
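If you end up doing this from code rather than Postman, a rough Python sketch might look like this (the organization/project/repository values and the PAT are placeholders you have to fill in yourself):

import requests

# Placeholders: fill in your own values.
organization = "myOrg"
project = "myProject"
repository = "myRepo"
pat = "xxxxxxxxxxxxxxxxxxxx"  # personal access token

url = (
    f"https://dev.azure.com/{organization}/{project}"
    "/_apis/sourceProviders/TfsGit/filecontents"
)
params = {
    "repository": repository,
    "path": "src/package.json",    # requests URL-encodes this (/ -> %2F) for you
    "commitOrBranch": "master",
    "api-version": "5.0-preview.1",
}

# Basic auth: any (even empty) username, personal access token as the password.
resp = requests.get(url, params=params, auth=("", pat))
resp.raise_for_status()
print(resp.text)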
Sure, here's the REST call needed:
GET https://feeds.dev.azure.com/{organization}/_apis/packaging/Feeds/{feedId}/packages/{packageId}?includeAllVersions={includeAllVersions}&includeUrls={includeUrls}&isListed={isListed}&isRelease={isRelease}&includeDeleted={includeDeleted}&includeDescription={includeDescription}&api-version=5.0-preview.1
https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact%20%20details/get%20package?view=azure-devops-rest-5.0#package
I was able to get the raw contents of a file using this URL.
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/{providerName}/filecontents?serviceEndpointId={serviceEndpointId}&repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
I got this from here.
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
You can obtain the raw URL using Chrome.
Turn on Developer tools and view the Network tab.
Navigate to the required file in the DevOps portal (Content panel). Once the content view is visible, check the Network tab again and find the URL which starts with "Items?Path"; this is the JSON response which contains the required "url" element.
Drag the filename from the attachments window and drop it into any other MS application to get the raw URL or linked filename.
Most answers address this well, but in the context of a public repo with anonymous access the API is different. Here is the one that works in such a scenario:
https://dev.azure.com/{{your_user_name}}/{{project_name}}/_apis/git/repositories/{{repo_name_encoded}}/items?scopePath={{path_to_your_file}}&api-version=6.0
This is the exact equivalent of the "raw" url provided by Github.
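For that anonymous-access case, a minimal sketch (assuming the repo really is public and the placeholder values are filled in) could be:

import requests

# Placeholders for a public project/repo with anonymous access enabled.
org, project, repo = "your_org", "your_project", "your_repo"
url = f"https://dev.azure.com/{org}/{project}/_apis/git/repositories/{repo}/items"
params = {"scopePath": "/README.md", "api-version": "6.0"}

resp = requests.get(url, params=params)  # no auth needed when anonymous access is enabled
resp.raise_for_status()
print(resp.text)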
Another way that may be helpful if you want to quickly get the raw URL for a specific file that you are browsing:
install the browser extension named "Undisposition"
from the dot menu (top right) choose "Download": the file will open in a new browser tab from which you can copy the URL
(edit: unfortunately this will only work for file types that the browser knows how to open, otherwise it will still offer to download it...)
I am fairly new to this and had an issue accessing a raw file in an Azure DevOps repo. It's straightforward in GitHub.
I wanted to download a file in CMD and Bash using curl.
First I browsed to the file contents in the browser and made a note of the bold sections:
https://dev.azure.com/**myOrg**/_git/**myProjectName**?path=%2F**MyFileName.ps1**
I then constructed the URL similar to what @Zach posted above.
https://dev.azure.com/**myOrg**/**myProjectName**/_apis/sourceProviders/TfsGit/filecontents?repository=**myProjectName**&commitOrBranch=**master**&api-version=5.0-preview.1&path=%2F**MyFileName.ps1**
Now when I paste the above URL in the browser it displays the content in RAW form similar to GitHub.
The difference was that I had to set up a PAT (Personal Access Token) in my Azure DevOps account and then authenticate the request in CMD/Bash; example below:
curl -u "<username>:<password>" "https://dev.azure.com/myOrg/myProjectName/_apis/sourceProviders/TfsGit/filecontents?repository=myProjectName&commitOrBranch=master&api-version=5.0-preview.1&path=%2FMyFileName.ps1" -# -L -o MyFileName.ps1

streaming video by origin URL with azure media services

I'm trying to make an app with Smooth Streaming, so I'm building my app from examples like these.
As a result I have many URLs. Some of them are URLs for the files that I encoded; they look like:
<mediaservicename>.blob.core.windows.net/asset-d66c43e8-a142-4618-8539-39a2bbb14300/BigBuckBunny_650.mp4?sv=2012-02-12&se=2013-06-23T15%3A21%3A16Z&sr=c&si=aff41a1d-6c8a-4387-8c2f-84272a776ff2&sig=8OPuwW6Kssn2EVQYwqUXkUocc7Qhf0xM62rS9aSPsMk%3D
And one of URL is like:
<mediaservicename>.origin.mediaservices.windows.net/6eca30d3-badd-4f45-bc29-264303ffe84a/BigBuckBunny_3400.ism/Manifest
When I try playing the first one on the Windows Azure portal, that's OK.
But when I try to play the second one on the Windows Azure portal, there is an error: "we are unable to connect to the content you've requested. We apologize for the inconvenience".
When I try to play them both in my app with Silverlight, they do not play, just as they do not on smf.cloudapp.net / healthmonitor.
Maybe there are some errors in the examples on the Windows Azure site? Or what else could it be?
The first URL you copied cannot be used in a Smooth Streaming player, but the second one may be, if you have created a valid origin locator with a valid access policy.
Can you copy the code you have used to generate these URLs, please?
Hope this helps
Julien

Grab instagram photo based on hashtags

I am new to Instagram and I am tasked with programming an application to grab Instagram photo uploads based on a certain hashtag. Meaning: if the application is running and watching for the hashtag "#awesomeevent", any photo that anyone uploads with that hashtag will automatically be stored in our database.
The application should work somewhat like http://statigr.am/tag/, but instead of displaying the photos it should store them in the database.
What is the process of doing this? Are there any tutorials that cover this from start to end, even covering how to start creating an Instagram app from scratch? Any help would be greatly appreciated.
Thanks
Things we developers often overlook are the API Terms and Conditions. I've been there myself.
API TERMS OF USE
Before you start using the API, we have a few guidelines that we'd like to tell you about. Please make sure to read the full API Terms of Use. Here's what you'll read about:
Instagram users own their images. It's your responsibility to make sure that you respect that right.
You cannot use the Instagram name in your application.
You cannot use the Instagram API to crawl or store users' images without their express consent.
You cannot replicate the core user experience of Instagram.com
Do not abuse the API. Too many requests too quickly will get your access turned off
However, a part in the terms also states that:
You shall not cache or store any Instagram user photos other than for reasonable periods in order to provide the service you are
providing to Instagram users.
Hope that's a start before you actually get coding and storing images.
API Terms of Use: http://instagram.com/about/legal/terms/api/
API: http://instagram.com/developer/
For starters, you should consult the Instagram API documentation.
The specific API endpoint you will need is:
/tags/tag-name/media/recent
For example, if you want to look for images from tag #awesomeevent, you will do an api query to:
https://api.instagram.com/v1/tags/awesomeevent/media/recent?access_token=ACCESS-TOKEN
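If you are working in Python rather than hitting the URL directly, a rough sketch might be (ACCESS_TOKEN is a placeholder, and this targets the legacy v1 endpoint shown above):

import requests

ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder
tag = "awesomeevent"

# Legacy v1 endpoint from the answer above.
url = f"https://api.instagram.com/v1/tags/{tag}/media/recent"
resp = requests.get(url, params={"access_token": ACCESS_TOKEN})
resp.raise_for_status()

for item in resp.json().get("data", []):
    # Each media item carries URLs for several image sizes.
    print(item["images"]["standard_resolution"]["url"])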
I would have a look at the two libraries Instagram provides. The ruby library is at https://github.com/Instagram/instagram-ruby-gem and the python library is at https://github.com/Instagram/python-instagram
They both seem to have examples to get you started if you're programming with either library.
As far as the storing issue goes, could you store the URL addresses of the images instead of the actual images themselves? The API returns JSON in which the URLs of the images are included.
Hope that helps.
You can use the Ruby script below to retrieve the images and save them to files. You can then either reference the files within the database or replace the last block with code for your particular database implementation. Without knowing your database type and schema, no one can tell you how to add something to it.
require "instagram"
require "restclient"
Instagram.configure do |config|
config.client_id = INSTAGRAM_CLIENT_ID
config.client_secret = INSTAGRAM_CLIENT_SECRET
end
instagram_client = Instagram.client(:access_token => INSTAGRAM_ACCESS_TOKEN)
tags = instagram_client.tag_search('cat')
urls = Array.new
for media_item in instagram_client.tag_recent_media(tags[0].name)
urls << media_item.images.standard_resolution.url
end
urls.each_with_index do |url, idx|
image = RestClient.get(url)
path = Dir.pwd + "/#{idx}.jpg"
File.open(path, 'w') {|f| f.write(image) }
end

Download images containing a specific tag with likes from Instagram

I would like to download images with a certain tag from Instagram, along with their likes. With this post I hope to get some advice or tips on how to do this. I have no experience with web scraping or web API usage. One of my questions is: can you create a program like this in Python code, or can you only do this using a webpage?
So far I have understood the following. To get images with a certain tag you have to:
You need a valid access_token to even gain access to images by tag, which can be done like this. However, when I sign up, I need to give a website. Does this indicate that you can only use the APIs on websites rather than from a Python program, for instance?
You use the tag media endpoint to search for tags by name.
I have no idea what that last step will return exactly, but I expect it will give me a specific image ID that contains the tag. Correct? Now I will also need to get the likes belonging to these images. Just like the step before:
You use the likes endpoint to get a list of users who liked the image, of which you can of course take the length.
If I can accomplish all of these steps, it seems like I can achieve my original goal. I googled whether there was something out there already. The only thing I could find was InstaRaider, but this did not seem to fit my description because it only scrapes the images from a specific user, not by tag or likes. Any suggestions or ideas would be very helpful; I have only programmed in Python and Java before.
I can only tell you that for the website URL you can use localhost, like this:
http://127.0.0.1
OR
http://localhost
I have also tried to do exactly the same before, but I could not, so I used a website to search for tags and images:
http://iconosquare.com/search/[HASHTAG]
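If you would still like to attempt it in Python, a very rough sketch of the tag and likes steps from the question could look like the following (this uses the legacy v1 endpoint and field names from the docs of that era; ACCESS_TOKEN is a placeholder you would obtain via the OAuth flow):

import requests

ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder obtained via the OAuth flow
tag = "cats"

# Recent media for a tag (legacy v1 endpoint).
media = requests.get(
    f"https://api.instagram.com/v1/tags/{tag}/media/recent",
    params={"access_token": ACCESS_TOKEN},
).json().get("data", [])

for item in media:
    image_url = item["images"]["standard_resolution"]["url"]
    # The media object already carries a like count, so a separate call
    # to the likes endpoint is only needed if you want the individual users.
    like_count = item.get("likes", {}).get("count", 0)
    print(image_url, like_count)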
