How to get a static image URL from a Flickr URL?

Is it possible to get the static image URL from a Flickr URL via an API call or some script?
For example:
Flickr URL -> http://www.flickr.com/photos/53067560@N00/2658147888/in/set-72157606175084388/
Static image URL -> http://farm4.static.flickr.com/3221/2658147888_826edc8465.jpg

By specifying extras=url_o you get a link to the original image:
https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=YOURAPIKEY&format=json&nojsoncallback=1&text=cats&extras=url_o
For downscaled images, you use the following parameters: url_t, url_s, url_q, url_m, url_n, url_z, url_c, url_l
Alternatively, you can construct the URL yourself from the photo's farm, server, id and secret, as described in the Flickr documentation:
http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
or
http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
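For example, here is a quick Node.js sketch (Node 18+ for the built-in fetch; the API key is a placeholder) that searches for photos and prints either the original or a downscaled URL:
const apiKey = 'YOUR_API_KEY'; // placeholder, use your own key
const searchUrl = 'https://api.flickr.com/services/rest/?method=flickr.photos.search'
    + `&api_key=${apiKey}&text=cats&extras=url_o,url_m&format=json&nojsoncallback=1`;

fetch(searchUrl)
    .then(res => res.json())
    .then(data => {
        data.photos.photo.forEach(p => {
            // url_o / url_m are only present when that size is available for the photo.
            console.log(p.url_o || p.url_m);
        });
    });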

In your Flickr URL, the photo ID is 2658147888. You use flickr.photos.getSizes to get the various sizes of the photo available, and pick the URL you want from that, depending on the size. There are several ways to access the API, so please specify if you want more details for a particular language.
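As one example, in Node.js (18+ for the built-in fetch; the API key is a placeholder), picking a size from the flickr.photos.getSizes response might look like:
const apiKey = 'YOUR_API_KEY'; // placeholder
const photoId = '2658147888';
const url = 'https://api.flickr.com/services/rest/?method=flickr.photos.getSizes'
    + `&api_key=${apiKey}&photo_id=${photoId}&format=json&nojsoncallback=1`;

fetch(url)
    .then(res => res.json())
    .then(data => {
        if (data.stat === 'ok') {
            // Each entry has a label ("Square", "Medium", "Original", ...) and source (the static image URL).
            const medium = data.sizes.size.find(s => s.label === 'Medium');
            console.log((medium || data.sizes.size[0]).source);
        }
    });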

You can also access the original image using the photoId (number before the first underscore)
http://flickr.com/photo.gne?id=photoId
In your case it would be:
https://www.flickr.com/photo.gne?id=2658147888

Not sure if you can get it directly through a single API call, but this link explains how the URLs for the images are constructed: link

Here's some code I wrote to retrieve metadata from a Flickr Photo based on its ID:
I first defined a JavaScript object FlickrPhoto to hold the photo's metadata:
function FlickrPhoto(title, owner, flickrURL, imageURL) {
    this.title = title;
    this.owner = owner;
    this.flickrURL = flickrURL;
    this.imageURL = imageURL;
}
I then created a FlickrService object to hold my Flickr API Key and all my ajax calls to the RESTful API.
The getPhotoInfo function takes the Photo ID as parameter, constructs the appropriate ajax call and passes a FlickrPhoto object containing the photo metadata to a callback function.
function FlickrService() {
    this.flickrApiKey = "763559574f01aba248683d2c09e3f701";
    this.flickrGetInfoURL = "https://api.flickr.com/services/rest/?method=flickr.photos.getInfo&nojsoncallback=1&format=json";

    this.getPhotoInfo = function(photoId, callback) {
        var ajaxOptions = {
            type: 'GET',
            url: this.flickrGetInfoURL,
            data: { api_key: this.flickrApiKey, photo_id: photoId },
            dataType: 'json',
            success: function (data) {
                if (data.stat == "ok") {
                    var photo = data.photo;
                    var photoTitle = photo.title._content;
                    var photoOwner = photo.owner.realname;
                    var photoWebURL = photo.urls.url[0]._content;
                    var photoStaticURL = "https://farm" + photo.farm + ".staticflickr.com/" + photo.server + "/" + photo.id + "_" + photo.secret + "_b.jpg";
                    var flickrPhoto = new FlickrPhoto(photoTitle, photoOwner, photoWebURL, photoStaticURL);
                    callback(flickrPhoto);
                }
            }
        };
        $.ajax(ajaxOptions);
    };
}
You can then use the service as follows:
var photoId = "11837138576";
var flickrService = new FlickrService();
flickrService.getPhotoInfo(photoId, function(photo) {
    console.log(photo.imageURL);
    console.log(photo.owner);
});
Hope it helps.

Below is a solution that uses no Flickr APIs, only standard Linux commands (I actually ran it on MS Windows with Cygwin):
Put your list of URLs in the tmp variable.
If you are downloading private photos like me, the protocol will be https and you'll need to pass the authentication cookies to wget. I logged on with a browser (Chrome) and exported the cookies file using an extension.
If you access public URLs, just remove the parameter --load-cookies $cookies.
The script downloads the photos in their original format into the local folder.
If you want just the URL of the static image, remove the last command | xargs wget --load-cookies $cookies.
Here is the script; you can use it as a starting point for your explorations:
cookies=~/cookies.txt
root="https://www.flickr.com/photos/131469243@N02/"
tmp="https://www.flickr.com/photos/131469243@N02/29765108124/in/album-72157673986011342/
https://www.flickr.com/photos/131469243@N02/29765103724/in/album-72157673986011342/
https://www.flickr.com/photos/131469243@N02/29765102344/in/album-72157673986011342/"

while read -r url; do
    if [[ $url == http* ]]; then
        url2=$root`echo -n $url | grep -oP '(?<=https://www.flickr.com/photos/131469243@N02/)\w+'`/sizes/o
        wget -q --load-cookies $cookies -O - $url2 | grep -io 'https://c[0-9].staticflickr.com.*_o_d.jpg' | xargs wget --load-cookies $cookies
    fi
done <<< "$tmp";

Related

Microsoft Azure Custom Vision API nodeJS - classifyImageUrl() error "BadRequestImageUrl"

I'm currently trying to classify an image via the Microsoft API from Node.js.
The network is already trained and I can "connect" to my algorithm. I want to send a base64 string as a data URI, but then I get this error message: "Code: BadRequestImageUrl, message: Invalid image url"
The variable "img" is a base64 string (from a FHIR Observation object) and is correct (on a website the URL works with the base64).
I also tried sending an image from Wikipedia, but then I get another error: "NoFoundIteration / Invalid iteration"
const PredictionAPIClient = require("azure-cognitiveservices-customvision-prediction");
const predictionKey = "xxxx";
const endPoint = "https://southcentralus.api.cognitive.microsoft.com"
const projectId = "xxxxx";
const publishedName = "myMLName";
...
var img = 'iVBORw0KGgoAAAANSUhEUgAAAgAAAAJmCAYAAA...'; //base64
...
tempUrl= { url: 'data:image/png;base64,' + img };
...
predictor.classifyImageUrl(projectId, publishedName, tempUrl)
    .then((resultJSON) => {
        console.log("RESULT ######################");
        // console.log(resultJSON);
    })
    .catch((error) => {
        console.log("ERROR #####################");
        console.log(error);
    });
I should get a JSON response from Microsoft Azure with the results.
Have a look at the documentation of the API behind the package you are using: https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14
You can see that Classify has 2 methods:
ClassifyImage, which takes an image sent as application/octet-stream
ClassifyImageUrl, which takes a URL as input
Data URLs are not supported; you must use a classic URL (and the image must be publicly accessible: don't use a URL pointing to an endpoint that needs authentication)
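Since you already have the image as base64, a possible alternative (a sketch, assuming the SDK exposes classifyImage for the application/octet-stream variant) is to decode the string to a Buffer and skip the data URL entirely:
// predictor, projectId and publishedName set up as in the question
const imageBuffer = Buffer.from(img, 'base64'); // raw bytes instead of a data URL
predictor.classifyImage(projectId, publishedName, imageBuffer)
    .then((result) => { console.log(result.predictions); })
    .catch((error) => { console.log(error); });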
For your iteration error, make sure that you use your iteration name in publishedName, not your project name.
You will find that value in the "Published as" field.

Google Cloud Storage get signedUrl from CDN npm

I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');
var config = {
  action: 'read',
  expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which transforms the URL by replacing https://storage.googleapis.com/my-bucket/ with my bucket's CDN.
HOWEVER, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket but still get no access.
Also, from the docs, the CDN signed URL seems a lot different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
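For reference, the cname option is passed alongside the other options, roughly like this (the CDN host below is a placeholder):
var config = {
  action: 'read',
  expires: '03-17-2025',
  cname: 'https://cdn.example.com' // placeholder for the domain mapped to the bucket/CDN
};
file.getSignedUrl(config, function(err, url) {
  if (err) {
    console.error(err);
    return;
  }
  // url should now start with https://cdn.example.com/ instead of https://storage.googleapis.com/my-bucket/
});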
For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime()/1000) + 600; //ten minutes after, in seconds
var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');
// Decode the URL safe base64 encode key
var decoded_key = URLSafeBase64.decode(key);
// Build the URL
var urlToSign = url
+ (url.indexOf('?') > -1 ? "&" : "?")
+ "Expires=" + expiration
+ "&KeyName=" + key_name;
//Sign the url using the key and url safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);
//Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) load balancing to deliver content to your users. Are you using an HTTP(S) load balancer to deliver content to your users?
You can see the attached documents [1][2] on using Google Cloud CDN with HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you use the curl command and send the output with the error code for further analysis?
Could you confirm that the configuration you have done meets the cacheability requirement, as not all HTTP responses are cacheable? Google Cloud CDN caches only those responses that satisfy specific conditions [3], so please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
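For example, Cache-Control metadata can be set on an object with gsutil (same placeholder path as below):
gsutil setmeta -h "Cache-Control:public, max-age=3600" gs://[full_path_to_file_to_be_cached]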
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permission, refer to this GCP documentation [4]
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.
From what Google documents here, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL was not working and the CDN endpoint complained about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this lib (jsSHA) to be pretty cool: it generates exactly the same HMAC-SHA1 hash value as gcloud and it has a neat API. I'm posting here so that others with the same struggle can benefit. This is the final code I used to sign a gcloud CDN URL:
import jsSHA from 'jssha';

// keyName, signKey and daySeconds are defined elsewhere in my config.
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;
// use jssha to compute the HMAC-SHA1 of the URL with the base64-encoded key
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
// safeSign makes the base64 digest URL-safe (replace '+' with '-' and '/' with '_')
const signature = safeSign(shaObj.getHMAC('B64'));
return `${extendedUrl}&Signature=${signature}`;
working great!

Access Azure Blob storage account from azure data factory

I have a folder with a list of files in my storage account and have been trying to delete one of the files using a pipeline. In order to do that I used the "Web" activity in the pipeline and copied the blob storage URL and access keys.
I tried using the access keys directly under Headers|Authorization. I also tried the Shared Key approach from https://learn.microsoft.com/en-us/azure/storage/common/storage-rest-api-auth#creating-the-authorization-header
I even tried getting this to work with curl, but it returned an authentication error every time I ran it:
# List the blobs in an Azure storage container.
echo "usage: ${0##*/} <storage-account-name> <container-name> <access-key>"
storage_account="$1"
container_name="$2"
access_key="$3"
blob_store_url="blob.core.windows.net"
authorization="SharedKey"
request_method="DELETE"
request_date=$(TZ=GMT LC_ALL=en_US.utf8 date "+%a, %d %h %Y %H:%M:%S %Z")
#request_date="Mon, 18 Apr 2016 05:16:09 GMT"
storage_service_version="2018-03-28"
# HTTP Request headers
x_ms_date_h="x-ms-date:$request_date"
x_ms_version_h="x-ms-version:$storage_service_version"
# Build the signature string
canonicalized_headers="${x_ms_date_h}\n${x_ms_version_h}"
canonicalized_resource="/${storage_account}/${container_name}"
string_to_sign="${request_method}\n\n\n\n\n\n\n\n\n\n\n\n${canonicalized_headers}\n${canonicalized_resource}\ncomp:list\nrestype:container"
# Decode the Base64 encoded access key, convert to Hex.
decoded_hex_key="$(echo -n $access_key | base64 -d -w0 | xxd -p -c256)"
# Create the HMAC signature for the Authorization header
signature=$(printf "$string_to_sign" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$decoded_hex_key" -binary | base64 -w0)
authorization_header="Authorization: $authorization $storage_account:$signature"
curl \
-H "$x_ms_date_h" \
-H "$x_ms_version_h" \
-H "$authorization_header" \
-H "Content-Length: 0"\
-X DELETE "https://${storage_account}.${blob_store_url}/${container_name}/myfile.csv_123"
The curl command returns an error:
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:XX
Time:2018-08-09T10:09:41.3394688Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'xxx' is not the same as any computed signature. Server used following string to sign: 'DELETE
You cannot authorize directly from Data Factory to the storage account API. I suggest that you use a Logic App. The Logic App has built-in support for Blob storage:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azureblobstorage
You can call the Logic App from the Data Factory Web Activity. Using the body of the Data Factory request you can pass variables to the Logic App, such as the blob path.
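For example, a hypothetical Web Activity request body passing the blob details to the Logic App could look like:
{
    "containerName": "landing",
    "blobName": "myfile.csv_123"
}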
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Rest;
using Microsoft.Azure.Management.ResourceManager;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.WindowsAzure.Storage;

namespace ClearLanding
{
    class Program
    {
        static void Main(string[] args)
        {
            CloudStorageAccount backupStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=yyy;AccountKey=xxx;EndpointSuffix=core.windows.net");
            var backupBlobClient = backupStorageAccount.CreateCloudBlobClient();
            var backupContainer = backupBlobClient.GetContainerReference("landing");
            var tgtBlobClient = backupStorageAccount.CreateCloudBlobClient();
            var tgtContainer = tgtBlobClient.GetContainerReference("backup");
            string[] folderNames = args[0].Split(new char[] { ',', ' ' }, StringSplitOptions.RemoveEmptyEntries);
            foreach (string folderName in folderNames)
            {
                var list = backupContainer.ListBlobs(prefix: folderName + "/", useFlatBlobListing: false);
                foreach (Microsoft.WindowsAzure.Storage.Blob.IListBlobItem item in list)
                {
                    if (item.GetType() == typeof(Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob))
                    {
                        Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob blob = (Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob)item;
                        if (!blob.Name.ToUpper().Contains("DO_NOT_DEL"))
                        {
                            var tgtBlob = tgtContainer.GetBlockBlobReference(blob.Name + "_" + DateTime.Now.ToString("yyyyMMddHHmmss"));
                            tgtBlob.StartCopy(blob);
                            blob.Delete();
                        }
                    }
                }
            }
        }
    }
}
I tried resolving this by compiling the above code and referencing it via a custom activity (C#) in the pipeline. The code snippet above transfers files from the landing folder to a backup folder and then deletes them from landing.

How do I escape a '/' in the URI for a GET request?

I'm trying to use Groovy to script a GET request to our GitLab server to retrieve a file. The API URI format is:
https://githost/api/v4/projects/<namespace>%2F<repo>/files/<path>?ref=<branch>
Note that there is an encoded '/' between namespace and repo. The final URI needs to look like the following to work properly:
https://githost/api/v4/projects/mynamespace%2Fmyrepo/files/myfile.json?ref=master
I have the following code:
File f = HttpBuilder.configure {
    request.uri.scheme = scheme
    request.uri.host = host
    request.uri.path = "/api/v4/projects/${apiNamespace}%2F${apiRepoName}/repository/files/${path}/myfile.json"
    request.uri.query.put("ref", "master")
    request.contentType = 'application/json'
    request.accept = 'application/json'
    request.headers['PRIVATE-TOKEN'] = apiToken
    ignoreSslIssues execution
}.get {
    Download.toFile(delegate as HttpConfig, new File("${dest}/myfile.json"))
}
However, the %2F is re-encoded as %252F. I've tried multiple ways to create the URI so that it doesn't re-encode the %2F between the namespace and repo, but I can't get anything to work. It either re-encodes the '%' or decodes it to the literal "/".
How do I do this using Groovy + http-builder-ng to set the URI in a way that will preserve the encoded "/"? I've searched but can't find any examples that have worked.
Thanks!
As of the 1.0.0 release you can handle requests with encoded characters in the URI. An example would be:
def result = HttpBuilder.configure {
    request.raw = "http://localhost:8080/projects/myteam%2Fmyrepo/myfile.json"
}.get()
Notice the use of raw rather than uri in the example. Using this approach requires you to do any other encoding/decoding of the URI yourself.
Possible Workaround
The GitLab API allows you to query via project id or project name. Look up the project id first, then query the project.
To look up the project id, see https://docs.gitlab.com/ee/api/projects.html#list-all-projects
def projects = // GET /projects
def project = projects.find { it['path_with_namespace'] == 'diaspora/diaspora-client' }
Then fetch the project by :id, see https://docs.gitlab.com/ee/api/projects.html#get-single-project
GET /projects/${project.id}

Uploading a file from Autodesk A360 to bucket in NodeJS

I am using the Forge data management API to access my A360 files and aim to translate them into the SVF format so that I can view them in my viewer. So far I have been able to reach the desired item using the ForgeDataManagement.ItemsApi, but I don't know what to do with the item to upload it to the bucket in my application.
From the documentation it seems like uploadObject is the way to go (https://github.com/Autodesk-Forge/forge.oss-js/blob/master/docs/ObjectsApi.md#uploadObject), but I don't know exactly how to make this function work.
var dmClient = ForgeDataManagement.ApiClient.instance;
var dmOAuth = dmClient.authentications['oauth2_access_code'];
dmOAuth.accessToken = tokenSession.getTokenInternal();
var itemsApi = new ForgeDataManagement.ItemsApi();

fileLocation = decodeURIComponent(fileLocation);
var params = fileLocation.split('/');
var projectId = params[params.length - 3];
var resourceId = params[params.length - 1];

itemsApi.getItemVersions(projectId, resourceId)
    .then(function(itemVersions) {
        if (itemVersions == null || itemVersions.data.length == 0) return;
        // Use the latest version of the item (file).
        var item = itemVersions.data[0];
        var contentLength = item.attributes.storageSize;
        var body = new ForgeOSS.InputStream();
        // var body = item; // Using the item directly does not seem to work.
        // var stream = fs.createReadStream(...) // Should I create a stream object like suggested in the documentation?
        objectsAPI.uploadObject(ossBucketKey, ossObjectName, contentLength, body, {}, function(err, data, response) {
            if (err) {
                console.error(err);
            } else {
                console.log('API called successfully. Returned data: ' + data);
                // To be continued...
            }
        });
    });
I hope someone can help me out!
My current data:
ossObjectName = "https://developer.api.autodesk.com/data/v1/projects/"myProject"/items/urn:"myFile".dwfx";
ossBucketKey = "some random string based on my username and id";
Regards,
torjuss
When using the Data Management API, you can either work with
2-legged OAuth (client_credentials) and access OSS buckets and objects,
or 3-legged (authorization_code) and access a user's Hubs, Projects, Folders, Items, and Revisions.
When using 3-legged, you access someone's content on A360 or BIM 360, and these files are automatically translated by the system, so you do not need to translate them again, nor to transfer them to a 2-legged application bucket. The only thing you need to do is get the manifest of the Item (or its revision) and use the URN to see it in the viewer.
Checkout an example here: https://developer.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-versions-version_id-GET/
you'll see something like
Examples: Successful Retrieval of a Specific Version (200)
curl -X GET -H "Authorization: Bearer kEnG562yz5bhE9igXf2YTcZ2bu0z" "https://developer.api.autodesk.com/data/v1/projects/a.45637/items/urn%3Aadsk.wipprod%3Adm.lineage%3AhC6k4hndRWaeIVhIjvHu8w"
{
  "data": {
    "relationships": {
      "derivatives": {
        "meta": {
          "link": {
            "href": "/modelderivative/v2/designdata/dXJuOmFkc2sud2lwcWE6ZnMuZmlsZTp2Zi50X3hodWwwYVFkbWhhN2FBaVBuXzlnP3ZlcnNpb249MQ/manifest"
          }
        },
Now, to answer the other question about upload, I have an example available here:
https://github.com/Autodesk-Forge/forge.commandline-nodejs/blob/master/forge-cb.js#L295. I copied the relevant code here for everyone to see how to use it:
fs.stat(file, function (err, stats) {
    var size = stats.size;
    var readStream = fs.createReadStream(file);
    ossObjects.uploadObject(bucketKey, fileKey, size, readStream, {}, function (error, data, response) {
        ...
    });
});
Just remember that ossObjects is for 2-legged, whereas Items and Versions are 3-legged.
We figured out how to get things working after some support from Adam Nagy. To put it simply, we had to do everything with 3-legged OAuth, since all operations involve a document from an A360 account. This includes accessing and showing the file structure, translating a document to SVF, starting the viewer, and loading the document into the viewer.
Also, we were targeting the wrong id when trying to translate the document. This post shows how easily it can be done now. Thanks to Adam for the info!
