The images used in our application are served from Amazon CloudFront.
When an existing image is modified, the change is not reflected immediately, since CloudFront takes around 24 hours to refresh its cache.
As a workaround, I'm planning to call CreateInvalidation to reflect the file change immediately.
Is it possible to make this invalidation call without an SDK?
We're using the ColdFusion programming language, and it does not seem to have an SDK for this.
You can simply make a POST request. Here is a PHP example from Steve Jenkins:
<?php
/**
* Super-simple AWS CloudFront Invalidation Script
* Modified by Steve Jenkins <steve stevejenkins com> to invalidate a single file via URL.
*
* Steps:
* 1. Set your AWS Access Key
* 2. Set your AWS Secret Key
* 3. Set your CloudFront Distribution ID (or pass one via the URL with &dist)
* 4. Put cf-invalidate.php in a web accessible and password protected directory
* 5. Run it via: http://example.com/protected_dir/cf-invalidate.php?filename=FILENAME
* or http://example.com/cf-invalidate.php?filename=FILENAME&dist=DISTRIBUTION_ID
*
* The author disclaims copyright to this source code.
*
* Details on what's happening here are in the CloudFront docs:
* http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
*
*/
$onefile = $_GET['filename']; // You must include ?filename=FILENAME in your URL or this won't work
if (!isset($_GET['dist'])) {
    $distribution = 'DISTRIBUTION_ID'; // Your CloudFront Distribution ID, or pass one via &dist=
} else {
    $distribution = $_GET['dist'];
}
$access_key = 'AWS_ACCESS_KEY'; // Your AWS Access Key goes here
$secret_key = 'AWS_SECRET_KEY'; // Your AWS Secret Key goes here
$epoch = date('U');
$xml = <<<EOD
<InvalidationBatch>
<Path>{$onefile}</Path>
<CallerReference>{$distribution}{$epoch}</CallerReference>
</InvalidationBatch>
EOD;
/**
* You probably don't need to change anything below here.
*/
$len = strlen($xml);
$date = gmdate('D, d M Y G:i:s T');
$sig = base64_encode(
    hash_hmac('sha1', $date, $secret_key, true)
);
$msg = "POST /2010-11-01/distribution/{$distribution}/invalidation HTTP/1.0\r\n";
$msg .= "Host: cloudfront.amazonaws.com\r\n";
$msg .= "Date: {$date}\r\n";
$msg .= "Content-Type: text/xml; charset=UTF-8\r\n";
$msg .= "Authorization: AWS {$access_key}:{$sig}\r\n";
$msg .= "Content-Length: {$len}\r\n\r\n";
$msg .= $xml;
$fp = fsockopen('ssl://cloudfront.amazonaws.com', 443,
    $errno, $errstr, 30
);
if (!$fp) {
    die("Connection failed: {$errno} {$errstr}\n");
}
fwrite($fp, $msg);
$resp = '';
while (!feof($fp)) {
    $resp .= fgets($fp, 1024);
}
fclose($fp);
print '<pre>'.$resp.'</pre>'; // Make the output more readable in your browser
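If PHP isn't an option (the question mentions ColdFusion), the same raw request can be reproduced in any language with an HTTP client and HMAC-SHA1 support. As a rough sketch, here is the equivalent of the legacy (2010-11-01) call above in Python; the keys and distribution ID are placeholders, exactly as in the PHP script:

import base64
import hashlib
import hmac
import http.client
import time
from email.utils import formatdate

ACCESS_KEY = 'AWS_ACCESS_KEY'        # placeholder: your AWS Access Key
SECRET_KEY = 'AWS_SECRET_KEY'        # placeholder: your AWS Secret Key
DISTRIBUTION_ID = 'DISTRIBUTION_ID'  # placeholder: your CloudFront Distribution ID

def invalidate(path):
    # Same XML payload the PHP script builds via heredoc
    xml = ('<InvalidationBatch><Path>%s</Path>'
           '<CallerReference>%s%d</CallerReference>'
           '</InvalidationBatch>') % (path, DISTRIBUTION_ID, int(time.time()))
    # Sign the RFC 1123 date header with the secret key, as the PHP script does
    date = formatdate(usegmt=True)
    sig = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), date.encode(), hashlib.sha1).digest()
    ).decode()
    conn = http.client.HTTPSConnection('cloudfront.amazonaws.com')
    conn.request(
        'POST',
        '/2010-11-01/distribution/%s/invalidation' % DISTRIBUTION_ID,
        body=xml,
        headers={
            'Date': date,
            'Content-Type': 'text/xml; charset=UTF-8',
            'Authorization': 'AWS %s:%s' % (ACCESS_KEY, sig),
        },
    )
    resp = conn.getresponse()
    return resp.status, resp.read().decode()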
Some alternatives to invalidating an object are:
Updating existing objects using versioned object names, such as changing image_1.jpg to image_2.jpg.
Configuring CloudFront to cache based on query string parameters, i.e. configuring CloudFront to recognize parameters (e.g. ?version=1) as part of the filename; your app can then reference a new version with ?version=2, which forces CloudFront to treat it as a different object.
For frequent modifications, I think the best approach is to append a query string to the image URL (a timestamp or a hash of the object) and configure CloudFront to forward query strings; that will always return the latest image whenever the query string differs.
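As an illustration of that versioning scheme, here is a minimal sketch in Python; the helper name and the choice of an MD5 content hash are my own, not part of any SDK:

import hashlib

def versioned_url(base_url, image_path):
    # Hash the file contents so a modified image gets a new query string,
    # which CloudFront (forwarding query strings) treats as a new object
    with open(image_path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    return '%s?v=%s' % (base_url, digest)

# e.g. versioned_url('https://dxxxx.cloudfront.net/images/logo.jpg', 'logo.jpg')
# -> 'https://dxxxx.cloudfront.net/images/logo.jpg?v=3fa9c81d'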
For infrequent modifications, apart from an SDK, you can use the AWS CLI, which also allows you to invalidate the cache on builds, integrating with your CI/CD tools.
E.g.:
aws cloudfront create-invalidation --distribution-id S11A16G5KZMEQD \
--paths /index.html /error.html
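If a scripting SDK does turn out to be available in your pipeline, the same call is equally short there; a minimal sketch with boto3 (the AWS SDK for Python), reusing the distribution ID from the CLI example:

import time
import boto3

client = boto3.client('cloudfront')
client.create_invalidation(
    DistributionId='S11A16G5KZMEQD',
    InvalidationBatch={
        'Paths': {'Quantity': 2, 'Items': ['/index.html', '/error.html']},
        'CallerReference': str(time.time()),  # any unique string
    },
)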
I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');
var config = {
    action: 'read',
    expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/.
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which transforms the URL by replacing https://storage.googleapis.com/my-bucket/ with my bucket's CDN domain.
HOWEVER, when I copy the resulting URL, the service account (or the resulting URL) doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket, but I still get no access.
Also, from the docs, the CDN signed URL looks a lot different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime()/1000) + 600; // ten minutes from now, in seconds
var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');
// Decode the URL-safe base64 encoded key
var decoded_key = URLSafeBase64.decode(key);
// Build the URL to sign
var urlToSign = url
    + (url.indexOf('?') > -1 ? "&" : "?")
    + "Expires=" + expiration
    + "&KeyName=" + key_name;
//Sign the url using the key and url safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);
//Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) Load Balancing to deliver content to your users. Are you using an HTTPS load balancer to deliver content to your users?
You can see the documentation on Google Cloud CDN [1] and its concepts [2], which covers using HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you run the curl command and send the output along with the error code for further analysis?
Could you confirm that your configuration meets the cacheability requirements? Not all HTTP responses are cacheable; Google Cloud CDN caches only those responses that satisfy specific conditions [3], so please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permissions, refer to the GCP documentation [4].
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.
From what Google documents here, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL wasn't working: the CDN endpoint kept complaining about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this lib (jsSHA) to be pretty cool: it generates exactly the same HMAC-SHA1 hash value as gcloud and it has a neat API, so I'm commenting here so that others with the same struggle can benefit. This is the final code I used to sign a Cloud CDN URL:
import jsSHA from 'jssha';

// url, daySeconds, keyName and signKey are the inputs you supply
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;

// Make the base64 signature URL-safe, as Cloud CDN expects
const safeSign = (sig) => sig.replace(/\+/g, '-').replace(/\//g, '_');

// use jssha to compute the HMAC-SHA1 of the extended URL
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);
const signature = safeSign(shaObj.getHMAC('B64'));
const signedUrl = `${extendedUrl}&Signature=${signature}`;
working great!
I'm trying to use Groovy to script a GET request to our GitLab server to retrieve a file. The API URI format is:
https://githost/api/v4/projects/<namespace>%2F<repo>/files/<path>?ref=<branch>
Note that there is an encoded '/' between namespace and repo. The final URI needs to look like the following to work properly:
https://githost/api/v4/projects/mynamespace%2Fmyrepo/files/myfile.json?ref=master
I have the following code:
File f = HttpBuilder.configure {
    request.uri.scheme = scheme
    request.uri.host = host
    request.uri.path = "/api/v4/projects/${apiNamespace}%2F${apiRepoName}/repository/files/${path}/myfile.json"
    request.uri.query.put("ref", "master")
    request.contentType = 'application/json'
    request.accept = 'application/json'
    request.headers['PRIVATE-TOKEN'] = apiToken
    ignoreSslIssues execution
}.get {
    Download.toFile(delegate as HttpConfig, new File("${dest}/myfile.json"))
}
However, the %2F is re-encoded as %252F. I've tried multiple ways to create the URI so that it doesn't re-encode the %2F between the namespace and repo, but I can't get anything to work. It either re-encodes the '%' or decodes it to a literal '/'.
How do I use Groovy + http-builder-ng to set the URI in a way that preserves the encoded '/'? I've searched but can't find any examples that work.
Thanks!
As of the 1.0.0 release you can handle requests with encoded characters in the URI. An example would be:
def result = HttpBuilder.configure {
    request.raw = "http://localhost:8080/projects/myteam%2Fmyrepo/myfile.json"
}.get()
Notice the use of raw rather than uri in the example. Using this approach requires you to do any other encoding/decoding of the URI yourself.
Possible Workaround
The GitLab API allows you to query by project ID as well as by project path, so you can look up the project ID first and then query the project with it.
List all projects to find the ID (see https://docs.gitlab.com/ee/api/projects.html#list-all-projects):
def projects = // GET /projects
def project = projects.find { it['path_with_namespace'] == 'diaspora/diaspora-client' }
Then fetch the single project by :id (see https://docs.gitlab.com/ee/api/projects.html#get-single-project):
GET /projects/${project.id}
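For illustration, a rough sketch of that two-step lookup in Python with the requests library; the host, token, and project path are placeholders:

import requests

GITLAB = 'https://githost/api/v4'
HEADERS = {'PRIVATE-TOKEN': 'YOUR_API_TOKEN'}

# 1. List projects and find the one whose path matches
#    (/projects is paginated; narrow it down with ?search= if needed)
projects = requests.get(GITLAB + '/projects', headers=HEADERS).json()
project = next(p for p in projects
               if p['path_with_namespace'] == 'mynamespace/myrepo')

# 2. Query the project (or its repository files) by numeric id,
#    avoiding the encoded '/' problem entirely
resp = requests.get('%s/projects/%d' % (GITLAB, project['id']), headers=HEADERS)
print(resp.json()['name'])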
I am attempting to create an Azure Managed Cache using PowerShell and the Azure Management API. This two-pronged approach is required because the official Azure PowerShell cmdlets have only very limited support for creation and update of Azure Managed Caches, but there is an established pattern for calling the Azure Management API from PowerShell.
My attempts at finding the correct API to call have been somewhat hampered by limited documentation on the Azure Managed Cache API. However, after working my way through the cmdlets using both the source code and the -Debug option in PowerShell, I have been able to find what appear to be the correct API endpoints, and I have developed some code to access them.
However, I have become stuck after the PUT request has been accepted by the Azure API, as subsequent calls to the Management API /operations endpoint show that the result of the operation was an Internal Server Error.
I have been using Joseph Albahari's LINQPad to explore the API, as it allows me to rapidly iterate on a solution using the minimum possible code. To execute the following code snippets you will need both LINQPad and the following extension in your My Extensions script:
public static X509Certificate2 GetCertificate(this StoreLocation storeLocation, string thumbprint) {
    var certificateStore = new X509Store(StoreName.My, storeLocation);
    certificateStore.Open(OpenFlags.ReadOnly);
    var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
    return certificates[0];
}
The complete source code, including the includes, is available below:
My Extensions - you can replace "My Extensions" by right clicking My Extensions in the bottom left hand pane, choosing "Open Script Location in Windows Explorer", and then replacing the highlighted file with this one. Alternatively you may wish to merge my extensions into your own.
Azure Managed Cache Script - you should simply be able to download and double click this, once open and the above extensions and certificates are in place you will be able to execute the script.
The following settings are used throughout the script; these variables will need to be updated by anyone following along using their own Azure subscription ID and management certificate:
var cacheName = "amc551aee";
var subscriptionId = "{{YOUR_SUBSCRIPTION_ID}}";
var certThumbprint = "{{YOUR_MANAGEMENT_CERTIFICATE_THUMBPRINT}}";
var endpoint = "management.core.windows.net";
var putPayloadXml = @"{{PATH_TO_PUT_PAYLOAD}}\cloudService.xml";
First I have done some setup on the HttpClient:
var handler = new WebRequestHandler();
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(StoreLocation.CurrentUser.GetCertificate(certThumbprint));
var client = new HttpClient(handler);
client.DefaultRequestHeaders.Add("x-ms-version", "2012-08-01");
This configures the HttpClient to use both a client certificate and the x-ms-version header. The first call to the API fetches the existing CloudService that contains the Azure Managed Cache. Please note this is using an otherwise empty Azure subscription.
var getResult = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/CloudServices");
getResult.Result.Dump("GET " + getResult.Result.RequestMessage.RequestUri);
This request is successful, returning StatusCode: 200, ReasonPhrase: 'OK'. I then parse some key information out of the response: the CloudService name, the cache name, and the cache ETag:
var cacheDataReader = new XmlTextReader(getResult.Result.Content.ReadAsStreamAsync().Result);
var cacheData = XDocument.Load(cacheDataReader);
var ns = cacheData.Root.GetDefaultNamespace();
var nsManager = new XmlNamespaceManager(cacheDataReader.NameTable);
nsManager.AddNamespace("wa", "http://schemas.microsoft.com/windowsazure");
var cloudServices = cacheData.Root.Elements(ns + "CloudService");
var serviceName = String.Empty;
var ETag = String.Empty;
foreach (var cloudService in cloudServices) {
    if (cloudService.XPathSelectElements("//wa:CloudService/wa:Resources/wa:Resource/wa:Name", nsManager).Select(x => x.Value).Contains(cacheName)) {
        serviceName = cloudService.XPathSelectElement("//wa:CloudService/wa:Name", nsManager).Value;
        ETag = cloudService.XPathSelectElement("//wa:CloudService/wa:Resources/wa:Resource/wa:ETag", nsManager).Value;
    }
}
I have pre-created an XML file that contains the payload of the following PUT request:
<Resource xmlns="http://schemas.microsoft.com/windowsazure">
<IntrinsicSettings>
<CacheServiceInput xmlns="">
<SkuType>Standard</SkuType>
<Location>North Europe</Location>
<SkuCount>1</SkuCount>
<ServiceVersion>1.3.0</ServiceVersion>
<ObjectSizeInBytes>1024</ObjectSizeInBytes>
<NamedCaches>
<NamedCache>
<CacheName>default</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
</NamedCache>
<NamedCache>
<CacheName>richard</CacheName>
<NotificationsEnabled>true</NotificationsEnabled>
<HighAvailabilityEnabled>true</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
</NamedCache>
</NamedCaches>
</CacheServiceInput>
</IntrinsicSettings>
</Resource>
I construct an HttpRequestMessage with the above payload and a URL composed of the CloudService and cache names:
var resourceUrl = "https://" + endpoint + "/" + subscriptionId + "/cloudservices/" + serviceName + "/resources/cacheservice/Caching/" + cacheName;
var data = File.ReadAllText(putPayloadXml);
XDocument.Parse(data).Dump("Payload");
var message = new HttpRequestMessage(HttpMethod.Put, resourceUrl);
message.Headers.TryAddWithoutValidation("If-Match", ETag);
message.Content = new StringContent(data, Encoding.UTF8, "application/xml");
var putResult = client.SendAsync(message);
putResult.Result.Dump("PUT " + putResult.Result.RequestMessage.RequestUri);
putResult.Result.Content.ReadAsStringAsync().Result.Dump("Content " + putResult.Result.RequestMessage.RequestUri);
This request is nominally accepted by the Azure Service Management API, as it returns a StatusCode: 202, ReasonPhrase: 'Accepted' response; this essentially means that the payload has been accepted and will be processed offline. The Operation ID can be parsed out of the HTTP headers to retrieve further information:
var requestId = putResult.Result.Headers.GetValues("x-ms-request-id").FirstOrDefault();
This requestId can be used to request an update on the status of the operation:
var operation = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/operations/" + requestId);
operation.Result.Dump(requestId);
XDocument.Load(operation.Result.Content.ReadAsStreamAsync().Result).Dump("Operation " + requestId);
The request to the /operations endpoint results in the following payload:
<Operation xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<ID>5364614d-4d82-0f14-be41-175b3b85b480</ID>
<Status>Failed</Status>
<HttpStatusCode>500</HttpStatusCode>
<Error>
<Code>InternalError</Code>
<Message>The server encountered an internal error. Please retry the request.</Message>
</Error>
</Operation>
And this is where I am stuck. The chances are I am subtly malforming the request in such a way that the underlying request is throwing a 500 Internal Server Error, but without a more detailed error message or API documentation I don't think there is anywhere I can go with this.
We worked with Richard offline, and the following XML payload got him unblocked.
Note - when adding/removing a named cache on an existing cache, the object size is fixed.
Note 2 - the Azure Managed Cache API is sensitive to whitespace between the <IntrinsicSettings> and <CacheServiceInput> elements (note that they sit on a single line in the payload below).
Also please note, we are working on adding named cache capability to our PowerShell cmdlets themselves, so folks don't have to use the APIs to do this.
<Resource xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<IntrinsicSettings><CacheServiceInput xmlns="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SkuType>Standard</SkuType>
<Location>North Europe</Location>
<SkuCount>1</SkuCount>
<ServiceVersion>1.3.0</ServiceVersion>
<ObjectSizeInBytes>1024</ObjectSizeInBytes>
<NamedCaches>
<NamedCache>
<CacheName>default</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
<ExpirationSettings>
<TimeToLiveInMinutes>10</TimeToLiveInMinutes>
<Type>Absolute</Type>
</ExpirationSettings>
</NamedCache>
<NamedCache>
<CacheName>richard</CacheName>
<NotificationsEnabled>false</NotificationsEnabled>
<HighAvailabilityEnabled>false</HighAvailabilityEnabled>
<EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
<ExpirationSettings>
<TimeToLiveInMinutes>10</TimeToLiveInMinutes>
<Type>Absolute</Type>
</ExpirationSettings>
</NamedCache>
</NamedCaches>
</CacheServiceInput>
</IntrinsicSettings>
</Resource>
For security purposes, I'd like to allow only Mandrill's IP(s) to access these URLs.
Does anyone know them?
Mandrill's signature is located in the HTTP request headers; see the Mandrill docs on authenticating webhook requests.
In the request headers, find X-Mandrill-Signature. This is a base64-encoded HMAC of the request, signed using your webhook key. This key is secret to your webhook only.
We have a range of IPs used for webhooks, but they can (and likely will) change or have new ones added as we scale. An alternative would be to add a query string to the webhook URL you add in Mandrill, and then check for that query string when a POST comes in, so you can verify it's coming from Mandrill.
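For illustration, a minimal sketch of that query-string check in Python with Flask; the route, parameter name, and token value are my own choices, not anything Mandrill prescribes:

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_TOKEN = 'some-long-random-string'  # also appended to the URL registered in Mandrill

@app.route('/mandrill-webhook', methods=['POST'])
def mandrill_webhook():
    # Reject POSTs that don't carry the secret query-string token
    if request.args.get('token') != WEBHOOK_TOKEN:
        abort(403)
    events = request.form.get('mandrill_events', '[]')
    # ... process events ...
    return 'ok'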
Just replace the constants and use this function:
<?php
function generateSignature($post)
{
    $signed_data = WEB_HOOK_URL;
    ksort($post);
    foreach ($post as $key => $value) {
        $signed_data .= $key;
        $signed_data .= $value;
    }
    return base64_encode(hash_hmac('sha1', $signed_data, WEB_HOOK_AUTH_KEY, true));
}
//---
if (generateSignature($_POST) != $_SERVER['HTTP_X_MANDRILL_SIGNATURE']) {
    // Invalid
}
?>
As described in Mandrill's docs, they provide a signature so you can check that the request really came from them. To build the signature there are a few steps:
1. Start with the exact URL of your webhook (mind slashes and params).
2. Sort the POST variables by key (in the case of Mandrill, you'll only have one POST parameter: mandrill_events).
3. Append each key and value to the URL, without any delimiter.
4. HMAC the resulting string with your secret key (you can get the key from the web interface) and base64 it.
5. Compare the result with the X-Mandrill-Signature header.
Here's a sample implementation in Python:
import base64
import hashlib
import hmac

def check_mailchimp_signature(params, url, key):
    # The signed message is the webhook URL followed by each POST key and value
    signature = hmac.new(key.encode(), url.encode(), hashlib.sha1)
    for param_key in sorted(params):
        signature.update(param_key.encode())
        signature.update(params[param_key].encode())
    return base64.b64encode(signature.digest()).decode()
205.201.136.0/16
I have just whitelisted them in my server's firewall.
We don't need to whitelist the IPs they are using. Instead, they have provided their own way to authenticate webhook requests.
When you create the Mandrill webhook, it generates a key. The signature comes with each POST request Mandrill sends to the URL you provided for the webhook.
public IHttpActionResult MandrillEmailWebhookResponse()
{
    string mandrillEvents = HttpContext.Current.Request.Form["mandrill_events"];
    // Validate that the request is coming from the Mandrill API:
    // rebuild the signed message as webhook URL + POST key + POST value
    string url = ConfigurationManager.AppSettings["mandrillWebhookUrl"];
    string mandrillKey = ConfigurationManager.AppSettings["mandrillWebHookKey"];
    url += "mandrill_events";
    url += mandrillEvents;
    byte[] byteKey = System.Text.Encoding.ASCII.GetBytes(mandrillKey);
    byte[] byteValue = System.Text.Encoding.ASCII.GetBytes(url);
    using (HMACSHA1 myhmacsha1 = new HMACSHA1(byteKey))
    {
        byte[] hashValue = myhmacsha1.ComputeHash(byteValue);
        string generatedSignature = Convert.ToBase64String(hashValue);
        string mandrillSignature = HttpContext.Current.Request.Headers["X-Mandrill-Signature"];
        if (generatedSignature != mandrillSignature)
        {
            return Unauthorized(); // signature mismatch: not from Mandrill
        }
    }
    // Validation successful; do the updating using the event data
    return Ok();
}
Is it possible to get the static image URL from a Flickr URL via an API call or some script?
For example:
Flickr URL -> http://www.flickr.com/photos/53067560@N00/2658147888/in/set-72157606175084388/
Static image URL -> http://farm4.static.flickr.com/3221/2658147888_826edc8465.jpg
By specifying extras=url_o you get a link to the original image:
https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=YOURAPIKEY&format=json&nojsoncallback=1&text=cats&extras=url_o
For downscaled images, use the following parameters instead: url_t, url_s, url_q, url_m, url_n, url_z, url_c, url_l.
Alternatively, you can construct the URL as described:
http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
or
http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
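For instance, plugging the values from the static URL in the question into the first template (a quick sketch in Python):

# Values taken from the example static URL above: farm 4, server 3221,
# photo id 2658147888, secret 826edc8465
url = 'http://farm{farm}.staticflickr.com/{server}/{id}_{secret}.jpg'.format(
    farm=4, server=3221, id=2658147888, secret='826edc8465')
# -> http://farm4.staticflickr.com/3221/2658147888_826edc8465.jpg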
In your Flickr URL, the photo ID is 2658147888. You can use flickr.photos.getSizes to get the various sizes of the photo available and pick the URL you want, depending on the size. There are several ways to access the API, so please specify if you want more details for a particular language.
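For example, a rough Python sketch of that call against the REST endpoint; the API key is a placeholder:

import requests

resp = requests.get(
    'https://api.flickr.com/services/rest/',
    params={
        'method': 'flickr.photos.getSizes',
        'api_key': 'YOUR_API_KEY',
        'photo_id': '2658147888',
        'format': 'json',
        'nojsoncallback': 1,
    },
)
sizes = resp.json()['sizes']['size']  # list of sizes: label, width, height, source
print(sizes[-1]['source'])           # the last entry is the largest available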
You can also access the original image using the photoId (number before the first underscore)
http://flickr.com/photo.gne?id=photoId
In your case it would be:
https://www.flickr.com/photo.gne?id=2658147888
Not sure if you can get it directly through a single API call, but this link explains how the URLs for the images are constructed: link
Here's some code I wrote to retrieve metadata for a Flickr photo based on its ID.
I first defined a JavaScript object, FlickrPhoto, to hold the photo's metadata:
function FlickrPhoto(title, owner, flickrURL, imageURL) {
    this.title = title;
    this.owner = owner;
    this.flickrURL = flickrURL;
    this.imageURL = imageURL;
}
I then created a FlickrService object to hold my Flickr API key and all my AJAX calls to the RESTful API.
The getPhotoInfo function takes the photo ID as a parameter, constructs the appropriate AJAX call, and passes a FlickrPhoto object containing the photo metadata to a callback function.
function FlickrService() {
    this.flickrApiKey = "763559574f01aba248683d2c09e3f701";
    this.flickrGetInfoURL = "https://api.flickr.com/services/rest/?method=flickr.photos.getInfo&nojsoncallback=1&format=json";

    this.getPhotoInfo = function(photoId, callback) {
        var ajaxOptions = {
            type: 'GET',
            url: this.flickrGetInfoURL,
            data: { api_key: this.flickrApiKey, photo_id: photoId },
            dataType: 'json',
            success: function (data) {
                if (data.stat == "ok") {
                    var photo = data.photo;
                    var photoTitle = photo.title._content;
                    var photoOwner = photo.owner.realname;
                    var photoWebURL = photo.urls.url[0]._content;
                    var photoStaticURL = "https://farm" + photo.farm + ".staticflickr.com/" + photo.server + "/" + photo.id + "_" + photo.secret + "_b.jpg";
                    var flickrPhoto = new FlickrPhoto(photoTitle, photoOwner, photoWebURL, photoStaticURL);
                    callback(flickrPhoto);
                }
            }
        };
        $.ajax(ajaxOptions);
    }
}
You can then use the service as follows:
var photoId = "11837138576";
var flickrService = new FlickrService();
flickrService.getPhotoInfo(photoId, function(photo) {
    console.log(photo.imageURL);
    console.log(photo.owner);
});
Hope it helps.
Below is a solution that doesn't use the Flickr APIs, only standard Linux commands (I actually ran it on MS Windows with Cygwin):
Put your list of URLs in the tmp variable.
If you are downloading private photos like me, the protocol will be https and you'll need to pass the authentication cookies to wget. I logged on with a browser (Chrome) and exported the cookies file using an extension.
If you access public URLs, just remove the parameter --load-cookies $cookies.
The script downloads the photos in their original format into the local folder.
If you want just the URL of the static image, remove the last command: | xargs wget --load-cookies $cookies
Here is the script; you can use it as a starting point for your own explorations:
cookies=~/cookies.txt
root="https://www.flickr.com/photos/131469243@N02/"
tmp="https://www.flickr.com/photos/131469243@N02/29765108124/in/album-72157673986011342/
https://www.flickr.com/photos/131469243@N02/29765103724/in/album-72157673986011342/
https://www.flickr.com/photos/131469243@N02/29765102344/in/album-72157673986011342/"
while read -r url; do
    if [[ $url == http* ]] ; then
        url2=$root`echo -n $url | grep -oP '(?<=https://www.flickr.com/photos/131469243@N02/)\w+'`/sizes/o
        wget -q --load-cookies $cookies -O - $url2 | grep -io 'https://c[0-9].staticflickr.com.*_o_d.jpg' | xargs wget --load-cookies $cookies
    fi
done <<< "$tmp";