I am currently trying to classify an image via the Microsoft API from Node.js.
The network is already trained and I can "connect" to my algorithm. I want to send a base64 string as a data URI, but then I get this error message: "Code: BadRequestImageUrl, message: Invalid image url"
The variable "img" is a base64 string (from a FHIR Observation object) and is correct (on a website, the URL works with the base64).
I also tried sending an image from Wikipedia, but then I get a different error: "NoFoundIteration / Invalid iteration"
const PredictionAPIClient = require("azure-cognitiveservices-customvision-prediction");
const predictionKey = "xxxx";
const endPoint = "https://southcentralus.api.cognitive.microsoft.com"
const projectId = "xxxxx";
const publishedName = "myMLName";
...
var img = 'iVBORw0KGgoAAAANSUhEUgAAAgAAAAJmCAYAAA...'; //base64
...
const tempUrl = { url: 'data:image/png;base64,' + img };
...
predictor.classifyImageUrl(projectId, publishedName, tempUrl)
  .then((resultJSON) => {
    console.log("RESULT ######################");
    console.log(resultJSON);
  })
  .catch((error) => {
    console.log("ERROR #####################");
    console.log(error);
  });
I should get JSON from Microsoft Azure with the results.
Have a look at the documentation of the API behind the package you are using: https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14
You can see that Classify has 2 methods:
ClassifyImage, which takes the image binary as application/octet-stream
ClassifyImageUrl, which takes a URL as input
Data URLs are not supported; you must use a classic URL (and the image must be publicly accessible: don't use a URL pointing to an endpoint that needs authentication).
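Since data URLs are rejected, one workaround is to decode the base64 yourself and call ClassifyImage with the raw bytes. A minimal sketch, assuming the same predictor client as in the question and that its classifyImage method accepts a Node.js Buffer as the image body:
// Sketch: send the decoded bytes instead of a data URL
// (assumes classifyImage accepts a Buffer for the image data)
const imageBuffer = Buffer.from(img, 'base64');
predictor.classifyImage(projectId, publishedName, imageBuffer)
  .then((resultJSON) => {
    console.log(resultJSON);
  })
  .catch((error) => {
    console.log(error);
  });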
For your iteration error, make sure you use your iteration's published name in publishedName, not your project name.
That is the value shown in the "Published as" field in the Custom Vision portal.
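For instance (the iteration name below is hypothetical):
// use the iteration's "Published as" name, not the project name
const publishedName = "Iteration1";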
In a MERN + Firebase project, I have an image data string that I want to upload and then get the access token of that file.
The image data string is of the following form:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOoAAAClCAYAAABSmmH3AAAAAXNSR0IArs4c6QAABFJJREFUeF7t1oENg0AQA8F8/6XSAx8pVWTloQIzZwvOvfd+PAQI/LXAMdS/vo9wBH4ChqoIBAIChho4kogEDFUHCAQEDDVwJBEJGKoOEAgIGGrgSCISMFQdIBAQMNTAkUQkYKg6QCAgYKiBI4lIwFB1gEBAwFADRxKRgKHqAIGAgKEGjiQiAUPVAQIBAUMNHElEAoaqAwQCAoYaOJKIBAxVBwgEBAw1cCQRCRiqDhAICBhq4EgiEjBUHSAQEDDUwJFEJGCoOkAgIGCogSOJSMBQdYBAQMBQA0cSkYCh6gCBgIChBo4kIgFD1QECAQFDDRxJRAKGqgMEAgKGGjiSiAQMVQcIBAQMNXAkEQkYqg4QCAgYauBIIhIwVB0gEBAw1MCRRCRgqDpAICBgqIEjiUjAUHWAQEDAUANHEpGAoeoAgYCAoQaOJCIBQ9UBAgEBQw0cSUQChqoDBAIChho4kogEDFUHCAQEDDVwJBEJGKoOEAgIGGrgSCISMFQdIBAQMNTAkUQkYKg6QCAgYKiBI4lIwFB1gEBAwFADRxKRgKHqAIGAgKEGjiQiAUPVAQIBAUMNHElEAoaqAwQCAoYaOJKIBAxVBwgEBAw1cCQRCRiqDhAICBhq4EgiEjBUHSAQEDDUwJFEJGCoOkAgIGCogSOJSMBQdYBAQMBQA0cSkYCh6gCBgIChBo4kIgFD1QECAQFDDRxJRAKGqgMEAgKGGjiSiAQMVQcIBAQMNXAkEQkYqg4QCAgYauBIIhIwVB0gEBAw1MCRRCRgqDpAICBgqIEjiUjAUHWAQEDAUANHEpGAoeoAgYCAoQaOJCIBQ9UBAgGB8zzPDeQUkcC0wHnf11CnK+DlCwJ+fQtXknFewBd1vgIACgK+qIUryTgvYKjzFQBQEDDUwpVknBcw1PkKACgIGGrhSjLOCxjqfAUAFAQMtXAlGecFDHW+AgAKAoZauJKM8wKGOl8BAAUBQy1cScZ5AUOdrwCAgoChFq4k47yAoc5XAEBBwFALV5JxXsBQ5ysAoCBgqIUryTgvYKjzFQBQEDDUwpVknBcw1PkKACgIGGrhSjLOCxjqfAUAFAQMtXAlGecFDHW+AgAKAoZauJKM8wKGOl8BAAUBQy1cScZ5AUOdrwCAgoChFq4k47yAoc5XAEBBwFALV5JxXsBQ5ysAoCBgqIUryTgvYKjzFQBQEDDUwpVknBcw1PkKACgIGGrhSjLOCxjqfAUAFAQMtXAlGecFDHW+AgAKAoZauJKM8wKGOl8BAAUBQy1cScZ5AUOdrwCAgoChFq4k47yAoc5XAEBBwFALV5JxXsBQ5ysAoCBgqIUryTgvYKjzFQBQEDDUwpVknBcw1PkKACgIGGrhSjLOCxjqfAUAFAQMtXAlGecFDHW+AgAKAoZauJKM8wKGOl8BAAUBQy1cScZ5AUOdrwCAgoChFq4k47yAoc5XAEBBwFALV5JxXsBQ5ysAoCDwBaT9ke70WG4vAAAAAElFTkSuQmCC
This is the reference to where the file should be uploaded:
const imageRef: StorageReference = ref(
storage,
`/issueImages/${firebaseImageId}`
);
So far I have attempted to use the 'put' function with the imageRef, and when I try Firebase's uploadBytes(), I have to upload the data as a Buffer, and even then I cannot seem to find the access token in the metadata.
To upload a data URL to Firebase, you would use storageRef.putString(url, 'DATA_URL') (legacy) or uploadString(storageRef, url, 'DATA_URL') (modern) depending on the SDK you are using.
When you upload a file to a Cloud Storage bucket, it will not be issued an access token until a client calls its version of getDownloadURL(). So to fix your issue, you would call getDownloadURL() immediately after upload.
If Node is running on a client's machine, you would use:
// legacy syntax
import * as firebase from "firebase";
// reference to file
const imageStorageRef = firebase.storage()
.ref(`/issueImages/${firebaseImageId}`);
// perform the upload
await imageStorageRef.putString(dataUrl, 'DATA_URL');
// get the download URL
const imageStorageDownloadURL = await imageStorageRef.getDownloadURL();
// modern syntax
import { getStorage, getDownloadURL, ref, uploadString } from "firebase/storage";
// reference to file
const imageStorageRef = ref(
getStorage(),
`/issueImages/${firebaseImageId}`
);
// perform the upload
await uploadString(imageStorageRef, dataUrl, 'DATA_URL');
// get the download URL
const imageStorageDownloadURL = await getDownloadURL(imageStorageRef);
If Node is running on a private server you control, you should opt to use the Firebase Admin SDK instead as it bypasses the rate limits and restrictions applied to the client SDKs.
As mentioned before, the download URLs aren't created automatically. Unfortunately for us, getDownloadURL is a feature of the client SDKs and the Admin SDK doesn't have it. So we can either let a client call getDownloadURL when it is needed or we can manually create the download URL if we want to insert it into a database.
Nico has an excellent write-up on how Firebase Storage URLs work, collating information from the Firebase Extensions GitHub and this StackOverflow thread. In summary, to create (or recreate) a download URL once the file has been uploaded, you can use the following function:
import { uuid } from "uuidv4";

// Original Credit: Nico (@nicomqh)
// https://www.sentinelstand.com/article/guide-to-firebase-storage-download-urls-tokens

// "file" is an instance of the File class from the Cloud Storage SDK
// executing this function more than once will revoke all previous tokens
async function createDownloadURL(file) {
  const downloadToken = uuid();
  await file.setMetadata({
    metadata: {
      firebaseStorageDownloadTokens: downloadToken
    }
  });
  return `https://firebasestorage.googleapis.com/v0/b/${file.bucket.name}/o/${encodeURIComponent(file.name)}?alt=media&token=${downloadToken}`;
}
This allows us to change the client-side code above into the following, so it can run using the Admin SDK:
// assuming firebase-admin is initialized already
import { getStorage } from "firebase-admin/storage";

// reference to file
const imageStorageFile = getStorage()
  .bucket()
  .file(`/issueImages/${firebaseImageId}`);

// the Admin SDK has no uploadString(), so split the data URL and upload the
// decoded bytes with the declared content type
const [prefix, base64Data] = dataUrl.split(",");
const contentType = prefix.split(":")[1].split(";")[0]; // e.g. "image/png"
await imageStorageFile.save(Buffer.from(base64Data, "base64"), { contentType });

// get the download URL
const imageStorageDownloadURL = await createDownloadURL(imageStorageFile);
In all of the above examples, a download URL is retrieved and saved to the imageStorageDownloadURL variable. You should store this value as-is in your database. However, if you instead want to store only the access token and reassemble the URL on an as-needed basis, you can extract the token from its ?token= parameter using:
const downloadToken = new URL(imageStorageDownloadURL).searchParams.get('token');
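And rebuilding the URL later from the stored token (the bucket name and object path below are placeholders for values you'd keep alongside the token):
// hypothetical values: substitute your bucket name and object path
const rebuiltURL = `https://firebasestorage.googleapis.com/v0/b/${bucketName}/o/${encodeURIComponent(filePath)}?alt=media&token=${downloadToken}`;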
Just started exploring Google Cloud Vision APIs. From their guide:
const vision = require('@google-cloud/vision');

const client = new vision.ImageAnnotatorClient();
const fileName = 'Local image file, e.g. /path/to/image.png';
const [result] = await client.textDetection(fileName);
However, I want to use a base64 representation of the binary image data, since they claim that it's possible.
I found this reference on SO:
Google Vision API Text Detection with Node.js set Language hint
Instead of imageUri, I used "content": string, as mentioned there. But the SO sample uses the const [result] = await client.batchAnnotateImages(request); method. I tried the same technique on the const [result] = await client.textDetection( method and it gave me an error.
So my question is: is it possible to use a base64-encoded string to represent an image in order to perform TEXT_DETECTION? And if so, how?
Any kind of help is highly appreciated.
You can follow the quickstart guide and, from there, edit the lines after the creation of the client as follows:
// Value of the image in base64
const img_base64 = '/9j/...';
const request = {
image: {
content: Buffer.from(img_base64, 'base64')
}
};
const [result] = await client.textDetection(request);
console.log(result.textAnnotations);
console.log(result.fullTextAnnotation);
You can take a look at the textDetection function here and read the description of the request parameter, in particular the following part:
A dictionary-like object representing the image. This should have a
single key (source, content).
If the key is content, the value should be a Buffer.
This leads to the structure used in the sample code above. By contrast, imageUri or filename have to be nested inside another object whose key is source, as shown below.
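For comparison, a sketch of the URI-based variant (the gs:// path is a placeholder):
// when referencing an image by URI, nest it under a `source` key
const uriRequest = {
  image: {
    source: { imageUri: 'gs://my-bucket/my-image.png' } // placeholder path
  }
};
const [uriResult] = await client.textDetection(uriRequest);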
The content field needs to be a Buffer.
You are using the Node.js client library. The library uses the gRPC API internally, and the gRPC API expects a bytes type in the content field.
The JSON API, however, expects a base64 string.
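For illustration, a minimal sketch of calling the JSON (REST) API directly, where content is the plain base64 string (the endpoint and TEXT_DETECTION feature type come from the Vision REST docs; the API-key query parameter is an assumption about your auth setup):
// REST variant: content is a base64 string, not a Buffer (Node 18+ global fetch)
const body = {
  requests: [{
    image: { content: img_base64 },
    features: [{ type: 'TEXT_DETECTION' }]
  }]
};
const res = await fetch(`https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body)
});
const json = await res.json();
console.log(json.responses[0].textAnnotations);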
References
https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#image
https://googleapis.dev/nodejs/vision/latest/v1.ImageAnnotatorClient.html#textDetection
I created a bucket on S3 and added an HTML file. After this, I created a CloudFront key pair using my root user and added a CloudFront distribution for that bucket. I tried accessing the object using that distribution and it worked; then I restricted access to the bucket using a behavior and selected "self".
Finally, I generated a signed URL from Node.js and tested it using Postman.
The problem is that I always get AccessDenied.
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
Here is my code.
const cfsign = require("aws-cloudfront-sign");
var signingParams = {
keypairId: process.env.PUBLIC_KEY,
privateKeyPath: "./aws/Y3PA.pem",
expireTime: (new Date().getTime() + 999999999)
};
// Generating a signed URL
const signedUrl = () => {
console.log("url created " + process.env.PUBLIC_KEY);
return cfsign.getSignedUrl(
"xxxx.cloudfront.net/test.html",
signingParams
);
}
The scheme is part of the URL that is required as input to the signature algorithm, so your error is likely to be here:
cfsign.getSignedUrl("xxxx.cloudfront.net/...
Instead of that, you need this:
cfsign.getSignedUrl("https://xxxx.cloudfront.net/...
I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
// Generate a URL that allows temporary access to download your file.
var request = require('request');
var config = {
action: 'read',
expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
if (err) {
console.error(err);
return;
}
// The file is now available to read from the URL.
});
This creates an Url that starts with https://storage.googleapis.com/my-bucket/
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see that in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) you can pass a cname option, which transforms the url to replace https://storage.googleapis.com/my-bucket/ to my bucket CDN.
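For reference, passing that option looks something like this (the CDN domain is a placeholder):
var config = {
  action: 'read',
  expires: '03-17-2025',
  cname: 'https://cdn.example.com' // placeholder: the CDN domain in front of the bucket
};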
HOWEVER, when I copy the resulting URL, the service account (or the resulting URL) doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket, but I still get no access.
Also, from the docs, the CDN signed URL seems a lot different from the one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in: https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
For anyone interested in the node code for that signing:
var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime() / 1000) + 600; // ten minutes from now, in seconds

var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');

// Decode the URL-safe base64 encoded key
var decoded_key = URLSafeBase64.decode(key);

// Build the URL to sign
var urlToSign = url
  + (url.indexOf('?') > -1 ? "&" : "?")
  + "Expires=" + expiration
  + "&KeyName=" + key_name;

// Sign the URL using the key and URL-safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);

// Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
The Cloud CDN content delivery network works with HTTP(S) Load Balancing to deliver content to your users. Are you using an HTTP(S) load balancer to deliver content to your users?
You can see the attached documentation [1] on using Google Cloud CDN with HTTP(S) load balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you run the curl command and send the output with the error code for further analysis?
Could you confirm that your configuration meets the cacheability requirements? Not all HTTP responses are cacheable; Google Cloud CDN caches only those responses that satisfy specific conditions [3]. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permission, refer to this GCP documentation [4]
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API.
From what Google documents here, the answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL was not working: the CDN endpoint kept complaining about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this library (jsSHA) pretty cool: it generates an HMAC-SHA1 hash value exactly matching gcloud's, and it has a neat API. I'm commenting here so that others with the same struggle can benefit. This is the final code I used to sign a Cloud CDN URL:
import jsSHA from 'jssha';

// `domain`, `path`, `daySeconds`, `keyName` and `signKey` are supplied elsewhere
const url = `https://{domain}/{path}`;
const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;

// use jssha, feeding it the base64-encoded signing key as the HMAC key
const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
shaObj.update(extendedUrl);

// assumed helper (not in the original post): convert the base64 signature
// to the URL-safe alphabet that Cloud CDN expects
const safeSign = (b64) => b64.replace(/\+/g, '-').replace(/\//g, '_');
const signature = safeSign(shaObj.getHMAC('B64'));

return `${extendedUrl}&Signature=${signature}`;
working great!
I followed this great article by Gaurav Mantri in order to upload files using HTML5/Javascript directly into blob storage.
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/
However, I am finding that during the upload this portion of his code fails with a 403 error.
The funny thing is, this happens randomly. Sometimes the upload actually works and everything completes successfully, but the majority of the time it fails with the 403 error.
One thing to note: I am hoping CORS support will be added to Azure soon, but for the time being I am using Chrome (with the chrome.exe --disable-web-security option) to work around the issue.
PUT
https://mystorage.blob.core.windows.net/asset-38569007-3316-4350…Giv17ye4bocVWDbA/EQ+riNiG3wEGrFucbd1BKI9E=&comp=block&blockid=YmxvY2stMA==
403 (Server failed to authenticate the request. Make sure the value of
Authorization header is formed correctly including the signature.)
$.ajax({
  url: uri,
  type: "PUT",
  data: requestData,
  processData: false,
  beforeSend: function (xhr) {
    xhr.setRequestHeader('x-ms-blob-type', 'BlockBlob');
    xhr.setRequestHeader('Content-Length', requestData.length);
  },
  success: function (data, status) {
    console.log(data);
    console.log(status);
    bytesUploaded += requestData.length;
    var percentComplete = ((parseFloat(bytesUploaded) / parseFloat(selectedFile.size)) * 100).toFixed(2);
    $("#fileUploadProgress").text(percentComplete + " %");
    uploadFileInBlocks();
  },
  error: function (xhr, desc, err) {
    console.log(desc);
    console.log(err);
  }
});
I have put a 30-second delay after creating the asset/locator/file in Azure before actually starting the upload, in order to give the locator time to propagate.
Any suggestion to what I could be missing?
Many thanks to Gaurav for pointing me in the direction of the issue.
It turns out that I was making JSON calls to the server, which would create the assets/locators/policies and then return the upload URI back.
However, my upload URI was of type Uri, and when JSON serialized it, it didn't encode it properly.
After changing my URI object (on the server) to a string (and calling uploaduri = (new UriBuilder(theuri)).ToString(); ), the URI returned to the web client was properly encoded and I no longer got the 403 errors.
So, as a heads-up to others: if you get this same issue, you may want to look at the encoding of your upload URI.
Gaurav here's the code I use to create the empty asset (w/ locator and file):
/// <summary>
/// Creates an empty asset on Azure and prepares it to upload
/// </summary>
public FileModel Create(FileModel file)
{
    // Update the file model with a new file/asset id
    file.FileId = Guid.NewGuid().ToString();

    // Create the new asset
    var createdAsset = this.Context.Assets.Create(file.AssetName.ToString(), AssetCreationOptions.None);

    // Create the file inside the asset and set its size
    var createdFile = createdAsset.AssetFiles.Create(file.Filename);
    createdFile.ContentFileSize = file.Size;

    // Create a policy to allow uploading to this asset
    var writePolicy = this.Context.AccessPolicies.Create("Policy For Copying", TimeSpan.FromDays(365 * 10), AccessPermissions.Read | AccessPermissions.Write | AccessPermissions.List);

    // Get the upload locator
    var destinationLocator = this.Context.Locators.CreateSasLocator(createdAsset, writePolicy);

    // Get the SAS URI and save it to the file model
    var uri = new UriBuilder(new Uri(destinationLocator.Path));
    uri.Path += "/" + file.Filename;
    file.UploadUri = uri.Uri;

    // Return the updated file
    return file;
}