So here is what I got: I created a signer certificate and a signer key and converted both to .pem files. They work, and I am able to sign the pass from the terminal on my Mac to create a .pkpass. Since the certificates work, I need to add this to my app and edit some of the fields in the pass, so I decided to use https://github.com/walletpass/pass-js to set up certificate signing on my backend. I implemented all of the necessary documentation steps, and before long I could create what I think is a pass, because it prints in our console logs, as seen in the code below.
Here is where the problem starts. I am trying to return the pass data to the front end, as seen below. It does not cast as Data or NSData, no matter which variation from the documentation I try on the front end. So what is my goal? I have generated the pass template and it returns some sort of pass data, but I am stuck trying to cast it as Data on the front end so I can turn it into a .pkpass that users can open in the app. My coworker and I have tried everything we can think of, but we are at a dead end.
If you have any advice please leave it below. Please see all the attached screenshots and code below as well.
Node.js API call
const { Template } = require("@walletpass/pass-js");

exports.createTicket = functions.https.onCall(async (data, context) => {
  const key = "-----BEGIN CERTIFICATE-----MIIF/jCCBOagaeYE=-----END CERTIFICATE-----";

  const template = new Template("eventTicket", {
    passTypeIdentifier: "pass.com.passllc.eventCard",
    teamIdentifier: "ABCDEFGHI",
    organizationName: "Scenes",
    backgroundColor: "red",
    sharingProhibited: true
  });
  await template.images.add("icon", "./models/icon.png");
  await template.images.add("logo", "./models/logo.png");
  template.setCertificate(key);
  template.setPrivateKey("-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFDj0I=-----END ENCRYPTED PRIVATE KEY-----", "passphrase");

  const pass = template.createPass({
    serialNumber: "123456",
    description: "20% off"
  });
  console.log(pass);

  return {
    status: '200',
    type: "application/vnd.apple.pkpass",
    body: await pass.asBuffer(),
  };
});
Front-end code
functions.httpsCallable("createTicket").call(["test": "dummytext"]) { (result, error) in
    if let data = result?.data as? [String: Any] {
        let jsonData = try? JSONSerialization.data(withJSONObject: data)
        do {
            var passError: NSError?
            let newPass = try PKPass(data: jsonData as! Data, error: &passError)
            print("pkpassage: \(newPass)")
            let addController = PKAddPassesViewController(pass: newPass)
            addController.delegate = self
            self.present(addController, animated: true)
        } catch {
        }
    }
    if (error as NSError?) != nil {
        print("\(error?.localizedDescription) lower error")
    }
}
What is currently output
Front-end console (Xcode):
[screenshot: error output]
Backend console (server logs), just a small snippet of the whole log for an individual call:
[screenshot: beginning of the server log]
[screenshot: end of the server log]
I also tried both of the following cases, by the way, and they still could not be converted into something I could cast as a PKPass:
[screenshot: creating the file for the API]
I would also like to add that when I print the returned pass in the front-end Xcode console as String: Any, it prints something like [234434, 45549, 3828].
As you can see, it looks like a pass is being generated, because the console is logging it; I just cannot get it converted into data that I can turn into a PKPass. I really appreciate any help, thank you!
Related
I'm attempting to refactor the "Node.JS PowerBI App Owns Data for Customers w/ Service Principal" code example (found HERE).
My objective is to import the data for the "config.json" from a table in my database and insert the "workspaceId" and "reportId" values from my database into the "getEmbedInfo()" function (inside the "embedConfigServices.js" file). Reason being, I want to use different configurations based on user attributes. I am using Auth0 to login users on the frontend, and I am sending the user metadata to the backend so that I can filter the database query by the user's company name.
I am able to console.log the config data, but I am having difficulty figuring out how to insert those results into the "getEmbedInfo()" function.
It feels like I'm making a simple syntax error somewhere, but I am stuck. Here's a sample of my code:
//---- Code snippet from "embedConfigServices.js" file ----//
async function getEmbedInfo() {
  try {
    const url = ;
    const set_config = async function () {
      let response = await axios.get(url);
      const config = response.data;
      console.log(config);
    };
    set_config();

    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );

    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return {
      status: err.status,
      error: err.statusText,
    };
  }
}
This is the error I am receiving on the frontend:
"Cannot read property 'get' of undefined"
Any help would be much appreciated. Thanks in advance.
Carlos
The error is caused by fetching the wrong URL. The problem is with the configuration for the service principal: you will need to provide the reportId and workspaceId for the SPA, and also make sure you have added the service principal to the workspace and followed all the steps from the documentation below for service-principal authentication.
References:
https://learn.microsoft.com/power-bi/developer/embedded/embed-service-principal
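A related structural point about the snippet in the question: `config` is declared inside `set_config()`, so it is neither in scope nor yet fetched when `getEmbedInfo()` reads `config.workspaceId`. A sketch of one way to restructure it, with the HTTP call and the Power BI helper abstracted behind parameters (`fetchConfig` and `getEmbedParamsForSingleReport` here are hypothetical stand-ins for the axios call and the sample's helper):

```javascript
// Sketch: await the config fetch and keep the result in the same scope
// that uses it, instead of a nested function's scope.
async function getEmbedInfo(fetchConfig, getEmbedParamsForSingleReport) {
  try {
    // The fetched config is now available to the rest of the function.
    const config = await fetchConfig();

    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );

    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return { status: err.status, error: err.statusText };
  }
}
```

The reported error ("Cannot read property 'get' of undefined") would also be consistent with `axios` not being imported in that file, which is worth checking first.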
I've been implementing stripe into my app with typescript and I have the following function:
exports.deleteStripeCustomer = functions.https.onCall(async (data, context) => {
  const deleted = await stripe.customers.del(
    "I need to add the customers ID here"
  );
});
In that string where I need to add the customerID, I would like to add the customer ID from my swift code where I'm already listening for that ID and add it to the function when it's called.
In swift this would be done like this:
func sayHello(name: String) {
    print("hello \(name)")
}
and then when the function is called the id is passed in from somewhere else like this:
sayHello(name: usersName)
any idea how this is done in typescript?
Thanks in advance, any help is appreciated!
In the TypeScript function, you would simply get it from the data parameter.
exports.deleteStripeCustomer = functions.https.onCall((data, context) => {
  const userId = data.userId
});
And in the Swift call, you would pass it like so:
let payload = [
    "userId": "u456"
]
SomeAPI.httpsCallable("deleteStripeCustomer").call(payload) { (_, error) in
    if let error = error {
        print(error)
    }
}
I've never worked with Stripe and I don't know how their client-side API is configured (or server-side for that matter) but I assume it's like other TypeScript-Swift inter-operations I've worked with in the past and this is how it's done there. If you link me to their documentation about this API I can help further but generally, since you asked generally, this is how it's done.
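Putting the two halves together, here is a sketch of the server side with the Stripe call wired to the incoming payload. The handler is written as a factory over a Stripe-like client so the wiring is visible; `stripe` itself and the exact shape of its response are assumptions based on the question, not verified against Stripe's SDK:

```javascript
// Sketch (assumption: stripeClient is an initialized Stripe client).
// The handler must be async for `await` to be legal inside it.
function makeDeleteStripeCustomer(stripeClient) {
  // Returns a callable-style handler; `data` carries the payload sent
  // from Swift, so the customer id arrives as data.userId.
  return async (data, context) => {
    const deleted = await stripeClient.customers.del(data.userId);
    return { id: deleted.id, deleted: deleted.deleted };
  };
}

// Hypothetical wiring in the Firebase function:
// exports.deleteStripeCustomer =
//     functions.https.onCall(makeDeleteStripeCustomer(stripe));
```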
I am trying to write a React app that grabs a frame from the webcam and passes it to the Azure Face SDK (documentation) to detect faces in the image and get attributes of those faces - in this case, emotions and head pose.
I have gotten a modified version of the quickstart example code here working, which makes a call to the detectWithUrl() method. However, the image that I have in my code is a bitmap, so I thought I would try calling detectWithStream() instead. The documentation for this method says it needs to be passed something of type msRest.HttpRequestBody - I found some documentation for this type, which looks like it wants to be a Blob, string, ArrayBuffer, or ArrayBufferView. The problem is, I don't really understand what those are or how I might get from a bitmap image to an HttpRequestBody of that type. I have worked with HTTP requests before, but I don't quite understand why one is being passed to this method, or how to make it.
I have found some similar examples and answers to what I am trying to do, like this one. Unfortunately they are all either in a different language, or they are making calls to the Face API instead of using the SDK.
Edit: I had forgotten to bind the detectFaces() method before, and so I was originally getting a different error related to that. Now that I have fixed that problem, I'm getting the following error:
Uncaught (in promise) Error: image must be a string, Blob, ArrayBuffer, ArrayBufferView, or a function returning NodeJS.ReadableStream
Inside constructor():
this.detectFaces = this.detectFaces.bind(this);

const msRest = require("@azure/ms-rest-js");
const Face = require("@azure/cognitiveservices-face");
const key = <key>;
const endpoint = <endpoint>;
const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
const client = new Face.FaceClient(credentials, endpoint);

this.state = {
  client: client
}

// get video
const constraints = {
  video: true
}
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  let videoTrack = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(videoTrack);
  imageCapture.grabFrame().then((imageBitmap) => {
    // detect faces (arrow function keeps `this` bound to the component)
    this.detectFaces(imageBitmap);
  });
})
The detectFaces() method:
async detectFaces(imageBitmap) {
  const detectedFaces = await this.state.client.face.detectWithStream(
    imageBitmap,
    {
      returnFaceAttributes: ["Emotion", "HeadPose"],
      detectionModel: "detection_01"
    }
  );
  console.log(detectedFaces.length + " face(s) detected");
}
Can anyone help me understand what to pass to the detectWithStream() method, or maybe help me understand which method would be better to use instead to detect faces from a webcam image?
I figured it out, thanks to this page under the header "Image to blob"! Here is the code that I added before making the call to detectFaces():
// convert image frame into blob
let canvas = document.createElement('canvas');
canvas.width = imageBitmap.width;
canvas.height = imageBitmap.height;
let context = canvas.getContext('2d');
context.drawImage(imageBitmap, 0, 0);
canvas.toBlob((blob) => {
  // detect faces
  this.detectFaces(blob);
})
This code converts the bitmap image to a Blob, then passes the Blob to detectFaces(). I also changed detectFaces() to accept blob instead of imageBitmap, like this, and then everything worked:
async detectFaces(blob) {
  const detectedFaces = await this.state.client.face.detectWithStream(
    blob,
    {
      returnFaceAttributes: ["Emotion", "HeadPose"],
      detectionModel: "detection_01"
    }
  );
  ...
}
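Since the question notes that `HttpRequestBody` also accepts an ArrayBuffer, another option (a sketch, not from the original answer) is to convert the Blob before the SDK call. `Blob.arrayBuffer()` is a standard method in modern browsers and is also available in Node 18+:

```javascript
// Sketch: convert the Blob from canvas.toBlob() into an ArrayBuffer,
// which HttpRequestBody also accepts.
async function blobToArrayBuffer(blob) {
  return await blob.arrayBuffer();
}

// Hypothetical usage before the detect call:
// const buffer = await blobToArrayBuffer(blob);
// await this.state.client.face.detectWithStream(buffer, { ... });
```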
The DocumentDB APIs for working with stored procedures take an optional RequestOptions argument that has, among others, the property EnableScriptLogging.
The help page for it is useless. The description for it is:
EnableScriptLogging is used to enable/disable logging in JavaScript stored procedures.
Mkay... so how do I log something? (assuming that's console.log(...))
And more importantly, how do I read the logs generated by stored procedures?
I was expecting the response of requests to stored procedures would somehow contain the logs, but can't find anything.
Yes, this is for console.log from the script. It must be enabled by the client (it is turned off by default, so console.log in the script is ignored); essentially this sets an HTTP header on the request. In the script you can do something like this:
function myScript() {
  console.log("This is trace log");
}
The log will be in the response header (x-ms-documentdb-script-log-results), and is also accessible from the SDK.
If you use C# SDK, you can use it like this:
var options = new RequestOptions { EnableScriptLogging = true };
var response = await client.ExecuteStoredProcedureAsync<string>(sprocUri, options);
Console.WriteLine(response.ScriptLog);
If you use node.js SDK:
var lib = require("documentdb");
var Client = lib.DocumentClient;
var client = new Client("https://xxxxxxx.documents.azure.com:443/", { masterKey: "xxxxxxxxxxxx" });
var sprocLink = ...;
client.executeStoredProcedure(sprocLink, "input params", { partitionKey: {}, enableScriptLogging: true }, function(err, res, responseHeaders) {
  console.log(responseHeaders[lib.Constants.HttpHeaders.ScriptLogResults]);
});
Current limitations:
only enabled for stored procedures
\n is not supported (should be fixed soon)
not supported when script throws unhandled exception (should be fixed soon)
I am using the Forge data management API to access my A360 files and aim to translate them into the SVF format so that I can view them in my viewer. So far I have been able to reach the desired item using the ForgeDataManagement.ItemsApi, but I don't know what to do with the item to upload it to the bucket in my application.
From the documentation it seems like uploadObject is the way to go (https://github.com/Autodesk-Forge/forge.oss-js/blob/master/docs/ObjectsApi.md#uploadObject), but I don't know exactly how to make this function work.
var dmClient = ForgeDataManagement.ApiClient.instance;
var dmOAuth = dmClient.authentications['oauth2_access_code'];
dmOAuth.accessToken = tokenSession.getTokenInternal();

var itemsApi = new ForgeDataManagement.ItemsApi();
fileLocation = decodeURIComponent(fileLocation);
var params = fileLocation.split('/');
var projectId = params[params.length - 3];
var resourceId = params[params.length - 1];

itemsApi.getItemVersions(projectId, resourceId)
  .then(function(itemVersions) {
    if (itemVersions == null || itemVersions.data.length == 0) return;

    // Use the latest version of the item (file).
    var item = itemVersions.data[0];
    var contentLength = item.attributes.storageSize;
    var body = new ForgeOSS.InputStream();
    // var body = item; // Using the item directly does not seem to work.
    // var stream = fs.createReadStream(...) // Should I create a stream object like suggested in the documentation?
    objectsAPI.uploadObject(ossBucketKey, ossObjectName, contentLength, body, {}, function(err, data, response) {
      if (err) {
        console.error(err);
      } else {
        console.log('API called successfully. Returned data: ' + data);
        // To be continued...
      }
    });
  });
I hope someone can help me out!
My current data:
ossObjectName = "https://developer.api.autodesk.com/data/v1/projects/"myProject"/items/urn:"myFile".dwfx";
ossBucketKey = "some random string based on my username and id";
Regards,
torjuss
When using the Data Management API, you can work with either:
2-legged OAuth (client_credentials), and access OSS buckets and objects, or
3-legged OAuth (authorization_code), and access a user's Hubs, Projects, Folders, Items, and Revisions.
When using 3-legged, you access someone's content on A360 or BIM 360, and these files are automatically translated by the system, so you do not need to translate them again, nor transfer them to a 2-legged application's bucket. The only thing you need to do is get the manifest of the Item (or one of its revisions) and use the URN to see it in the viewer.
Checkout an example here: https://developer.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-versions-version_id-GET/
you'll see something like
Examples: Successful Retrieval of a Specific Version (200)
curl -X GET -H "Authorization: Bearer kEnG562yz5bhE9igXf2YTcZ2bu0z" "https://developer.api.autodesk.com/data/v1/projects/a.45637/items/urn%3Aadsk.wipprod%3Adm.lineage%3AhC6k4hndRWaeIVhIjvHu8w"
{
"data": {
"relationships": {
"derivatives": {
"meta": {
"link": {
"href": "/modelderivative/v2/designdata/dXJuOmFkc2sud2lwcWE6ZnMuZmlsZTp2Zi50X3hodWwwYVFkbWhhN2FBaVBuXzlnP3ZlcnNpb249MQ/manifest"
}
},
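To illustrate how a response like the one above gets you into the viewer: the derivative URN is the encoded segment of that `href`. A sketch of a helper (hypothetical, not part of the Forge SDK) that pulls it out of a version response shaped like the JSON above:

```javascript
// Hypothetical helper: extract the derivative URN from a version response.
// The URN is the path segment between /designdata/ and /manifest in the
// derivatives link.
function getDerivativeUrn(versionResponse) {
  const href = versionResponse.data.relationships.derivatives.meta.link.href;
  const match = href.match(/designdata\/(.+)\/manifest/);
  return match ? match[1] : null;
}

// Example with the href from the response above:
const sample = {
  data: {
    relationships: {
      derivatives: {
        meta: {
          link: {
            href: "/modelderivative/v2/designdata/dXJuOmFkc2sud2lwcWE6ZnMuZmlsZTp2Zi50X3hodWwwYVFkbWhhN2FBaVBuXzlnP3ZlcnNpb249MQ/manifest"
          }
        }
      }
    }
  }
};
console.log(getDerivativeUrn(sample)); // the encoded design URN
```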
Now, to answer the other question about upload, I have an example available here:
https://github.com/Autodesk-Forge/forge.commandline-nodejs/blob/master/forge-cb.js#L295. I copied the relevant code here for everyone to see how to use it:
fs.stat(file, function (err, stats) {
  var size = stats.size;
  var readStream = fs.createReadStream(file);
  ossObjects.uploadObject(bucketKey, fileKey, size, readStream, {}, function (error, data, response) {
    ...
  });
});
Just remember that ossObjects is for 2-legged, whereas Items and Versions are 3-legged.
We figured out how to get things working after some support from Adam Nagy. To put it simply, we had to do everything via 3-legged OAuth, since every operation involves a document from an A360 account. This includes accessing and showing the file structure, translating a document to SVF, starting the viewer, and loading the document into the viewer.
Also, we were targeting the wrong id when trying to translate the document. This post shows how easily it can be done now; thanks to Adam for the info!