From Cloud Code, how can I query for Installations matching a set of Users?

I'm using a standalone Parse server, trying to send a push notification to multiple Installations.
Parse Server won't let me query the Installation collection from Cloud Code, returning the following error:
Error handling request: ParseError {
code: 119,
message: 'Clients aren\'t allowed to perform the find operation on the installation collection.' } code=119, message=Clients aren't allowed to perform the find operation on the installation collection.
The query in Cloud Code looks like this:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.containedIn('user', users);
pushQuery.find({ ...
What's the proper way to get a list of Installations for a set of Users and send pushes to all of them?
I've also tried to get Cloud Code to use the master key by calling Parse.Cloud.useMasterKey(); immediately before the query. It has no effect, and the master key is not included in the query request headers.

This is because Parse.Cloud.useMasterKey() has been deprecated since Parse Server version 2.3.0. You now need to pass useMasterKey: true in your query.
E.g.:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.containedIn('user', users);
pushQuery.find({ useMasterKey: true }).then(function(results) {
  // results contains the Installation objects matching the users
});
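From there, rather than fetching the installations yourself, you can hand the same query to Parse.Push.send; a minimal sketch, assuming the standard Parse JS SDK push API (the alert text is a placeholder):
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.containedIn('user', users);

Parse.Push.send({
  where: pushQuery,                          // targets every Installation matching the query
  data: { alert: 'Hello from Cloud Code' }   // placeholder payload
}, {
  useMasterKey: true                         // pushes also require the master key
}).then(function() {
  // push successfully queued
}, function(error) {
  console.error('Push failed: ' + error.message);
});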

Related

Updates not possible with Prisma and CosmosDB with MongoDB API

I was thrilled to learn about Prisma recently and quickly replaced mongoose in my latest project.
Integration was easy, and connections to CosmosDB are working fine using the connection strings.
My issue: it seems I can't update any data, as CosmosDB is throwing a raw error:
Invalid `prisma.addresses.update()` invocation:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: RawError { code: "unknown", message: "Command failed (BadValue): Expected type object but found array.)" } })
I'm running the latest MongoDB server version that is available on Azure (4.0) and the update is really basic:
await this.prisma.addresses.update({
  where: {
    id: 'something',
  },
  data: {
    city: 'Something'
  }
})
Querying and creating documents hasn't caused any issues.
The parameters you need to pass to a method, whether it is update, create, or delete, depend on your schema. Check schema.prisma under your prisma folder, look for the address model, and check what the update operation expects.
If you want to understand more about different use cases, you can refer to this doc.
Also check this blog to get insights about Node.js and Prisma.
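For illustration, here is a minimal sketch of how the generated client's parameters differ per operation, assuming an addresses model whose id field is marked @id or @unique in schema.prisma (field values beyond those in the question are placeholders):
// create takes only `data`; the whole document must be a plain object
const created = await prisma.addresses.create({
  data: { id: 'something', city: 'Something' },
});

// update and delete additionally take a `where` object targeting a unique field
const updated = await prisma.addresses.update({
  where: { id: 'something' },
  data: { city: 'Elsewhere' },
});

const deleted = await prisma.addresses.delete({
  where: { id: 'something' },
});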

SAP Cloud SDK for javascript using the destination

I have followed the tutorial and built a basic CF-based Node.js application to display all BusinessPartners from my S/4HANA on-premise destination.
function getAllBusinessPartners(): Promise<BusinessPartner[]> {
  return BusinessPartner.requestBuilder()
    .getAll()
    .execute({
      destinationName: 'MockServer'
    });
}
The destination is configured with the virtual host from the Cloud Connector.
But after deploying to Cloud Foundry, I get the following error for the GET request:
{"message":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","level":"warn","custom_fields":{"package":"core","messageContext":"destination-accessor"},"logger":"sap-cloud-sdk-logger","timestamp":"2020-03-09T18:15:41.856Z","msg":"Service of type destination is not supported! Consider providing your own transformation function when calling destinationForServiceBinding, like this:\n destinationServiceForBinding(yourServiceName, { serviceBindingToDestination: yourTransformationFunction });","written_ts":1583777741856,"written_at":"2020-03-09T18:15:41.856Z"}
The application is already bound to the Destination service as well.
Can someone help me here with what went wrong? Or is the approach to using destinations different in the new version of the Cloud SDK?
After a lot of attempts, I have made this work.
My observations:
The connectivity service is also required to be bound when using an on-premise S/4 backend.
There were no errors in the log; I made a small modification to the code to use async/await:
async function getAllBusinessPartners(): Promise<BusinessPartner[]> {
  return await BusinessPartner.requestBuilder()
    .getAll()
    .execute({
      destinationName: 'MockServer'
    });
}
After this modification, when I hit the GET request, it gave me the following error:
"Failed to get business partners - get request to http://s4h-scc-basic:500/sap/opu/odata/sap/API_BUSINESS_PARTNER/sap/opu/odata/sap/API_BUSINESS_PARTNER failed!"
I could see that the path suffix after http://domain:port appears twice: once from what I configured in the destination, and once added automatically by the VDM.
Ideally, this error should have been thrown even before adding async/await.
After removing the suffix from the destination, it started to work.
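For reference, one way to make such failures surface inside the function (rather than escaping to the caller unlogged) is to await the request and wrap it; a minimal sketch, assuming the generated BusinessPartner entity comes from the @sap/cloud-sdk-vdm-business-partner-service package of that SDK generation:
import { BusinessPartner } from '@sap/cloud-sdk-vdm-business-partner-service';

async function getAllBusinessPartners(): Promise<BusinessPartner[]> {
  try {
    // awaiting here keeps the rejection inside this function
    return await BusinessPartner.requestBuilder()
      .getAll()
      .execute({ destinationName: 'MockServer' });
  } catch (err) {
    // re-throw with context so the failure shows up clearly in the logs
    throw new Error(`Failed to get business partners - ${err.message}`);
  }
}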
If your request really does error, what you posted here from your logs is most likely not the reason for the failure. We are aware that this message is confusing and will improve it (https://github.com/SAP/cloud-sdk/pull/32).
Can you check whether there are more errors in your logs? Based on the code you posted and the setup you described, this should work. Do you have a binding to the XSUAA service?

Trying to insert data into BigQuery fails from container engine pod

I have a simple node.js application that tries to insert some data into BigQuery. It uses the provided gcloud node.js library.
The BigQuery client is created like this, according to the documentation:
google.auth.getApplicationDefault(function(err, authClient) {
  if (err) {
    return cb(err);
  }
  let bq = BigQuery({
    auth: authClient,
    projectId: "my-project"
  });
  let dataset = bq.dataset("my-dataset");
  let table = dataset.table("my-table");
});
With that I try to insert data into BigQuery.
table.insert(someRows).then(...)
This fails, because the BigQuery client returns a 403 telling me that the authentication is missing the required scopes. The documentation tells me to use the following snippet:
if (authClient.createScopedRequired &&
    authClient.createScopedRequired()) {
  authClient = authClient.createScoped([
    "https://www.googleapis.com/auth/bigquery",
    "https://www.googleapis.com/auth/bigquery.insertdata",
    "https://www.googleapis.com/auth/cloud-platform"
  ]);
}
This didn't work either, because the if statement never executes. I skipped the if and set the scopes every time, but the error remains.
What am I missing here? Why are the scopes always wrong regardless of the authClient configuration? Has anybody found a way to get this or a similar gcloud client library (like Datastore) working with the described authentication scheme on a Container Engine pod?
The only working solution I found so far is to create a JSON keyfile and provide that to the BigQuery client, but I'd rather create the credentials on the fly than have them next to the code.
Side note: The node service works flawlessly without providing the auth option to BigQuery when running on a Compute Engine VM, because there the authentication is negotiated automatically by Google.
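For completeness, the keyfile-based fallback mentioned above looks roughly like this; a hedged sketch assuming the @google-cloud/bigquery package, with a placeholder project ID and key path:
const { BigQuery } = require('@google-cloud/bigquery');

// keyFilename points at a service-account JSON key; the path is a placeholder
const bq = new BigQuery({
  projectId: 'my-project',
  keyFilename: '/secrets/bigquery-keyfile.json'
});

bq.dataset('my-dataset')
  .table('my-table')
  .insert(someRows)
  .then(function() { console.log('rows inserted'); })
  .catch(function(err) { console.error('insert failed', err); });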
Baking JSON keyfiles into the images (containers) is a bad idea, security-wise, as you said.
You should be able to add these kinds of scopes to the Kubernetes cluster during its creation (they cannot be adjusted afterwards).
Take a look at this doc on "--scopes".

MongoDB + node: not authorized to execute command (sometimes works, sometimes doesn't)

I'm facing a problem with my MongoDB environment - the setup is as follows:
My Node app provides a restify API which handles user registration: it looks up whether a user exists in a collection based on their mail and, if not, inserts them (note: the insert uses bcrypt to hash the passwords, so it is probably a bit slower). It uses restify and the Mongoose ORM.
A second benchmark script (also written in node, running on the same machine) accesses this restify API using HTTP PUT.
I'm starting around 20-30 of these requests in the benchmark (with random data), and only some of the API requests correctly insert the new users. For the others, MongoDB produces errors similar to the following:
not authorized on ... to execute command { find: "users", filter: { mail: "rroouksl#hddngrau.de" } }
not authorized on ... to execute command { insert: "users", documents: [ { ... } ], ordered: false, writeConcern: { w: 1 } }
Some other users get inserted perfectly fine. Especially with a low number of simultaneous requests (1-5), no problems occur. Shouldn't Mongo be able to handle this "low" number of requests? Is it a problem because it's running on the same machine? Doesn't the user I created in Mongo for this project have enough transactions per second allowed?
It turned out that Mongo was still using the "old" storage engine and not WiredTiger. Since my queries included updating records, the old engine performed collection-level locks, which means the errors were solely due to read-write locks.
I migrated to WiredTiger, which performs document-level locking, and since then the database handles many parallel requests without these errors (although under heavy load they sometimes appear again; I guess this is part of Mongo being NoSQL).
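To verify which engine a deployment is running on, the serverStatus command reports it; a minimal sketch using the native MongoDB Node driver (the connection string is a placeholder):
const { MongoClient } = require('mongodb');

MongoClient.connect('mongodb://localhost:27017', function(err, client) {
  if (err) throw err;
  client.db('admin').command({ serverStatus: 1 }, function(err, status) {
    if (err) throw err;
    console.log(status.storageEngine.name); // e.g. "wiredTiger" or "mmapv1"
    client.close();
  });
});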
You can try:
Db.authenticate(user, password, function(err, res) {
  // callback
});
Also see the source.

Global error filter in Node.js

I have been working on Node for the last 2-3 months on a project. Now I want to handle errors from a single point in Node. For example: I have several API functions in my project. Many of them take _id as an API input. I need to parse this id using the Mongoose ObjectId before using it in a query. Now if the format of _id is not valid, it will throw a casting error. It could be handled with Mongoose's ObjectId isValid check. But my purpose is this: at any place where it is not handled in code, I want to catch the error, log it to my log file, and send a common message like 'error occurs' to the UI. I want to add a common error handler for all the APIs that does the logging and error handling, like the error-handler filter we use across an application in .NET MVC.
I have tried using domain, but domain.on('error', function(err) {}); is not working. I put my API function calls in domain.run();.
If anybody has any suggestions for me, please let me know.
Take a look at the domain module. If your app is powered by Express, you can use the express-domain-middleware package.
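For the single-choke-point part of the question, Express's standard four-argument error middleware already provides a common handler; a minimal sketch, assuming an Express app and a hypothetical Mongoose model named Item (the route, model, and message are placeholders):
const express = require('express');
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/test'); // placeholder connection string
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

const app = express();

app.get('/api/items/:id', function(req, res, next) {
  Item.findById(req.params.id)                // an invalid _id rejects with a CastError
    .then(function(item) { res.json(item); })
    .catch(next);                             // forward the error to the common handler
});

// registered last: every error passed to next(err) lands here
app.use(function(err, req, res, next) {
  console.error(err);                         // stand-in for a log-file logger
  res.status(500).json({ message: 'error occurs' });
});

app.listen(3000);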
