I have a simple node.js application that tries to insert some data into BigQuery. It uses the provided gcloud node.js library.
The BigQuery client is created like this, according to the documentation:
google.auth.getApplicationDefault(function(err, authClient) {
  if (err) {
    return cb(err);
  }
  let bq = BigQuery({
    auth: authClient,
    projectId: "my-project"
  });
  let dataset = bq.dataset("my-dataset");
  let table = dataset.table("my-table");
});
With that, I try to insert data into BigQuery:
table.insert(someRows).then(...)
This fails, because the BigQuery client returns a 403 telling me that the authentication is missing the required scopes. The documentation tells me to use the following snippet:
if (authClient.createScopedRequired &&
    authClient.createScopedRequired()) {
  authClient = authClient.createScoped([
    "https://www.googleapis.com/auth/bigquery",
    "https://www.googleapis.com/auth/bigquery.insertdata",
    "https://www.googleapis.com/auth/cloud-platform"
  ]);
}
This didn't work either, because the if statement never executes. I skipped the if and set the scopes every time, but the error remains.
What am I missing here? Why are the scopes always wrong regardless of the authClient configuration? Has anybody found a way to get this or a similar gcloud client library (like Datastore) working with the described authentication scheme on a Container Engine pod?
The only working solution I've found so far is to create a JSON key file and provide that to the BigQuery client, but I'd rather create the credentials on the fly than keep them next to the code.
Side note: the node service works flawlessly without providing the auth option to BigQuery when running on a Compute Engine VM, because there the authentication is negotiated automatically by Google.
Baking JSON key files into the images (containers) is a bad idea, security-wise, as you said.
You should be able to add these kinds of scopes to the Kubernetes cluster during its creation (they cannot be adjusted afterwards).
Take a look at the "--scopes" flag in this doc.
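For example, when creating the cluster through the gcloud CLI you can pass the scopes explicitly (a sketch; the cluster name is a placeholder and the exact scope list depends on what your pods need):

gcloud container clusters create my-cluster \
    --scopes=https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform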
Related
Some gcloud commands don't have API or client library support (for example - this one).
In these cases, is there a simple way to run gcloud commands from a nodejs application?
The gcloud endpoints services commands for IAM policy are difficult for me to check quickly but, IIRC (and if this is similar to the gcloud projects commands for IAM policy), it's not that there's no API, but that there's no single API call.
What you can always do with gcloud is append --log-http to see what happens beneath the covers. With IAM policy mutations (off the top of my head), you get the policy, mutate it, and then apply the changes back using the etag the GET gave you. The backend checks the policy's state (the etag is like a hash of the policy) and, if it's unchanged, you can make the change.
If this is what's happening here, you should be able to repro the functionality in NodeJS using the existing (!) APIs and, if you're using API Client Libraries (rather than Cloud Client Libraries), the functionality will be available. A sketch of the pattern follows.
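Here is what that read-modify-write pattern can look like with the googleapis (API Client) library, shown for the analogous projects IAM case against the Cloud Resource Manager v1 API. This is a hedged sketch, not a drop-in solution; projectId, role and member are placeholders:

const { google } = require('googleapis');

async function addIamBinding(projectId, role, member) {
  // Application Default Credentials with the broad cloud-platform scope.
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const crm = google.cloudresourcemanager({ version: 'v1', auth });

  // 1. GET the current policy; the response carries an etag.
  const { data: policy } = await crm.projects.getIamPolicy({
    resource: projectId,
    requestBody: {}
  });

  // 2. Mutate the policy locally.
  policy.bindings = policy.bindings || [];
  policy.bindings.push({ role, members: [member] });

  // 3. SET it back; the etag inside `policy` lets the backend reject the
  //    write if someone else changed the policy in the meantime.
  await crm.projects.setIamPolicy({
    resource: projectId,
    requestBody: { policy }
  });
}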
Apart from the complexity involved in shelling out to gcloud, you'll also need to authenticate it, and you'll need to (un)marshal data to and from the shell and manage errors. Ergo, it's messy and generally discouraged.
In node.js we have the child_process module. As the name suggests, child_process provides functions like spawn and exec that create a new child process which executes a shell command as an independent process. spawn is a function that takes the main command as its first argument and the other command-line options as an array in the second parameter.
So, with respect to the link that you shared, you might end up writing something like this:
const { spawn } = require("child_process");

const listening = spawn('gcloud', ['endpoints', 'services', 'blah', '--option', 'someValue']);

listening.stdout.on("data", data => {
  console.log(`stdout: ${data}`);
});

listening.stderr.on("data", data => {
  console.log(`stderr: ${data}`);
});

listening.on('error', (error) => {
  console.log(`error: ${error.message}`);
});
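If you need the command's output as a value rather than just logging it (for example, to parse JSON), a common pattern is to buffer stdout and settle a Promise on the close event. A sketch, with the gcloud arguments as placeholders:

const { spawn } = require('child_process');

function runGcloud(args) {
  return new Promise((resolve, reject) => {
    const child = spawn('gcloud', args);
    let out = '';
    let err = '';
    child.stdout.on('data', chunk => { out += chunk; });
    child.stderr.on('data', chunk => { err += chunk; });
    child.on('error', reject); // the process could not be started
    child.on('close', code => {
      if (code === 0) {
        resolve(out);
      } else {
        reject(new Error(err || `gcloud exited with code ${code}`));
      }
    });
  });
}

// e.g.: runGcloud(['endpoints', 'services', 'list', '--format=json'])
//         .then(JSON.parse).then(console.log);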
References:
https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options
I'm not sure this directly answers your question but there is an npm package that can help you run unix commands from within the app.
Check out shell.js
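A minimal usage sketch (the package is published as shelljs on npm; exec runs synchronously by default and returns the exit code and output):

const shell = require('shelljs');

const result = shell.exec('gcloud endpoints services list', { silent: true });
if (result.code === 0) {
  console.log(result.stdout);
}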
I've got some problems while calling the function <async> getUserContext(name, checkPersistence). So far I know it fails only when I try to get the context of an existing user, but it does work with non-existing users.
The code I use to create the client object and call the function is as follows:
const config = '-connection-profile-path';
const client = hfc.loadFromConfig(hfc.getConfigSetting(`network${config}`));
client.loadFromConfig(hfc.getConfigSetting(`${this.orgId}${config}`));
await client.initCredentialStores();
const admin = await client.getUserContext('admin', true);
And the error is:
[2019-10-04 11:41:38.701] [ERROR] utils/registerUser - TypeError: Cannot read property 'curve' of undefined
which, as far as I know, makes no sense. The only solution I've found is to check whether the certificates are up to date, that is, deleting them and letting runApp.sh & setupAPIs.sh (I'm using the balance-transfer example from hyperledger/fabric-samples) create the folder again with all those certificates.
I found that the only problem was that the users inside the Docker container I was using were out of sync with the ones in the State Store. So if you are using Docker, make sure to keep the users synchronized; a sketch of one way to reset them follows.
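For example (a sketch only: the actual state store location comes from the client.credentialStore section of your connection profile, so check yours before deleting anything):

rm -rf /tmp/fabric-client-kv-org1   # example path, not necessarily yours
./runApp.sh
./setupAPIs.sh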
I am experiencing high latency when fetching users from Active Directory in nodejs.
I am using the 'activedirectory' node library from npm: https://www.npmjs.com/package/activedirectory
The number of users is relatively small, about 1000...
The query takes between 2 and 4 seconds.
The default query provided by the function findUsers of the 'activedirectory' library is (&(|(objectClass=user)(objectClass=person))(!(objectClass=computer))(!(objectClass=group))).
I added an additional filter on the sAMAccountName field: sAMAccountName=*somePartOfName*
In any case, with or without my addition the query time is still slow.
I don't have the full configuration of the Active Directory server, but it seems like other platforms on the same network work faster with it, although they use other frameworks, in Java and .NET.
What could be the reason for this high latency?
Thanks
// ad is configured only with user, password, base DN and url
function findUsers(partOfsAMAccountName) {
  const additionalQuery = `sAMAccountName=*${partOfsAMAccountName}*`;
  return new Promise(resolve => {
    ad.findUsers(additionalQuery, false, (error, users) => {
      if (error) {
        console.error('%j', error);
      }
      resolve(users || []);
    });
  });
}
What I am trying to do is to create an autocomplete mechanism based on the usernames in the Active Directory.
On the same network we have a Bitbucket server connected to the same Active Directory server, and from the Bitbucket client the autocomplete is much faster, about 1 second on the client side.
I have already searched for Bitbucket's source code but didn't find it openly available.
I have no idea about node.js and have never programmed in it. But from the question I feel that the default query provided for findUsers() might be the culprit.
As per the Microsoft docs, the LDAP filter
(&(objectClass=user)(objectCategory=person))
is sufficient for determining the users.
In the official documentation for the function findUsers(opts, callback), I can see the argument opts described as "Optional parameters to extend or override functionality".
So, I think you can override the LDAP filter query using the opts argument in findUsers to keep the above advised LDAP filter, and additionally put your sAMAccountName condition in the search query. Please explore how to override the opts argument, as I can't help you with that; a sketch of what it might look like is below.
I am hopeful that the result would be comparatively faster after doing the search this way.
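For instance, something along these lines (a hedged sketch; per the module's README, findUsers accepts an opts object whose filter property replaces the default LDAP query, but verify this against the version you use; partOfsAMAccountName is the variable from your function above):

const opts = {
  filter: `(&(objectClass=user)(objectCategory=person)(sAMAccountName=*${partOfsAMAccountName}*))`
};

ad.findUsers(opts, false, (error, users) => {
  if (error) {
    console.error('%j', error);
    return;
  }
  console.log(users.length);
});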
I'm using a standalone Parse server, trying to send a push notification to multiple Installations.
Parse Server won't let me query the Installation collection from Cloud Code, returning the following error:
Error handling request: ParseError {
  code: 119,
  message: 'Clients aren\'t allowed to perform the find operation on the installation collection.' }
code=119, message=Clients aren't allowed to perform the find operation on the installation collection.
The query in Cloud Code looks like this:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.containedIn('user', users);
pushQuery.find({ ...
What's the proper way to get a list of Installations for a set of Users and send pushes to all of them?
I've also tried to get Cloud Code to use the masterKey by calling Parse.Cloud.useMasterKey(); immediately before the query, but it has no effect and the master key is not included in the query request headers.
This is because Parse.Cloud.useMasterKey() has been deprecated since Parse Server version 2.3.0. You now need to pass useMasterKey: true in your query.
E.g.:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.containedIn('user', users);
pushQuery.find({ useMasterKey: true }).then(function(results) {
  // results is the array of matching Installation objects
});
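To actually send the push to those installations you can hand the query straight to Parse.Push.send, which takes the same master-key option (a sketch; the alert text is a placeholder):

Parse.Push.send({
  where: pushQuery,
  data: { alert: 'Hello!' }
}, { useMasterKey: true }).then(function() {
  // push submitted successfully
}, function(error) {
  console.error(error);
});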
I have cloned the Concept Insights demo from Bluemix and made some minor changes to use my own corpus. It runs OK locally, but when I deploy it to Bluemix I get an authorization error when it tries to access my corpus. I'm certain that the error is a result of the early call in app.js to bluemix.getServiceCreds('concept_insights'), which apparently replaces my service credentials with some that must be stored in the environment on Bluemix.
Can someone explain the purpose of this function, and the proper approach to what I am trying to do? I could probably just delete the call to that function, but I'm afraid that I may be missing part of the larger picture if I do. Is this a way to keep my credentials out of the code base? If so, how do I make it work?
bluemix.getServiceCreds('concept_insights') gets the concept_insights service credentials from the VCAP_SERVICES environment variable that is created by Bluemix (see VCAP_SERVICES).
You probably want to use the credentials from the environment instead of hardcoding them in your app.js file.
When your app runs locally you hardcode the credentials in app.js, but when it runs in Bluemix those credentials are overwritten. If you don't want this to happen, remove the bluemix.getServiceCreds('concept_insights') call:
var credentials = {
  url: 'https://gateway.watsonplatform.net/concept-insights/api',
  username: '<username>',
  password: '<password>',
  version: 'v2'
};
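Alternatively, you can keep the call and fall back to local values when no service is bound (a sketch, assuming getServiceCreds returns a falsy value in that case):

var credentials = bluemix.getServiceCreds('concept_insights') || {
  url: 'https://gateway.watsonplatform.net/concept-insights/api',
  username: '<username>',
  password: '<password>',
  version: 'v2'
};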
When creating a service make sure you use the Standard plan.
If you use the Beta plan you will have to use https://gateway.watsonplatform.net/concept-insights/api as url.