I've run into a problem calling the async function getUserContext(name, checkPersistence). As far as I can tell, it fails only when I try to get the context of an existing user; it works fine with non-existing users.
The code I use to create the client object and call the function is as follows:
// assumed import for this snippet:
const hfc = require('fabric-client');

const config = '-connection-profile-path';
// load the network connection profile, then overlay the org-specific profile
const client = hfc.loadFromConfig(hfc.getConfigSetting(`network${config}`));
client.loadFromConfig(hfc.getConfigSetting(`${this.orgId}${config}`));
// set up the state store and crypto suite from the profiles
await client.initCredentialStores();
const admin = await client.getUserContext('admin', true);
And the error is:
[2019-10-04 11:41:38.701] [ERROR] utils/registerUser - TypeError:
Cannot read property 'curve' of undefined
which, as far as I know, makes no sense. The only solution I've found is to check whether the certificates are up to date, that is, deleting the credentials folder and letting runApp.sh & setupAPIs.sh (I'm using the balance-transfer example from hyperledger/fabric-samples) recreate it with all those certificates.
I found that the only problem was that the users inside the Docker container I was using were out of sync with the ones in the state store. So if you are using Docker, make sure to keep the users synchronized.
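One way to force them back in sync is to delete the stale user from the file-based state store so the SDK re-enrolls it on the next call. A minimal sketch; the store path and user name below are assumptions based on the balance-transfer sample, so check your connection profile:

const fs = require('fs');
const path = require('path');

const storePath = './fabric-client-kv-org1'; // assumed credential store path
const userName = 'admin';                    // assumed user to refresh
const userFile = path.join(storePath, userName);
if (fs.existsSync(userFile)) {
  fs.unlinkSync(userFile); // drop the cached user so getUserContext re-creates it
}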
I use Thycotic's Secret Server and am utilizing their API to query the password field of a secret. While using the API, I am receiving an "SDK client path is invalid" error.
Once I configured the connection (you only need to do it once, according to the API documentation at https://github.com/thycotic/secret-server-python), I execute the query using:
from secret_server.sdk_client import SDK_Client
client = SDK_Client()
akey = client.commands.get_secret(1234, field='password').strip()
skey = client.commands.get_secret(4321, field='password').strip()
I expected to simply get my secrets when I print my akey and skey variables, but instead I get the error:
raise ValueError('SDK client path is invalid')
ValueError: SDK client path is invalid
What's interesting is that I have to run the configure command again to get it to work. I figure that's because the configure step is where you specify the full path to your SDK client, and you aren't asked for it again after you've done it once.
Regardless, when I do, I get a "machine is already initialized" response (as expected), and it then works. But you shouldn't have to do that. Plus, it would be an issue because the onboarding key would be stored in the code, and we can't have that.
Any suggestions?
I had a similar error, except I never got past the invalid SDK path.
I solved it by doing this:
Script A) init as outlined in the SDK documentation (run only once)
Script B)
from secret_server.sdk_client import SDK_Client

client = SDK_Client()
client.config.SDK_CONFIG["path"] = path_to_my_sdk  # point the client at the SDK install
key = client.commands.get_secret(1234, field='password')
Adding that middle line is what resolved the problem. I'll log an issue on the GitHub repo, since the init process apparently is not saving the path.
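To keep the path (and anything secret-adjacent) out of the code itself, the same fix can read the path from the environment instead. A small sketch assuming the same SDK_Client API as above; the environment variable name is my own placeholder:

import os
from secret_server.sdk_client import SDK_Client

client = SDK_Client()
# hypothetical variable holding the SDK install path, e.g. /opt/secret-server-sdk
client.config.SDK_CONFIG["path"] = os.environ["SECRET_SERVER_SDK_PATH"]
key = client.commands.get_secret(1234, field='password')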
I'm having the same error, Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Owner resource does not exist"]}. This is my scenario: when I deployed my web app to Azure and tried to get some documents from DocumentDB, it threw this error. The DocumentDB exists in Azure and contains the document that I am looking for.
The weird thing is that this works fine from my local machine (running from VS). I'm using the same settings in Azure and locally. Does anybody have an idea about this?
Thanks
"Owner resource does not exist" occurs when you have given a wrong database name.
For example, while reading a document with client.readDocument(..), where client is a DocumentClient instance, the database name given in the docLink is wrong.
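To illustrate with the Node.js SDK (the account endpoint and ids below are placeholders), every segment of the docLink has to name a resource that actually exists, with exact casing:

const documentdb = require('documentdb');

const client = new documentdb.DocumentClient(
  'https://myaccount.documents.azure.com:443/', // placeholder endpoint
  { masterKey: process.env.COSMOS_KEY }
);

// format: dbs/{databaseId}/colls/{collectionId}/docs/{documentId}
const docLink = 'dbs/MyDatabase/colls/MyCollection/docs/mydocid';
client.readDocument(docLink, (err, doc) => {
  if (err) return console.error(err); // a wrong id in any segment yields "Owner resource does not exist"
  console.log(doc);
});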
This error definitely appears to be related to reading a database/collection/document that does not exist. I got the exact same error for a database that did exist, but I typed the name in lower case; the error appears to happen regardless of your partition key.
The best solution I could come up with for now is to wrap the
var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, "documentid"));
call in a try/catch. Not very elegant, and I would much rather the response came back with some more details, but this is Microsoft for ya.
Something like the below should do the trick.
Model myDoc = null;
try
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, document));
    myDoc = (Model)(dynamic)response.Resource;
}
catch (DocumentClientException)
{
    // the read failed because the database, collection, or document does not exist
}
if (myDoc != null)
{
    // do your work here
}
That way you get a better idea of the error; then create the missing resource so you don't get the error anymore.
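If the missing resource turns out to be the database or collection itself, one option is to create them up front. A sketch using the same client and the same database/collection names as above; the "IfNotExists" calls exist in newer versions of the DocumentDB SDK:

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// ensure the database and collection exist before any reads
await client.CreateDatabaseIfNotExistsAsync(new Database { Id = database });
await client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri(database),
    new DocumentCollection { Id = collection });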
A few resources I had to go through before coming to this conclusion:
https://github.com/DamianStanger/DocumentDbDemo
Azure DocumentDB Read Document Resource Not Found
I had the same problem. I found that Visual Studio 2017 was publishing using my Release configuration instead of the Test configuration I had selected. In my case the Release configuration had a different Cosmos DB database name, one that doesn't exist, which resulted in the "Owner resource does not exist" error when I published to my Azure test server. Super frustrating, and a terrible error message.
It may also be caused by a missing attachment on a document.
This is a common scenario when you move Cosmos DB contents using the Azure Cosmos DB Data Migration tool, which moves all the documents with their complete definitions but, unfortunately, not the actual attachment content.
This results in a document that states it has an attachment, and even states the attachment link, but at that link no attachment can be found, because the tool has not moved it.
Now I wrap my code as follows:
try
{
    var attachments = client.CreateAttachmentQuery(attachmentLink, options);
    [...]
}
catch (DocumentClientException ex)
{
    throw new Exception("Cannot retrieve attachment of document", ex);
}
to have a meaningful hint of what is going on.
I ran into this because my dependency injection configuration had two instances of CosmosClient being created for different databases. Thus any code trying to query the first database was running against the second database.
My fix was to create a CosmosClientCollection class with named instances of CosmosClient.
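A minimal sketch of that idea; the class, property, and configuration key names here are all my own invention, not an established API:

using Microsoft.Azure.Cosmos;

public class CosmosClientCollection
{
    public CosmosClient OrdersClient { get; }
    public CosmosClient CustomersClient { get; }

    public CosmosClientCollection(CosmosClient ordersClient, CosmosClient customersClient)
    {
        OrdersClient = ordersClient;
        CustomersClient = customersClient;
    }
}

// registration: one clearly named client per database
services.AddSingleton(sp => new CosmosClientCollection(
    new CosmosClient(configuration["OrdersDbConnectionString"]),
    new CosmosClient(configuration["CustomersDbConnectionString"])));

Consumers then ask for the collection and pick the client by name, so a query can never silently run against the wrong database.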
I have a simple node.js application that tries to insert some data into BigQuery. It uses the provided gcloud node.js library.
The BigQuery client is created like this, according to the documentation:
// assumed imports for this snippet:
const google = require('googleapis');
const BigQuery = require('@google-cloud/bigquery');

google.auth.getApplicationDefault(function(err, authClient) {
  if (err) {
    return cb(err); // cb is the surrounding function's callback
  }
  let bq = BigQuery({
    auth: authClient,
    projectId: "my-project"
  });
  let dataset = bq.dataset("my-dataset");
  let table = dataset.table("my-table"); // note: table is only visible inside this callback
});
With that I try to insert data into BigQuery.
table.insert(someRows).then(...)
This fails, because the BigQuery client returns a 403 telling me that the authentication is missing the required scopes. The documentation tells me to use the following snippet:
if (authClient.createScopedRequired &&
authClient.createScopedRequired()) {
authClient = authClient.createScoped([
"https://www.googleapis.com/auth/bigquery",
"https://www.googleapis.com/auth/bigquery.insertdata",
"https://www.googleapis.com/auth/cloud-platform"
]);
}
This didn't work either, because the if statement never executes. I skipped the if and set the scopes every time, but the error remains.
What am I missing here? Why are the scopes always wrong regardless of the authClient configuration? Has anybody found a way to get this or a similar gcloud client library (like Datastore) working with the described authentication scheme on a Container Engine pod?
The only working solution I have found so far is to create a JSON key file and provide that to the BigQuery client, but I'd rather create the credentials on the fly than have them sitting next to the code.
Side note: the Node service works flawlessly without providing the auth option to BigQuery when running on a Compute Engine VM, because there the authentication is negotiated automatically by Google.
Baking JSON key files into the images (containers) is a bad idea (security-wise, as you said).
You should be able to add these kinds of scopes to the Kubernetes cluster during its creation (they cannot be adjusted afterwards).
Take a look at this doc "--scopes"
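For example, something along these lines at cluster-creation time (the cluster name is a placeholder, and the exact scope list should match what your pods need):

gcloud container clusters create my-cluster \
  --scopes=https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform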
I'm talking to the Nest Cloud API from Node.js using the firebase node module. I'm using an accessToken that I got from https://api.home.nest.com/oauth2/access_token, and this seems to work: my Nest user account got prompted to "accept" the request, and my app is listed in the Nest Account "Works with Nest" page, so all looks good. I use this accessToken in the call to authWithCustomToken, and that works; my Node.js application requested read/write permission (see https://developers.nest.com/products/978ea6e2-c301-4dff-8b38-f63d80757162). Reading the Nest thermostat properties from https://developer-api.nest.com/devices/thermostats/[deviceid] works too, but when I try to write to hvac_mode like this:
this.firebaseRef = new Firebase("https://developer-api.nest.com");
this.myNestThermostat = this.firebaseRef.child("devices/thermostats/" + deviceId);
this.myNestThermostat.set("{'hvac_mode': 'off'}", function (error) { ... });
and this always returns:
FIREBASE WARNING: set at /devices/thermostats/fwxNBtjaok6KZJbSXhf2azuBmGSvkcjK failed: No write permission(s) for field(s): /devices/thermostats/fwxNBtjaok6KZJbSXhf2azuBmGSvkcjK
(where the deviceId is what I see when I enumerate my devices; I only have one, so I'm pretty sure it is correct).
Any ideas?
Well, isn't that always the case: I found an answer already. It turns out that if I create a firebase reference to the property itself, like this:
var propertyRef = this.myNestThermostat.child(name);
Then the following succeeds:
propertyRef.set(value, function (error) { ... });
The firebase documentation was misleading on this, because it led me to believe I could write this:
this.myNestThermostat.set("{'hvac_mode': 'off'}", function (error) { ... });
which technically should have worked, but I guess that would mean I'd need write access on the whole of this.myNestThermostat, which I don't have. Tricky.
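Putting it together, a minimal sketch of the working write (deviceId comes from enumerating your devices, and the error handling is a placeholder):

var Firebase = require('firebase'); // the legacy firebase client used above
var firebaseRef = new Firebase("https://developer-api.nest.com");
var myNestThermostat = firebaseRef.child("devices/thermostats/" + deviceId);

// write to the single property, not the whole thermostat node
var propertyRef = myNestThermostat.child("hvac_mode");
propertyRef.set("off", function (error) {
  if (error) console.log("write failed: " + error);
});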
Anyway I'm happy because it works now, yay! Firebase + nodejs rocks!
I am relatively new to IoC containers so I apologize in advance for my ignorance.
My application is an ASP.NET 4.0 MVC app that uses the Entity Framework, with a repository layer on top of that. It is a multi-tenant application, so the connection string that is used varies by the logged-in client.
The connection string is determined by a 'key' that gets passed in as part of the route and indicates the client. This route data is only present on the first request of the user's session.
The route looks kind of like this: http://{host}/login/dev/
where 'dev' indicates we are using the dev database.
Currently the IoC container is registering all dependencies in the global.asax Application_Start event handler and I have the 'key' hardcoded as follows:
var cnString = CommonServices.GetDBConnection("dev");
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
new InjectionConstructor(cnString));
Is there a way with Unity to dynamically register the repository based on the logged in client using the route data that is supplied initially?
Note: I am not manually resolving the repositories. They are getting constructed by the container when the controllers get instantiated.
I am stumped.
Thanks!
Quick assumption: you can use the host to identify your tenant.
The following article takes a slightly different approach: http://www.agileatwork.com/bolt-on-multi-tenancy-in-asp-net-mvc-with-unity-and-nhibernate-part-ii-commingled-data/. It's using NHibernate, but the idea is usable.
Based on the above, this hacked-together code may work (I haven't tried/compiled the following; I'm not much of a Unity user, more of a Windsor person :)):
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
    new InjectionFactory(c =>
    {
        // you can also get the host via a static class:
        // HttpContext.Current.Request.Url.Host, if I remember correctly
        var context = c.Resolve<HttpContextBase>();
        var host = context.Request.Headers["Host"] ?? context.Request.Url.Host;
        var connStr = CommonServices.GetDBConnection("dev_" + host); // assumed key format
        return new RequestMgmtRecipientRepository(connStr);
    }));
Scenario 2 (I do not think this was the case):
If the client identifies the tenant (not the host, i.e. http://host1), this suggests you would already need access to a database to look up the client information; in that case, the database which holds the client information will also need enough information to identify the tenant.
The issue with scenario 2 arises around anonymous users: which tenant is being accessed?
Assuming scenario 2, the InjectionFactory approach should still work.
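For your original route-based key (which is only present on the first request), one option is to capture it once and stash it in session, then read it in the factory. A rough sketch; the route value name "key" and the session key "TenantKey" are my own placeholders:

container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
    new InjectionFactory(c =>
    {
        var http = HttpContext.Current;
        // prefer the value remembered in session; fall back to the route data
        var key = (string)http.Session["TenantKey"]
                  ?? (string)http.Request.RequestContext.RouteData.Values["key"];
        http.Session["TenantKey"] = key; // remember it for later requests in this session
        return new RequestMgmtRecipientRepository(CommonServices.GetDBConnection(key));
    }));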
hope this helps