I am trying to run the Vorto Dashboard on a Raspberry Pi to visualize my Bosch IoT "things" data.
In order to run the Vorto Dashboard, I installed npm and Node.js and created the config.json file.
I get the error below whenever I try to run the dashboard using the command sudo vorto-dashboard config.json, even though I have already added the OAuth2 client credentials:
No credentials given, can not get things
Could not get the token with given credentials. - StatusCodeError: 400 -
{"error":"unauthorized_client","error_description":"INVALID_CREDENTIALS:
Invalid client credentials"}
I am currently contributing to the Vorto project as an intern at Bosch. Due to recent changes in the Vorto Dashboard, we merged the functionality of the previous dashboard with a coexisting, updated UI that provides more advanced ways to visualize the existing devices.
As the uploaded state was work in progress, we temporarily disabled the config.json mechanism and removed the existing references from the documentation. Apparently the reference in the tutorial you found was missed, sorry for that!
Today I deployed a new version, 0.5.0, of the vorto-dashboard, which should work as usual. You are now able to work with either process.env.[...] variables or a config.json file. Thank you Mena for the quick response!
Feel free to let me know if you need any further help or have additional feedback.
TL;DR
To resolve your issue, store your OAuth credentials as environment variables.
E.g. on Debian and the like, export BOSCH_CLIENT_ID=... etc., then start the dashboard in the same terminal, as sketched below.
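A minimal sketch, assuming the variable names quoted from the README further down (fill in your own values):

# Export the credentials, then launch the dashboard from the same shell
export BOSCH_CLIENT_ID=<YOUR_CLIENT_ID>
export BOSCH_CLIENT_SECRET=<YOUR_CLIENT_SECRET>
export BOSCH_SCOPE=<YOUR_SCOPE>
sudo -E vorto-dashboard    # -E makes sudo keep the exported variables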
Context
I was about to ask the same question, as I got the same error message no matter how I referenced the config.json file (relative path, absolute path, no reference, etc.).
For clarification, the tutorial pointing to a config.json resource for storing OAuth credentials is here.
Quoting:
While the dependencies are being installed, create the config.json file and insert the client_id, secret and scope from your already created OAuth2 client. The content of the file has to look like this:
{
"client_id": "<YOUR_CLIENT_ID>",
"client_secret": "<YOUR_CLIENT_SECRET",
"scope": "<YOUR_SCOPE>",
"intervalMS": 10000
}
The reference to the config.json file has been removed from the README.md resource in the vorto-dashboard module of vorto-examples.
The latest README.md suggests providing the OAuth credentials through environment variables:
You can provide your OAuth2 credentials through environment variables.
The three environment variables you have to provide are:
BOSCH_CLIENT_ID
BOSCH_CLIENT_SECRET
BOSCH_SCOPE
[...]
Looking at the source, I can only find an explicit reference to a config.json in the start script entry of package_for_deployment.json (and nothing around the source seems to consume, say, argv[2] for that matter).
The AuthToken.js resource in charge of handling OAuth credentials only seems to reference environment variables, through process.env.[...] lookups.
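For illustration, the pattern looks roughly like this (a hypothetical sketch, not the actual AuthToken.js source; the variable names are the ones from the README above):

// Hypothetical sketch of the process.env pattern, not the real AuthToken.js
var clientId = process.env.BOSCH_CLIENT_ID;
var clientSecret = process.env.BOSCH_CLIENT_SECRET;
var scope = process.env.BOSCH_SCOPE;

if (!clientId || !clientSecret || !scope) {
  // Matches the symptom from the question: no credentials, no things
  throw new Error('No credentials given, can not get things');
}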
Elaboration
This is only speculation at the time of writing, but I suspect the config.json mechanism may have been abandoned to strengthen security, i.e. to avoid storing OAuth credentials permanently in a file.
If that much is true, then the tutorial page should probably be amended with the latest instructions from the README.md.
Related
I am trying to work with the BIM 360 Issue Editor created by Petr and available on GitHub: https://github.com/petrbroz/bim360-issue-editor/tree/develop
I have added all the dependencies, etc., but seem to be stuck on the configuration.
I am testing on localhost and I am getting an invalid URI error. What would be the correct configuration values in the launch.json file for
"HOST_URL": "http://localhost:3000", "SERVER_SESSION_SECRET", and "CLI_CONFIG_PASSWORD"?
Also, SENDGRID_API_KEY is required and throws an error on the console; when I add the key from SendGrid in config.js, the error goes away. Is that correct?
Please suggest. Thanks
Here are more details about the env. variables (a launch.json sketch follows the list):
HOST_URL is just the host/port the app is listening on (for example, http://localhost:3000)
This value is used to build the callback URL for the 3-legged OAuth workflow; for example, if the host URL is http://localhost:3000, the callback URL will be http://localhost:3000/auth/callback
Note that the same callback URL must be configured for your Forge app on https://forge.autodesk.com/myapps
SERVER_SESSION_SECRET is an arbitrary string that will be used to encrypt/decrypt browser cookies
CLI_CONFIG_PASSWORD is only needed when you want to use the command-line utility that is part of the sample code; in that case, the configuration for the CLI utility will be zipped into a password-protected *.zip file using this env. variable as the password
SENDGRID_API_KEY is also optional and only needed if you want the app to send email notifications to users who triggered the Excel export
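For a local run in VS Code, a minimal launch.json sketch could look like the following; the program path and placeholder values are assumptions, and any Forge client ID/secret variables the app itself needs are omitted here:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch bim360-issue-editor",
      "program": "${workspaceFolder}/server.js",
      "env": {
        "HOST_URL": "http://localhost:3000",
        "SERVER_SESSION_SECRET": "any-long-random-string",
        "CLI_CONFIG_PASSWORD": "only-needed-for-the-cli-utility",
        "SENDGRID_API_KEY": "optional-sendgrid-key"
      }
    }
  ]
}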
For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
read_env_key || read_key_file || handle_missing_key
end
That seems to say the order in which values are read is:
ENV["SECRET_KEY_BASE"]
Rails.application.credentials.secret_key_base
Raise an error
I use Heroku for my hosting and have an ENV["SECRET_KEY_BASE"] env variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts (AWS S3 buckets, Stripe accounts, etc.) in development. The flat-file format of credentials.yml.enc seems to assume developers only need to access these third-party APIs in production. Is there an accepted way to handle environment-specific credentials in Rails yet?
I read through the comment threads on DHH's original PR as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation so I'm not certain if it's the standard or if it's going to go away sometime soon.
I'm trying to run a number of API calls using Dredd and API Blueprint to test a site. I would like to run the tests on CircleCI, as there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
override:
- dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN should be replaced by the actual values set in CircleCI's environment variables. I have also tried access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"], and access_token=$ACCESS_TOKEN. None of these is replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to YAML files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written in Node.js, so I don't think the Ruby/Rails help will be useful here. If I am missing anything in the question, don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, it isn't the YAML parser (probably, unless it's a custom module) that injects them. Skimming the Dredd docs, I don't see any references to environment variables or parameters, so it may be worth creating an issue on the project and starting a discussion with the developers to see whether this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution in your case is to set the environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step in which the YAML configuration is generated. To do this, wrap the YAML in a bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
# Render the Dredd configuration (dredd.yml, the default file the
# dredd command loads) with the token values substituted
cat <<EOF > dredd.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created, to avoid leaking your credentials.
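For instance, assuming the script above is saved as render_dredd_config.sh (the file name is just an example), the circle.yml from the question could run it as a pre step:

test:
  pre:
    - bash render_dredd_config.sh
  override:
    - dredd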
A better way to work with headers is to use hook files that set the headers before each request. As you are using Node.js, try reading the tokens from Node environment variables:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // Read the tokens from the environment variables set in CircleCI
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
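If Dredd doesn't pick the hooks up automatically, point it at the file, e.g. dredd --hookfiles=./hooks.js on the command line, or an equivalent hookfiles entry in dredd.yml (the file name hooks.js is just an example).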
I'm trying out gun.js. I have it installed as a Node.js project, I have configured the Amazon S3 bucket through dotenv, and I have tried adding a data.json file, but I still can't get gun.js to save the file locally or to the S3 bucket.
I know it's early days for gun, but I get the feeling I'm missing something obvious.
I'm expecting to find a .json file in the local file system and/or in the S3 bucket, but I get neither.
require('dotenv').config();
var Gun = require('gun');
var gun = Gun({
file: 'data.json', // local testing and development
s3: {
key: process.env.AWS_KEY, // AWS Access Key
secret: process.env.AWS_SECRET, // AWS Secret Token
bucket: process.env.AWS_BUCKET // The bucket you want to save into
}
});
gun.put({ hello: 'world' }).key('my/first/data');
@bill Just noticed this now, sorry for the late answer. Thanks to @paul-w for notifying me of this and for his response earlier today.
This question and answer assume you are running a version EARLIER than v0.4.x!
If you are in NodeJS and are getting the error “You have no persistence layer to save to”, it means the default storage drivers (S3, file.js) didn't get installed or were deactivated, which is unusual, as this happens automatically.
Try installing gun (again?) via npm install gun in your local NodeJS project directory, not a git clone or a copy&paste.
I can only guess, given the context you describe, that you might have copied/moved gun (like the single gun.js file) into your project. The browser will work with just the single file, but NodeJS needs more: it needs the S3/file.js modules, which are included when gun is installed with npm or properly git cloned.
Also unlikely (since your code doesn't show this): if you happen to pass something like Gun({wire: {put: null, get: null}}) (this is bad), it would intentionally break the persistence drivers.
If you are in the browser and getting the error (and assuming you're not overwriting the persistence drivers as in the previous paragraph), it could be because of some weird situation, like using an old version of IE or a browser that doesn't have JSON support. Again, all these things are unlikely, but I just want to be comprehensive.
Note: the above applies to the question in your title. However, your actual question doesn't ask about the error; it asks about not seeing data in data.json or in S3. Answering that below.
On that point, @paul-w is more on track. If you are using S3, then the file.js module (data.json) automatically deactivates itself. If you are using the file.js module (data.json), then S3 does not get activated. As @paul-w mentioned, v0.4.x will make it easy to have multiple storage engines simultaneously. However, you should see your data in at least one or the other, unless you are getting the "no persistence layer" error, in which case you won't see your data anywhere because there isn't any persistence! But again, the default persistence layers are included with gun by default (unless installation was incorrect or you explicitly overwrote them, both unusual things).
I hope this answers your question. Sorry I didn't see it till now. Please let me know if this works, and also join the conversation at https://gitter.im/amark/gun . Thank you for helping start the stackoverflow questions! We need more of these!
I think Mark is going to answer this more officially, but the quick answer is that in gun.js 0.3 (current) there is a single gun server peer or storage target, and when you run gun as a server (e.g. from Node.js rather than a browser), S3 is preferred if S3 credentials are specified. But gun is also saving your data changes in browser memory or localStorage (up to the browser limit of 5MB), with S3 there for more permanent storage.
So in the example above, I think the problem is that the file entry will only be used if there is a problem saving changes to S3, and that's why you don't see the new data going there. Maybe try putting an error in the S3 credentials (e.g. add an 'x' for now) and see if it starts using the file path instead.
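For example, to test the file-storage path on its own, you could simply leave out the s3 block entirely (a sketch based on the code from the question):

require('dotenv').config();
var Gun = require('gun');

// With no s3 config, gun 0.3 should fall back to the file storage driver
var gun = Gun({ file: 'data.json' });

gun.put({ hello: 'world' }).key('my/first/data');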
In gun.js 0.4 there are plans to make use of all peers specified in the constructor or dynamically, but that feature isn't here yet.
(And I probably butchered that answer, but hopefully Mark can correct any inaccuracies in this. I'm new to gun.js but had the same question.)
To play with the demo-sessions app, I've got the OAuth2 app mounted on /oauth2, as required.
In the ArangoDB/Foxx docs, the OAuth2 endpoints seem to be defined as strings (i.e. https://github.com/arangodb-foxx/util-oauth2).
But when I do that with the correct URLs and try to play with OAuth, I get an error:
...\oauth2\APP\manifest.json\": attribute child \"authEndpoint\" fails because [\"authEndpoint\" must be an object] (was \"[object Object]\").]","...
OAuth endpoint definitions are apparently expected to be objects, not strings.
So what is the correct configuration for Foxx OAuth2?
Thanks for the help.
I can't reproduce your problem, but the OAuth2 app has been updated for ArangoDB 2.7. You can still install older versions of the OAuth2 app from the "install from GitHub" dialog, though.
I understand my mistake now. In the code of the OAuth2 2.0 release, the manifest just references the export.js file. In the previous release (1.2), a providers.js file was supplied and referenced in the manifest. In that previous release it was therefore possible to use different providers (which is what I want), as described in the 1.2 setup.js:
var providers = db._collection(providersName);
I just fetched the providers.js and setup.js files from the 1.2 GitHub tag, adapted them to my configuration, and that works.
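For reference, the pattern around that line in the 1.2 setup.js looks roughly like this (paraphrased from memory, assuming the legacy Foxx applicationContext API; not the verbatim file):

var db = require('org/arangodb').db;

// Derive the app-specific collection name and create the collection if missing
var providersName = applicationContext.collectionName('providers');
if (db._collection(providersName) === null) {
  db._create(providersName);
}
var providers = db._collection(providersName);
// providers can then hold one document per configured OAuth2 provider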