GCloud Vision API Permission Denied on Second Request - node.js

I've gone through all the setup steps to make calls to the Google Vision API from a Node.js App. Link to the guide: https://cloud.google.com/vision/docs/libraries#setting_up_authentication
I'm using the ImageAnnotatorClient from the @google-cloud/vision package to make some text detections.
At first, it looked like everything was set up correctly, but for some reason it only allows me to make one request.
Further requests give me the following error:
Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the vision.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/
If I restart the Node app it again allows me to do one request to the Vision API but then the subsequent requests keep failing.
Here's my code which is almost the same as in the examples:
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

const detectText = async (imgPath) => {
  // console.log(imgPath);
  const [result] = await client.textDetection(imgPath);
  const detections = result.textAnnotations;
  return detections;
};
It is worth mentioning that this works every time when I run the Node app on my local machine. The problem only happens on my Ubuntu droplet from DigitalOcean.
Again, I set everything up as it is in the guides. Created a Service Account, downloaded the Service Account Key JSON file, set up the environment variable like this:
export GOOGLE_APPLICATION_CREDENTIALS="PATH-TO-JSON-FILE"
I'm also setting the environment variable in the .bashrc file.
What could I be missing? Before setting everything up from scratch and going through the whole process again, I thought it would be good to ask for some help.

So I found the problem. In my case, it was a problem with PM2 not passing the system env variables to the Node app.
So I had everything set up correctly auth-wise but the Node app wasn't seeing the GOOGLE_APPLICATION_CREDENTIALS env var.
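An easy way to confirm this is to log the variable from inside the Node process at startup:

console.log(process.env.GOOGLE_APPLICATION_CREDENTIALS); // was undefined under the old PM2 process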
I deleted the PM2 process, created a new one and now it works.
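In case it helps others: one way to avoid depending on the shell environment entirely is a PM2 ecosystem file that sets the variable explicitly. A minimal sketch, where the app name, script, and key path are placeholders:

// ecosystem.config.js -- pass the credentials path to the app explicitly
// instead of relying on PM2 inheriting it from the shell.
module.exports = {
  apps: [{
    name: 'vision-app',       // placeholder name
    script: './index.js',     // placeholder entry point
    env: {
      GOOGLE_APPLICATION_CREDENTIALS: '/home/user/keys/service-account.json'
    }
  }]
};

Start it with pm2 start ecosystem.config.js. PM2 also has pm2 restart <name> --update-env for refreshing the environment of an already-running process.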

Related

How to use node package dotenv to access local development environment variables in Red Hat OpenShift application?

I'm revisiting a project which hasn't been updated for a while.
In production/online environment, it uses environment variables defined at:
openshift online console > applications > deployments > my node app > environment
In development/offline environment, it uses environment variables defined at:
./src/js/my_modules/local_settings (this file is ignored by .gitignore)
The code looks something like:
// check which environment we are in
if (process.env.MONGODB_USER) {
  var online_status = "online";
} else {
  var online_status = "offline";
}

// if online, use environment variables defined in Red Hat OpenShift
if (online_status === 'online') {
  var site_title = process.env.SITE_TITLE;
  var site_description = process.env.SITE_DESCRIPTION;
  // etc
}
// if offline, get settings from a local file
else if (online_status === 'offline') {
  var local_settings = require('./src/js/my_modules/local_settings');
  var site_title = local_settings.SITE_TITLE;
  var site_description = local_settings.SITE_DESCRIPTION;
  // etc
}
I would like to install the dotenv package in my local project repo via:
npm install dotenv
So that I can:
Have my local settings in a .env file in the root of my project (ignored in .gitignore)
Be able to use process.env.SOME_VARIABLE rather than local_settings.SOME_VARIABLE
Get rid of some if/else blocks as both scenarios would point to process.env.SOME_VARIABLE
I'm a bit confused as to how this would affect the online environment.
Seeing as both production/online and development/offline environments would use:
var some_variable = process.env.SOME_VARIABLE_HERE
would the application automatically know to:
Look at the local .env file when in development?
Look at the Red Hat environment variables when in production?
And would adding the required instantiation at the beginning of the server-side file:
require('dotenv').config()
somehow make Red Hat OpenShift freak out (as it seems to already have its own 'things' in place to resolve references to process.env.SOME_VARIABLE_HERE to the relevant values defined in the OpenShift console)?
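For reference, the dotenv setup I'd be adding is just this (per the dotenv docs, existing environment variables are not overwritten by default, and a missing .env file is simply ignored, so in production the OpenShift-defined values would win):

// load .env into process.env; variables already set by the
// platform (e.g. OpenShift) are NOT overridden by default
require('dotenv').config();

var site_title = process.env.SITE_TITLE;
var site_description = process.env.SITE_DESCRIPTION;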
Keeping a file per environment (.dev, .staging, .prod) in the source code repository, or placing it manually on the server (if those are in .gitignore), worked for a long time, but nowadays it goes against DevOps practice.
The clean way is to use environment variables, managed remotely and obtained at the start of your application.
How does it work?
Basically, your apps no longer read or need a file (.env, .properties, etc.) with variables. They load them from a remote HTTP service.
Not intrusive
In this approach, you don't need language-specific libraries (Node.js in your case). You just need to prepare your app to use environment variables. Your application doesn't care where the variables come from; they just need to be available at the operating-system level.
To achieve that, you just need to download the variables using a simple shell script or a very basic HTTP invocation in your favorite language.
After that, once your app starts, the variables are ready to use at the most basic level:
var site_title = process.env.SITE_TITLE;
This approach is not intrusive because your app doesn't need anything complex like a library or an algorithm in some programming language. It just needs the environment variables.
Intrusive
Same as the previous alternative, but instead of reading the variables directly from the environment, you use or create a class/module in your language that offers you the variables you need:
var site_title = VariablesManager.getProperty("SITE_TITLE");
At startup, the VariablesManager must have consumed the variables from a remote service (HTTP) and stored them, so it can offer them to whoever needs them through the getProperty method.
This VariablesManager also usually has a feature called hot-reload, which updates the variables at intervals by consuming the remote variables manager again. With this, if your application is running in production with real users and some variable needs to be updated, you just change it in the variables manager; your app automatically loads the new values, without a restart or touching your app.
This approach is intrusive because you need to load an advanced library in some programming language, or create one.
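As a rough illustration, a minimal VariablesManager in Node.js could look like the following; the endpoint URL, the flat JSON response shape, and the refresh interval are all assumptions:

// variables-manager.js -- a hypothetical sketch of the "intrusive" approach
const https = require('https');

const store = {};

function fetchVariables() {
  return new Promise((resolve, reject) => {
    https.get('https://variables.example.com/api/acme-web-staging', (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

const VariablesManager = {
  // call once at startup, before anything reads configuration
  async init() {
    Object.assign(store, await fetchVariables());
    // hot-reload: refresh the variables every 60 seconds
    setInterval(async () => {
      Object.assign(store, await fetchVariables());
    }, 60 * 1000).unref();
  },
  getProperty(name) {
    return store[name];
  },
};

module.exports = VariablesManager;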
Devops
Your application just needs a few properties or settings related to consuming the remote variables. For example, the variables of acme-web-staging:
remote_variables_manager = https://variables.com/api
application_id = acme-web-staging
secure_key = *****
You could hide the secure key and parametrize the application_id using environment variables (created in the platform console):
remote_variables_manager = https://variables.com/api
application_id = ${application_id}
secure_key = ${remote_variables_manager_key}
Or, if you want one variables manager per environment:
staging
remote_variables_manager = https://variables-staging.com/api
application_id = acme-web
secure_key = *****
production
remote_variables_manager = https://variables-production.com/api
application_id = acme-web
secure_key = *****
Variables manager
This concept was introduced many years ago. I used it with Java. It consists of a web application with features like:
secure login
create applications
create variables of an application
encrypt sensitive values
publish HTTP endpoints to download or query the variables by application
Here is a list of some ready-to-use alternatives:
Configurator
A Node.js & MySQL solution. I developed this and use it in various projects.
Doppler
zookeeper
http://www.therore.net/java/2015/05/03/distributed-configuration-with-zookeeper-curator-and-spring-cloud-config.html
Spring Cloud
https://www.baeldung.com/spring-cloud-configuration
This is Java Spring Framework functionality with which you can create properties files with configurations and configure your applications to read them.
Consul
Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality.
doozerd, etcd
In your specific case
Don't use dotenv
Use pure process.env.foo
Deploy a remote variables manager in your OpenShift infrastructure
Create just one variable in your OpenShift web console: APP_ENVIRONMENT
In your code at the start, do something like this:
if (process.env.APP_ENVIRONMENT === "PROD") {
  // get the variables from the remote service using
  // some HTTP client like axios, request, etc.,
  // then inject them into your process.env
  process.env.site_url = remoteVariables.site_url;
} else {
  // we are in the local developer workspace,
  // so nothing complex is required: the developer
  // injects the variables manually before startup
  // (npm run start or dev), e.g.
  // export site_url="acme.com"
}
If you can configure a shell script to run before the start of your OpenShift app, you could load and expose the variables at that stage; then the previous snippet would not be necessary, because the variables would be ready to be retrieved using process.env directly in your app.
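If shell-level injection isn't available, a small Node bootstrap can play the same role. A minimal sketch, again assuming a hypothetical variables endpoint that returns a flat JSON object of names and values:

// bootstrap.js -- fetch remote variables, put them in process.env,
// then start the real app. URL and response shape are assumptions.
const https = require('https');

https.get('https://variables.example.com/api/acme-web-staging', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    Object.assign(process.env, JSON.parse(body));
    require('./server'); // hypothetical entry point of the actual app
  });
}).on('error', (err) => {
  console.error('could not load remote variables:', err);
  process.exit(1);
});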

Blazor WASM Azure Static Web App, Functions not working

I created a simple Blazor WASM webapp using C# .NET5. It connects to some Functions which in turn get some data from a SQL Server database.
I followed the tutorial of BlazorTrain: https://www.youtube.com/watch?v=5QctDo9MWps
Locally using Azurite to emulate the Azure stuff it all works fine.
But after deployment using a GitHub Action, the webapp starts, but then it needs to get some data using the Functions, and that fails. Running the Function in Postman results in a 503: Function host is not running.
I'm not sure what else I need to configure. I can't find the logging from the Functions. I use the injected ILog, but can't find the log messages in the Azure Portal.
In the Azure Portal I see my 3 GET functions, but no option to test them or see the logging.
With the help of @Aravid I found my problem.
Because I locally needed to tell my client the URL of the API I added a configuration in Client\wwwroot\appsettings.Development.json.
Of course this file doesn't get deployed.
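For context, that local-only file presumably contained just the API base address. Something like this, where the key name comes from the code below but the URL is a guess (the local Functions host defaults to port 7071):

{
  "ApiAddress": "http://localhost:7071/api/"
}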
After changing my code in Program.cs to:
var apiAddress = builder.Configuration["ApiAddress"] ?? $"{builder.HostEnvironment.BaseAddress}/api/";
builder.Services.AddHttpClient("Api", (options) => {
    options.BaseAddress = new Uri(apiAddress);
});
My client works again.
I also added my SqlServer connection string in the Application Settings of my Static Web App and the functions are working as well.
I hope somebody else will benefit from this. Took me several hours to figure it out ;)

How do I get my deployed React.js app to use API secret keys? I'm using Heroku config vars but still not working

Problem:
I created a simple React app (using Javascript and Node) that uses the GitHub API to search for users and return information about them. I need to use a GitHub oauth key so that I can make authenticated API requests. However, I am having trouble giving my deployed app (using Heroku) the key without hard-coding it into the API call. I'm fairly new at this so any help would be great! I linked the github repo at the bottom of this post.
I have tried several things which I will explain below:
Attempt 1:
I created a file where I set my GitHub key to a variable and exported it (Image of code)
I put said file in the .gitignore
I imported the variable in the files where I made API calls and used it directly in the API call. (Image of API call)
This worked on my dev environment but (obviously) did not work on my deployed Heroku app because it had no idea what the variable was. (Image of error)
Attempt 2:
I configured variables in Heroku and set GITHUB_KEY to my key. (Image of Heroku variable setting).
Next, I checked that Heroku recognized this variable by running the command heroku config:get GITHUB_KEY and received the correct key in response (Image of terminal)
In my secrets file, I set the variable like so: process.env.GITHUB_KEY = 'a93b2c21918b42df5a28e0e529c627ee22c60de4'; (Image of setting variable using process.env)
And then I use it in my API calls on the frontend:
const res = await fetch(
  `https://api.github.com/users/${this.state.input}?access_token=${process.env.GITHUB_KEY}`
);
However, I get the following error: SearchBar.js:32 GET https://api.github.com/users/livmarx?access_token=undefined 401 (Unauthorized). (Image of error).
So, I know that I'm misunderstanding how process.env works but cannot seem to figure it out! Any clarification would be super helpful.
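For what it's worth: if the app was bootstrapped with create-react-app, only variables prefixed with REACT_APP_ are exposed to client code, and they are inlined at build time, so the Heroku config var would need that prefix and must exist when the bundle is built. A sketch under that assumption:

// only works if the Heroku config var is named REACT_APP_GITHUB_KEY
// and is set before the production build runs
const res = await fetch(
  `https://api.github.com/users/${this.state.input}?access_token=${process.env.REACT_APP_GITHUB_KEY}`
);

Note that anything inlined this way ships in the browser bundle, so it is visible to anyone; it hides the key from the repo, not from users.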
Here is the link to my github repo: https://github.com/livmarx/zilliow-challenge

Deploying Github-Passport-Strategy Integrated App with Now

I'm new to application deployment. I have an Express application which uses GitHub's Passport strategy to authenticate users and saves them to a (remote) MongoDB database. When using localhost, my application works as expected.
I'm using the Zeit Now (OSS plan) CLI tool which was installed globally with NPM.
The issue
When I deploy my application using "now" inside the root of the project folder, then go to "https://github.com/settings/applications/app" and swap the Homepage URL and the "auth/github/callback" [callback] URL from "http://localhost:3000/auth/github/callback" to the URL generated by Now, so it becomes "https://app-name-pxwlglhegg.now.sh/auth/github/callback", I get redirect_uri_mismatch:
https://app-name-pxwlglhegg.now.sh/auth/github/callback?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch
I've tried several times and can't figure it out.
You changed the callback on GitHub, but the same setting inside your app was not changed, so they mismatch.
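A minimal sketch of where that setting lives, using passport-github; the BASE_URL variable is hypothetical, the point being that the strategy's callbackURL must match what is registered on GitHub exactly:

const passport = require('passport');
const GitHubStrategy = require('passport-github').Strategy;

passport.use(new GitHubStrategy(
  {
    clientID: process.env.GITHUB_CLIENT_ID,
    clientSecret: process.env.GITHUB_CLIENT_SECRET,
    // must match the callback URL registered on GitHub, e.g.
    // https://app-name-pxwlglhegg.now.sh/auth/github/callback
    callbackURL: `${process.env.BASE_URL}/auth/github/callback`,
  },
  (accessToken, refreshToken, profile, done) => done(null, profile)
));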

What is the proper way to use the bluemix.getServiceCreds() function in node.js?

I have cloned the Concept Insights demo from Bluemix and made some minor changes to use my own corpus. It runs OK locally, but when I deploy it to Bluemix I get an authorization error when it tries to access my corpus. I'm certain that the error is a result of the early call in app.js to bluemix.getServiceCreds('concept_insights'), which apparently replaces my service credentials with some that must be stored in the environment on Bluemix.
Can someone explain the purpose of this function, and the proper approach to what I am trying to do? I could probably just delete the call to that function, but I'm afraid that I may be missing part of the larger picture if I do. Is this a way to keep my credentials out of the code base? If so, how do I make it work?
bluemix.getServiceCreds('concept_insights') gets the concept_insights service credentials from the VCAP_SERVICES variable that is created by Bluemix. (see VCAP_SERVICES)
You probably want to use the credentials from the environment instead of hardcoding them in your app.js file.
When your app runs locally you hardcode the credentials in app.js, but when it runs in Bluemix those credentials are overwritten. If you don't want this to happen, remove the bluemix.getServiceCreds('concept_insights') call and keep your hardcoded credentials:
var credentials = {
  url: 'https://gateway.watsonplatform.net/concept-insights/api',
  username: '<username>',
  password: '<password>',
  version: 'v2'
};
When creating the service, make sure you use the Standard plan.
If you use the Beta plan, you will have to use https://gateway.watsonplatform.net/concept-insights/api as the url.
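For reference, a sketch of what that helper roughly does under the hood, based on the standard Cloud Foundry VCAP_SERVICES layout (the exact service key can vary by plan):

// VCAP_SERVICES is a JSON blob that Bluemix/Cloud Foundry injects
// for every bound service; credentials live inside each entry
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');
var entries = services['concept_insights'] || [];
var credentials = entries.length ? entries[0].credentials : null;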
