Can Azure Functions written in Node.js access Connection Strings?

The App Settings for an Azure Function App contain values for database connection strings that can be set in the portal. In C# they can be accessed using
ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString
For an Azure Function written in JavaScript, is there an equivalent construction that has access to the connection strings? I understand that they can be stored in the application settings, but since there is a section on the portal devoted to connection strings, I am asking if this has any application to Node.js functions.

The distinction between app settings and connection strings makes sense in .NET, but not as much in Node. When using Node, the suggestion is to use app settings for all your secrets and connection strings. You can then access them using process.env.YourAppSetting.
And to answer your question directly, there is no easy way to access connection strings in Node, unless you start making assumptions on prefixes that are not guaranteed to work forever.
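For example, here is a minimal sketch of a JavaScript function reading such a setting (the setting name SqlConnection is an assumption for illustration, not a convention):

// index.js of a JavaScript Azure Function; assumes an app setting
// named "SqlConnection" was added in the portal
module.exports = async function (context, req) {
    // App settings are surfaced to the function as environment variables
    const connectionString = process.env.SqlConnection;
    context.log('Connection string is', connectionString ? 'set' : 'missing');
    context.res = { status: 200 };
};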

Try:
process.env['YOUR_APP_SETTING'];
Microsoft Documentation

I ran into the same issue, and I was under the same impression that we cannot access connection strings in a Node.js application through runtime environment variables.
But after a thorough check of the Microsoft documentation, I learned that we can access connection strings in Node.js, PHP, etc. when they are of the Custom type, via an environment variable of the form
CUSTOMCONNSTR_<connection string name>
Ref:
https://learn.microsoft.com/en-us/azure/app-service/web-sites-configure#connection-strings
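As a sketch, assuming a connection string named MyDb was saved with type Custom in the portal (the name is hypothetical):

// The platform prefixes each connection string with its type:
// CUSTOMCONNSTR_, SQLCONNSTR_, SQLAZURECONNSTR_, MYSQLCONNSTR_, ...
const myDbConnectionString = process.env.CUSTOMCONNSTR_MyDb;
console.log(myDbConnectionString ? 'connection string found' : 'not set');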

Related

Is the ElasticSearch standard Node client safe for use with cloud functions?

I'm contacting an ElasticSearch node in some of my TypeScript cloud functions on GCP. So far I've been creating my own HTTP requests. However, as the scope of the project grows, I'd like to use the official '@elastic/elasticsearch' package for convenience, especially when it comes to type checking. I am aware that you should not keep any resources open when a cloud function ends, but I've seen in the official documentation of the client that it keeps connections alive. Is there any way to disable this behaviour? Am I misunderstanding the meaning of some of this? I find the API documentation a bit opaque, and would really appreciate some help. Thanks!
"I am aware that you should not keep any resources open when a cloud function ends"
Actually, that's not a requirement. You can certainly keep a connection open. The Firebase Admin SDK does this, as well as other Google Cloud SDKs. It just shouldn't do anything between function invocations. The connection will be kept alive for as long as the server instance is alive, which is a good optimization.
What you shouldn't do is leak resources that aren't going to be reused, as they could cause your function to run out of memory and crash eventually.
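As a sketch of that pattern with the official client (the endpoint, index, and export names are assumptions; v8-style API):

// Create the client once at module scope so its keep-alive connections
// are reused across invocations on the same warm instance
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'https://my-es-host:9200' });

exports.search = async (req, res) => {
    // Per-invocation work stays inside the handler
    const result = await client.search({
        index: 'my-index',
        query: { match_all: {} },
    });
    res.json(result.hits.hits);
};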

Right way to store sensitive credentials for web app

I have a Java web app running on EC2 under Tomcat (a WAR) that requires various sensitive configuration parameters - for example, the credentials associated with various other AWS services. I had been setting these as environment variables, but then discovered that running Tomcat as a service removes almost all environment variables. So currently I use a simple configuration file to store these values.
I don't believe this is a wise choice going forward, however, and would like to find an alternative. What is the right way to handle this kind of sensitive information?
IAM roles are going to be your best friend here. The official documentation on IAM roles for Amazon EC2 will point you in the right direction, and there is also a post about this on the AWS Security Blog.
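The payoff is that credentials disappear from your code entirely. In Java this happens through the default credentials provider chain; here is a minimal Node.js sketch of the same idea (the region and the bucket listing are just for illustration):

// aws-sdk v2: with an IAM role attached to the EC2 instance, the SDK's
// default credential chain fetches temporary credentials automatically,
// so there are no keys in code, config files, or environment variables
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });
s3.listBuckets((err, data) => {
    if (err) return console.error(err);
    console.log(data.Buckets.map(b => b.Name));
});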

accessing updates from database to my application

I would like to know how to get data from a MySQL database to my application without using any REST API or PHP code. I have been looking over the internet for a solution to this problem, but the answers say you can use PHP code as a REST API and then communicate with the database. For that I would need a host and a domain, which I don't want. Is there any other way to communicate with a MySQL database? Can I use the mysql module of Node.js in a Titanium application?
There is no way to have a direct connection between your mobile client and a MySQL database. To retrieve data from MySQL you need to build an application which will receive requests from your app, retrieve data from MySQL, process it and return it as a response.
If you don't want to build mobile and server application at the same time you can try using Appcelerator Cloud service, which plays really nicely with Titanium SDK and allows you to persist users data.
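To make that concrete, here is a minimal sketch of such a middle tier using Node's mysql module, as the question suggests (all hostnames, credentials, and table names are placeholders):

// A tiny HTTP service the Titanium app could call instead of MySQL directly
const express = require('express');
const mysql = require('mysql');

const pool = mysql.createPool({
    host: 'localhost', user: 'appuser', password: 'secret', database: 'mydb'
});

const app = express();
app.get('/items', (req, res) => {
    pool.query('SELECT id, name FROM items', (err, rows) => {
        if (err) return res.status(500).json({ error: err.message });
        res.json(rows);
    });
});
app.listen(3000);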
There are two answers to this problem, depending on your situation:
If Your Data Is Specific to One Device...
If you want to store data locally on one device, and that one device is the only one that will ever use it, then you want to use a SQLite database. This is very commonly used in mobile apps, and is very well documented. If you already have a MySQL database with the schema you want to use, then you could really easily convert it to a SQLite db file.
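For instance, a short sketch with Titanium's built-in SQLite support (the database, table, and column names are made up):

// Ti.Database creates the SQLite file on first open
var db = Ti.Database.open('mydb');
db.execute('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)');
db.execute('INSERT INTO items (name) VALUES (?)', 'example');
var rows = db.execute('SELECT id, name FROM items');
while (rows.isValidRow()) {
    Ti.API.info(rows.fieldByName('name'));
    rows.next();
}
rows.close();
db.close();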
If Your Data Is Centralized...
If you need to store data remotely, in one central place, that the mobile app can access, then you need to use a remote database.
MySQL is one such option. You say that hosting PHP (which is itself run through something like Apache or IIS) is not something you want to do. But if you can host MySQL somewhere, or run it on a machine that your mobile app can access, then you can also easily host PHP and Apache.
If you don't want to spend money on a domain, then use one of the free dynamic DNS providers, which map a domain name (such as foo.hopto.org) to an IP address. If you don't want to pay for a server, then use your home computer, and keep it on whenever the mobile app needs to access it. There's easy, well documented ways around any of the issues you're having.
Alternatively, as @daniula pointed out, use Appcelerator Cloud Services. Then you can interact with simple objects, and they'll be stored for you in a central server. You can control who can access what data, and more. (Full disclosure: I work for Appcelerator.)

GAE: best practices for storing secret keys?

Are there any non-terrible ways of storing secret keys for Google App Engine? Or, at least, less terrible than checking them into source control?
In the meantime, Google added a Key Management Service: https://cloud.google.com/kms/
You could use it to encrypt your secrets before storing them in a database, or store them in source control encrypted. Only people with both 'decrypt' access to KMS and to your secrets would be able to use them.
The fact remains that people who can deploy code will always be able to get to your secrets (assuming your GAE app needs to be able to use the secrets), but there's no way around that as far as I can think of.
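For instance, here is a sketch of the encrypt step with the Node.js KMS client (the project, key ring, and key names are placeholders):

// Encrypt a secret with Cloud KMS before persisting it
const { KeyManagementServiceClient } = require('@google-cloud/kms');
const client = new KeyManagementServiceClient();

async function encryptSecret(secret) {
    const keyName = client.cryptoKeyPath(
        'my-project', 'global', 'my-keyring', 'my-key');
    const [result] = await client.encrypt({
        name: keyName,
        plaintext: Buffer.from(secret),
    });
    // Store result.ciphertext in the datastore, or even in source control
    return result.ciphertext;
}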
Not exactly an answer:
If you keep keys in the model, anyone who can deploy can read the keys from the model, and deploy again to cover their tracks. While Google lets you download code (unless you disable this feature), I think it only keeps the latest copy of each numbered version.
If you keep keys in a not-checked-in config file and disable code downloads, then only people with the keys can successfully deploy, but nobody can read the keys without sneaking a backdoor into the deployment (potentially not that difficult).
At the end of the day, anyone who can deploy can get at the keys, so the question is whether you think the risk is minimized by storing keys in the datastore (which you might make backups of, for example) or on deployer's machines.
A viable alternative might be to combine the two: Store encrypted API keys in the datastore and put the master key in a config file. This has some potentially nice features:
Attackers need both access to a copy of the datastore and a copy of the config file (and presumably developers don't make backups of the datastore on a laptop and lose it on the train).
By specifying two keys in the config file, you can do key-rollover (so attackers need a datastore/config of similar age).
With asymmetric crypto, you can make it possible for developers to add an API key to the datastore without needing to read the others.
Of course, then you're uploading crypto to Google's servers, which may or may not count as "exporting" crypto with the usual legal issues (e.g. what if Google sets up an Asia-Pacific data centre?).
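Here is a symmetric sketch of that scheme using Node's built-in crypto module (for the asymmetric variant you would swap in a public/private key pair; all names are illustrative):

// The master key (32 bytes) comes from the config file; the encrypted
// API keys plus their IVs and auth tags live in the datastore
const crypto = require('crypto');

function encryptApiKey(masterKey, apiKey) {
    const iv = crypto.randomBytes(12);
    const cipher = crypto.createCipheriv('aes-256-gcm', masterKey, iv);
    const ciphertext = Buffer.concat([cipher.update(apiKey, 'utf8'), cipher.final()]);
    return { iv, tag: cipher.getAuthTag(), ciphertext };
}

function decryptApiKey(masterKey, { iv, tag, ciphertext }) {
    const decipher = crypto.createDecipheriv('aes-256-gcm', masterKey, iv);
    decipher.setAuthTag(tag);
    return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}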
There's no easy solution here. Checking keys into the repository is bad both because it checks in irrelevant configuration details and because it potentially exposes sensitive data. I generally create a configuration model for this, with exactly one entity, and set the relevant configuration options and keys on it after the first deployment (or whenever they change).
Alternately, you can check in a sample configuration file, then exclude it from version control, and keep the actual keys locally. This requires some way to distribute the keys, though, and makes it impossible for a developer to deploy unless they have the production keys (and all too easy to accidentally deploy the sample configuration file over the live one).
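As a sketch of such a singleton configuration model with the Node.js Datastore client (the kind and property names are assumptions):

// Exactly one entity of kind "Config" holds all runtime secrets
const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();
const key = datastore.key(['Config', 'singleton']);

async function getConfig() {
    const [entity] = await datastore.get(key);
    return entity; // e.g. entity.apiKey, set once after the first deployment
}

async function setConfig(values) {
    await datastore.save({ key, data: values });
}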
Three ways I can think of:
1. Store it in the Datastore (maybe base64-encode it to have one more level of indirection).
2. Pass it as environment variables through command-line params during deployment.
3. Keep a configuration file, git-ignore it and read it from the server (a sketch follows below). This file can itself be a .py file if you are using a Python deployment, so there is no reading and storing of .json files.
NOTE: If you are taking the config-file route, don't store this JSON in the static public folders!
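A sketch of the third route in Node (the filename is an assumption; the file itself is listed in .gitignore):

// Fail fast if the git-ignored secrets file was not copied to this machine
const fs = require('fs');
const path = require('path');

const secretsPath = path.join(__dirname, 'secrets.json');
if (!fs.existsSync(secretsPath)) {
    throw new Error('secrets.json missing; obtain it out of band');
}
const secrets = JSON.parse(fs.readFileSync(secretsPath, 'utf8'));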
If you are using Laravel and want to store your keys in Datastore, this package can make that easy while managing performance using caching: https://github.com/tommerrett/laravel-GAE-secret-manager
Google App Engine by default creates a credential for App Engine and injects it into the environment.
Google Cloud client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
So point 2 means that if you grant the permissions to your service account using IAM Admin, you do not have to worry about passing JSON keys; it will automatically work.
e.g.
Suppose your application is running in App Engine Standard and wants access to Google Cloud Storage. To do this you do not have to create a new service account; just grant the access to the default service account and ADC will pick it up.
REF https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
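A sketch of what that looks like in code (the bucket name is a placeholder); note that no key file is referenced anywhere:

// Credentials are resolved by Application Default Credentials:
// GOOGLE_APPLICATION_CREDENTIALS if set, else the runtime's service account
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function listBucketFiles(bucketName) {
    const [files] = await storage.bucket(bucketName).getFiles();
    return files.map(f => f.name);
}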

Like-for-Like SimpleDB Offline

We are currently using Amazon's SimpleDB for a web service. The data is very simple and doesn't require anything like SQL. Its basically a 'property bag'.
We are due to demo our project somewhere where we are not guaranteed to have Internet access and thus may not be able to reach SimpleDB. This has only just become apparent, and I have been asked to look for a service that we can run on a local server that would provide a like-for-like replacement (i.e. calls to SimpleDB would work the same against this service), so that we could just point our code at it rather than the real AWS SimpleDB service, without any code change.
Is anyone else doing something similar? What are you using?
We also use Azure, so rather than change our app to work with one service online and another offline, we may change it to only use Azure as this can be run offline and still work.
Windows Azure table storage does not really work offline per se. The storage emulator can be run without an internet connection. However, it is an emulator, so it does not have 100% fidelity with the cloud service and it is not tuned for any type of performance comparison. You could use this for demos, but I would not suggest using the emulator for any type of 'real' work. Crazy thing about cloud services... they don't work very well offline. ;)
You could maybe use a local version of Redis (http://redis.io/), but this would definitely require some recoding; the calls are not like-for-like.
If the application was written to be testable (meaning you are using something like the repository pattern), you could stub out the calls and point to either a very lightweight database or a file.
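For instance, a sketch of that idea in Node (the class, method, and file names are made up): the app codes against one small interface, and the demo swaps in a file-backed implementation.

// The production implementation would wrap the real SimpleDB client;
// this stub reads the same shape of data from a local JSON file instead
const fs = require('fs');

class FileRepository {
    constructor(filePath) {
        this.data = JSON.parse(fs.readFileSync(filePath, 'utf8'));
    }
    async get(domain, itemName) {
        return (this.data[domain] || {})[itemName];
    }
}

// Offline demo wiring:
// const repo = new FileRepository('./demo-data.json');
// const item = await repo.get('products', 'sku-123');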
As a reference for anyone who ends up here looking for the same...
We eventually used mdb/node.js which uses the same api calls as SimpleDB. All we had to do was point our app at a new Service Endpoint URL (our MDB Node.js server - which was handily a VMware application that we ran in VMware Player).
This worked perfectly, but thankfully we never actually needed it as we could access the real SimpleDB.
https://github.com/robtweed/node-mdb
http://gradvs1.mgateway.com/main/index.html?path=mdb
Neil
