Securing service parameters in Cloud Foundry environment variables - security

Environment variables are the means by which the Cloud Foundry runtime communicates to the application about its environment. One of the most important pieces of information it communicates are the services which are available and how to connect with them.
The same page gives a sample of environment variables containing connection parameters like the user name and password for a MySQL database:
VCAP_SERVICES: {
"mongodb-1.8":[{"name":"hello-mongo","label":"mongodb-1.8","plan":"free","credentials":{"hostname":"172.30.48.64","port":25003,"username":"e4f2c402-1153-4dfb-8d98-2f6efc65e441","password":"f17f81e4-9855-4b9c-a22b-e6a9e6f113c3","name":"mongodb-5751dac0-3b5e-405b-a1e1-2b384fe4026d","db":"db"}}],
"redis-2.2":[{"name":"hello-redis","label":"redis-2.2","plan":"free","credentials":{"node_id":"redis_node_4","hostname":"172.30.48.43","port":5002,"password":"e1d7acb0-2baf-42be-84bc-3365aa819586","name":"redis-96836b7c-0949-45fd-a741-c7be5951d52f"}}],
"mysql-5.1":[{"name":"hello-mysql","label":"mysql-5.1","plan":"free","credentials":{"node_id":"mysql_node_5","hostname":"172.30.48.24","port":3306,"password":"pw4EKJqL6na6f","name":"dd9b58515e3cb41958a30bf2af88126fc","user":"uLfJbOmxfSEUt"}}]
}
The page further states:
You can read this information into your application using Java's environment variable API and/or existing Spring XML features, but it is easier to consume this information using the new cloud namespace (described here), which parses it out into a convenient Properties object.
Reading this, I wondered what implications this setup has for application security. Specifically, what measures should the developer take to keep malicious attackers from gaining direct control of backend services like the MySQL database?
EDIT: Apart from the risk of an attacker gaining control of a backend service, I can also imagine the risk of an attacker causing the application to connect to a malicious backend.

If you want to connect to a backend (database) service, you must provide the application with credentials somehow. To be able to dynamically bind to services, environment variables are a good choice to pass application-private information to the application.
As with any application compromise, the backend gets exposed when the application is hacked.
The only way you can connect to a malicious backend is if the attacker can set up a malicious service on the Cloud Foundry infrastructure and is able to compromise the Cloud Controller to pass the application forged environment variables.
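Added example (not code from the question, the answer or the Cloud Foundry docs): how an application would consume the VCAP_SERVICES structure shown above without leaking the bound credentials. It assumes the same JSON shape as the excerpt, uses a constructed sample with placeholder values, and treats a service hostname outside the platform's private address space as suspicious, which is an assumption about this deployment rather than a Cloud Foundry rule.

```python
import ipaddress
import json
import os

# Constructed sample with the same shape as the VCAP_SERVICES excerpt above;
# the password is a placeholder, not a real credential.
SAMPLE_VCAP = json.dumps({
    "mysql-5.1": [{"name": "hello-mysql", "label": "mysql-5.1", "plan": "free",
                   "credentials": {"node_id": "mysql_node_5", "hostname": "172.30.48.24",
                                   "port": 3306, "password": "EXAMPLE-PASSWORD",
                                   "name": "dd9b58515e3cb41958a30bf2af88126fc",
                                   "user": "uLfJbOmxfSEUt"}}],
})

# Fields that must never reach a log line or an error message.
SECRET_KEYS = {"password", "user", "username"}


def load_service(label, env=None):
    """Return the credentials dict of the first service bound under `label`."""
    raw = (env if env is not None else os.environ).get("VCAP_SERVICES")
    if not raw:
        # Fail closed: no silent fallback to a hard-coded connection string.
        raise RuntimeError("VCAP_SERVICES is not set")
    bindings = json.loads(raw).get(label) or []
    if not bindings:
        raise KeyError(f"no service bound with label {label!r}")
    creds = bindings[0]["credentials"]
    # Assumption for this sample: brokers hand out addresses inside the
    # platform's private network, so a public address is worth rejecting.
    try:
        if not ipaddress.ip_address(creds["hostname"]).is_private:
            raise ValueError("service hostname is not a private address")
    except ValueError as exc:
        if "does not appear" not in str(exc):  # hostnames (non-IP) are passed through
            raise
    return creds


def redact(creds):
    """Copy of the credentials that is safe to log: secret fields are masked."""
    return {k: ("***" if k in SECRET_KEYS else v) for k, v in creds.items()}


if __name__ == "__main__":
    c = load_service("mysql-5.1", {"VCAP_SERVICES": SAMPLE_VCAP})
    print("connecting to", redact(c))  # never print `c` itself
```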

Securely allow Google App Engine to internal company network/servers for Google Apps Scripts

It is well documented that Google Apps Script runs on Google App Engine servers that would not have access to a company's internal network/server:
https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app
https://cloud.google.com/appengine/kb/#static-ip
https://developers.google.com/apps-script/guides/jdbc#using_jdbcgetconnectionurl
Per the documentation, if you want a Google Apps Script project to have access to an internal network/server then you will have to white-list Google's IPs. But we all know that isn't the safest option. In fact, the documentation even says so:
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
The issue is I cannot find any documentation, reference material, or articles on how best an organization should do what it suggests.
So my question is, how can an organization using G-Suite Enterprise securely allow Google Apps Script projects to access the company's internal network?
The documentation made it quite clear that, since Apps Scripts are run on shared App Engine instances, it is impossible to restrict by IP, and that also implies the networking capability would be very limited (i.e. no VPC peering or the like). Therefore, as in the highlighted block, they suggest implementing authentication over just IP restriction.
Apart from authentication, Apps Script also supports encrypting and authenticating the server with SSL (sample code). This should protect the connection from being eavesdropped on when sent over the Internet.
Furthermore, you can implement a "semi IP restriction" mechanism, technically called port knocking, which briefly works as follows:
First create a special endpoint that requires authentication and accepts an IP address as input. When requested, you open up your firewall to accept connections from that IP to your internal network for a limited time (e.g. 5 min).
In your Apps Script, use URL Fetch to request that endpoint, so that your script's instance is temporarily allowed to access your network.
Of course that will not be perfect, since one App Engine instance runs many scripts concurrently and the whitelist is opened for a set time, but it is still considerably better than persistently opening the port to all Google (App Engine) IPs.
Apps Script is a great tool for simplifying tasks when you are using G Suite services; unfortunately, what you are trying to achieve is not available. Also, keep in mind that Apps Script is not built on App Engine; it's a completely different product.
Therefore, if what is shown in the documentation can't fulfill your requirements, please check other Google alternatives like App Engine or Google Cloud Platform instead of G Suite.
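A minimal model of the "knock" endpoint described in that answer, added here as an example (not code from the answer). The real firewall call is replaced by an in-memory registry of openings with an expiry, the shared-secret check stands in for whatever authentication the endpoint really uses, and the 5-minute window comes from the answer; the injectable clock exists only so the expiry can be exercised.

```python
import hmac
import ipaddress
import time

KNOCK_TTL = 300  # seconds an opening stays valid (the answer's 5 min)


class KnockRegistry:
    """Tracks temporary firewall openings as {ip: expiry_timestamp}.

    A production endpoint would add/remove a real firewall rule here; this
    model only records the decision so the logic can be tested offline.
    """

    def __init__(self, shared_secret, clock=time.time):
        self._secret = shared_secret.encode()
        self._clock = clock
        self._open = {}

    def knock(self, token, client_ip):
        """Authenticate the caller, then open `client_ip` for KNOCK_TTL seconds."""
        # Constant-time comparison so the check does not leak the secret length.
        if not hmac.compare_digest(token.encode(), self._secret):
            return False
        # Exactly one well-formed address: no ranges, no hostnames.
        ip = ipaddress.ip_address(client_ip)
        # Assumption: a knock must never open the rule for the gateway itself.
        if ip.is_loopback or ip.is_unspecified:
            return False
        self._open[str(ip)] = self._clock() + KNOCK_TTL
        return True

    def is_allowed(self, client_ip):
        """True while an unexpired opening exists for this address."""
        self._expire()
        return str(ipaddress.ip_address(client_ip)) in self._open

    def open_count(self):
        self._expire()
        return len(self._open)

    def _expire(self):
        # Remove stale openings; a real deployment would revoke firewall rules here.
        now = self._clock()
        for ip, expiry in list(self._open.items()):
            if expiry <= now:
                del self._open[ip]
```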

Mock a Https destination to connect to system behind OAuth2 authentication

Use case: trying to mock a destination that would help us connect to a cloud S4 system behind an IdP that requires OAuth2 authentication.
I have been able to mock a local destination to connect to a system behind basic authentication.
We are trying to understand how to mock the additional components, such as the XSUAA service, that would require us to generate the token.
We want to use this destination to enable us to connect to remote systems locally without modifying the code developed for the cloud environment.
To basically recap the discussion in the comments:
It is not easily possible to consume Cloud Foundry services locally. The SDK reads many of the necessary configuration and credentials from the VCAP_SERVICES to communicate with those services. The only option, which is not recommended, is to copy this down locally. However, this poses security risks, as the environment variable contains sensitive information.
If the only reason is easier debugging of your application you could have a look at this answer to see how remote debugging can be set up.

security issues using azure remoteapp

To whom it may concern,
I am looking to move more of the applications that the company uses to Azure. I have found that RemoteApp will allow people to use the apps I have allowed via RemoteApp. The application which will be used is linked to a database which is on site, and I am just worried about people being able to access this database, as it will contain important data which can't be leaked. I am trying to work out what security precautions could be taken to prevent the data from being viewed by the wrong people. I have seen AppLocker to stop applications on the virtual machine from being accessed. Any other security suggestions would be greatly appreciated.
You should be fine. RemoteApp is running remotely, meaning that there's no way of getting to the connection string (reverse engineering). Access to the app is also ensured by AAD login. The database should be protected as well with AD credentials. Also, adding a service tier that fronts the database would provide a facade.

Can access to a Heroku PostgreSQL DB be restricted to its Heroku app ONLY?

I've recently migrated an application from Heroku to Amazon EC2 because of recommendations from a security consultant. Yet he didn't know Heroku deeply, and the doubt remained.
Can access to a Heroku PostgreSQL DB be restricted for it to be accessed only by the application?
Would you recommend Heroku for security critical applications?
This is a deceptively complex question because the idea of "restricted so that it can be accessed only by the application" is ill-defined. If your ultimate goal is simply to keep your data as secure as possible, then Heroku vs. AWS vs. physical servers under lock and key involves some cost-benefit analysis that goes beyond just how your database can be accessed.
Heroku limits database access via authentication. You share a secret (username/password) between the database and the application. Anyone who has that secret can access the database. To facilitate keeping the secret secret, all database access is or should be over SSL.
There are other ways to restrict access in addition to authentication, but many of them are incompatible with a cloud-based approach. Also, many of them require you to take much more control over the environment of your servers, and the more responsibility you have on that front, the bigger the chance that issues totally separate from who can hit the postgres port on your database will sink you.
The advantage in using AWS directly instead of through a paas provider like Heroku is that you can configure everything yourself. The disadvantage is that you have to configure everything yourself. I would recommend you use AWS over a managed service only if you have a team of qualified and attentive sysadmins to configure, monitor and update your environment. Read Heroku's security policy page. Are you doing at least that much to protect your servers in your own configuration on AWS? If not, then you might have bigger problems than how many layers of redundancy are in place around your database.

Cloud combined with in-house database. How good is the security?

I'm currently performing research on cloud computing. I do this for a company that works with highly private data, and so I'm thinking of this scenario:
A hybrid cloud where the database is still in-house. The application itself could be in the cloud because once a month it can get really busy, so there's definitely some scaling profit to gain. I wonder how security for this would exactly work.
A customer would visit the website (which would be in the cloud) through a secure connection. This means that the data will be passed forward to the cloud website encrypted. From there the data must eventually go to the database but... how is that possible?
Because the database server in-house doesn't know how to handle the already encrypted data (I think?). The database server in-house is not a part of the certificate that has been set up between the customer and the web application. Am I right, or am I overlooking something? I'm not an expert on certificates and encryption.
Also, another question: if this could work out, and the data would be encrypted all the time, is it safe to put this in a public cloud environment, or should a private cloud still be used?
Thanks a lot!! in advance!!
Kind regards,
Rens
The secure connection between the application server and the database server should be fully transparent from the application's point of view. A VPN connection can connect the cloud instance that your application is running on with the on-site database, allowing an administrator to simply define a datasource using the database server's IP address.
Of course this does create a security issue when the cloud instance gets compromised.
Both systems can live separately and communicate with each other through a message bus. The web site can publish events for the internal system (or any party) to pick up and the internal system can publish events as well that the web site can process.
This way the web site doesn't need access to the internal database and the internal application doesn't have to share more information than is strictly necessary.
By publishing those events on a transactional message queue (such as MSMQ) you can make sure messages are never lost, and you can configure transport-level security and message-level security to ensure that others aren't tampering with messages.
The internal database will not get compromised once a secured connection is established with the static MAC ID of the user accessing the database. The administrator can provide access to a MAC ID through one-time approval and add the user to his Windows console.