I'm currently doing research on cloud computing. I'm doing this for a company that works with highly sensitive data, so I'm thinking through this scenario:
A hybrid cloud where the database stays in-house. The application itself could live in the cloud, because once a month it can get really busy, so there's definitely some scaling benefit to gain. I wonder how exactly security for this would work.
A customer would visit the website (which would be in the cloud) through a secure connection. This means that the data is passed to the cloud website encrypted. From there the data must eventually reach the database, but... how is that possible?
Because the database server in-house doesn't know how to handle the already-encrypted data (I think?). The database server in-house is not part of the certificate that has been set up between the customer and the web application. Am I right, or am I overlooking something? I'm not an expert on certificates and encryption.
Also, another question: if this could work out and the data were encrypted all the time, would it be safe to put this in a public cloud environment, or should a private cloud still be used?
Thanks a lot in advance!
Kind regards,
Rens
The secure connection between the application server and the database server should be fully transparent from the application's point of view. A VPN connection can link the cloud instance that your application runs on with the on-site database, allowing an administrator to simply define a datasource using the database server's IP address.
Of course, this does create a security issue if the cloud instance gets compromised.
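As an illustration, here is a minimal sketch of that transparency, assuming a Python/SQLAlchemy stack and a placeholder private IP (10.0.12.5) that is only routable through the VPN tunnel; the datasource is defined exactly as if the database were local:

```python
# Minimal sketch: once the site-to-site VPN is up, the on-premises database
# is reachable at its private IP, so the application configures its
# datasource as if the database were local. IP, credentials, and table
# names below are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://app_user:app_password@10.0.12.5:5432/customers"
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT count(*) FROM orders")).scalar())
```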
Both systems can live separately and communicate with each other through a message bus. The web site can publish events for the internal system (or any party) to pick up and the internal system can publish events as well that the web site can process.
This way the web site doesn't need access to the internal database and the internal application doesn't have to share more information than is strictly necessary.
By publishing those events on a transactional message queue (such as MSMQ) you can make sure messages are never lost, and you can configure transport-level and message-level security to ensure that others aren't tampering with messages.
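A minimal sketch of the publish side, using RabbitMQ via pika as a stand-in for MSMQ (the pattern is the same: durable, broker-mediated messaging instead of direct database access; host, queue, and event names are illustrative):

```python
# Publish an event to a durable queue so the internal system can pick it
# up later; the website never touches the internal database directly.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.internal")
)
channel = connection.channel()

# Durable queue + persistent messages: events survive a broker restart.
channel.queue_declare(queue="orders.created", durable=True)

event = {"order_id": 42, "customer_email": "customer@example.com"}
channel.basic_publish(
    exchange="",
    routing_key="orders.created",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)
connection.close()
```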
The internal database is far less likely to be compromised if a secured connection is established using the static MAC address of the user accessing the database. The administrator can grant access to a MAC address through a one-time approval and add the user to his Windows console.
I just created my first Azure SQL server and database, and now I'm trying to configure the firewall so that my web application can connect and make changes to the single database on the server.
I see that you can allow all clients to connect by adding a catch-all rule.
However, is this bad practice? I see my client IP address listed in the Azure portal. Can I get clarification that I should allow just that single IP address access for now, and then later, when I publish my web application to Azure, restrict access to the IP address where that web app lives (assuming that people can only make database changes through my front-end application)?
Thanks
Yes, this is bad practice. There are other layers they'd have to get through (e.g., the server login), but this opens the front door and allows anyone to pick away at it at their leisure.
If you have a web server hosting your web application somewhere (which you must), whitelist that server's IP address (and perhaps your own, for development/admin purposes), but allowing all IPs is not considered good practice, no.
One particular case where you may really want to allow this is if you are distributing a desktop application to unknown clients that must connect to the backend. It becomes extremely enticing at that point, but even so, the recommended practice (or at least, my recommended practice) would be to use a web service that accepts an application registration on startup of the app, registers the client IP temporarily through that, and then have a background process (think WebJobs) that flushes the firewall rules, say, every night or so (or, for a more elaborate setup, accept the registration with a timeout and use a continuous WebJob to poll for the timeout and refresh/revoke rules as necessary).
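For the simple single-IP case, here is a hedged sketch using the Azure SDK for Python (azure-mgmt-sql); the subscription, resource group, server name, and addresses are placeholders, and the same rule can equally be created in the portal or with the CLI:

```python
# Create (or update) a firewall rule that admits exactly one IP address,
# instead of the 0.0.0.0-255.255.255.255 allow-all range.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.firewall_rules.create_or_update(
    resource_group_name="my-resource-group",
    server_name="my-sql-server",
    firewall_rule_name="allow-web-app",
    parameters={
        "start_ip_address": "203.0.113.10",  # the web app's outbound IP
        "end_ip_address": "203.0.113.10",    # single address, not a range
    },
)
```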
To whom it may concern,
I am looking to move more of the applications our company uses to Azure. I have found that RemoteApp will let people use the apps I have published through it. The application in question is linked to an on-site database, and I am worried about people being able to access this database, as it contains important data which can't be allowed to leak. I am trying to work out what security precautions could be taken to prevent the data from being viewed by the wrong people. I have seen AppLocker used to stop applications on the virtual machine from being accessed. Any other security suggestions would be greatly appreciated.
You should be fine. RemoteApp runs remotely, meaning there's no way of getting at the connection string (no reverse engineering). Access to the app is also secured by AAD login, and the database should likewise be protected with AD credentials. Also, adding a service tier that fronts the database would provide a facade.
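A minimal sketch of such a facade, assuming a Python/Flask service with placeholder endpoint, server, and table names; the RemoteApp client only ever calls this API and never talks to SQL Server directly:

```python
# Facade service in front of the on-site database: the connection string
# lives server-side only, and clients get just the fields this API exposes.
from flask import Flask, abort, jsonify
import pyodbc  # assumes an ODBC driver for SQL Server is installed

app = Flask(__name__)

def get_connection():
    return pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=onsite-db.internal;DATABASE=Apps;"
        "Trusted_Connection=yes;"  # AD credentials, per the answer above
    )

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    with get_connection() as conn:
        row = conn.execute(
            "SELECT id, name FROM customers WHERE id = ?", customer_id
        ).fetchone()
    if row is None:
        abort(404)
    return jsonify({"id": row.id, "name": row.name})
```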
I have a mobile application that communicates with a REST-based web service. The web service lives behind the firewall and talks to other systems. Currently this web service requires a firewall port to be opened and an SSL cert generated for each installation. The mobile app sends login credentials so the web service can log in to custom back-end systems.
Recently a customer approached us asking how we could deploy this to 50 offices. As we don't want to, say, modify every firewall in every office, we're looking for options. This is a list of possible solutions and my thoughts on each one:
- Open firewall port and expose HTTPS web service - This is our current solution, but we don't want to have to contact 50 network admins and explain why we need to do this.
- VPN - Too heavyweight, complex, and expensive; we only need access to one server. It also doesn't solve the problem, as the firewall still needs to be modified.
- Microsoft Azure Hybrid Connection Manager - This provides a managed service where the Azure cloud exposes an endpoint. Azure also accepts connections from an easy-to-install application that lives behind the firewall. When a REST call is made to the cloud endpoint, the request is forwarded down a socket that was initiated by the software behind the firewall (see the sketch after this list). This does what we want, but as it's a Microsoft solution it might impose other requirements that our customers might not want. Currently the simple Hybrid Connection Manager is free, but for how long?
- Jscape MFT Gateway - Similar to Azure, but you can host their server anywhere. Not that expensive, but not open source.
- Netty - An async Java library/toolkit with which this type of application could easily be built. Client and server apps would need to be built and deployed. We don't know what we don't know about Netty.
- MDM (AirWatch, BlackBerry BES) - An MDM-based solution would work, except that MDMs are centrally managed and are not often located in every office where the back-end services live. AirWatch has an AppTunnel, but I'm not sure about the specifics.
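To make the Hybrid Connection Manager item above concrete, here is a very rough sketch of the relay pattern it (and the Jscape gateway) implements; every name and port is a placeholder, it handles a single connection, and a real relay would add TLS, authentication, framing, and reconnection logic:

```python
# Relay pattern: the on-premises agent dials OUT to the cloud relay, so no
# inbound firewall rule is needed; client requests arriving at the relay's
# public port are forwarded down that pre-established socket.
import socket
import threading

RELAY_AGENT_PORT = 9000    # on-premises agent connects out to this port
RELAY_PUBLIC_PORT = 8443   # mobile apps connect to this port

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            return
        dst.sendall(data)

def relay():
    agent_listener = socket.socket()
    agent_listener.bind(("", RELAY_AGENT_PORT))
    agent_listener.listen(1)
    agent, _ = agent_listener.accept()    # outbound call from behind the firewall

    public_listener = socket.socket()
    public_listener.bind(("", RELAY_PUBLIC_PORT))
    public_listener.listen(1)
    client, _ = public_listener.accept()  # a mobile app's request arrives

    # Shuttle traffic in both directions over the agent's outbound socket.
    threading.Thread(target=pump, args=(client, agent), daemon=True).start()
    pump(agent, client)

if __name__ == "__main__":
    relay()
```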
At this point the Microsoft and Jscape systems are possible solutions.
But most likely these solutions will require us to modify the mobile software to work around issues such as:
- How does the user know which server to log in to? A locator service needs to be built so that an email address is used to look up their office, or they need to select their office location from a list.
- While the connection is SSL, many companies might want some additional protection, since network login information will be sent down the pipe.
- How are load balancing and fail-over managed?
So, at this point I'm looking for more options. The best option would be a commercial product that offers some level of customization. Second best would be a well-used open-source product that could be installed in AWS and customized.
Thanks
The best approach we found was to use the PuTTY API and set up a reverse proxy.
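For anyone curious what that looks like in code, here is a hedged sketch of the same reverse-tunnel technique (what PuTTY's plink offers via -R), written with paramiko; host names, ports, and the key path are placeholders:

```python
# The box behind the firewall opens an SSH session OUT to a public relay
# host and asks it to listen on a port; each incoming connection is handed
# back through the tunnel to the internal web service.
import socket
import threading
import paramiko

def serve_channel(chan):
    # Bridge one tunneled channel to the internal service.
    local = socket.create_connection(("internal-service.local", 443))
    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                return
            dst.sendall(data)
    threading.Thread(target=pump, args=(local, chan), daemon=True).start()
    pump(chan, local)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("relay.example.com", username="tunnel", key_filename="tunnel_key")

transport = ssh.get_transport()
transport.request_port_forward("", 8443)  # relay host listens publicly on 8443

while True:
    chan = transport.accept(timeout=60)
    if chan is not None:
        threading.Thread(target=serve_channel, args=(chan,), daemon=True).start()
```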
Is it secure to have data sent from a Heroku app to a free database at MongoLab?
The data would be things like emails and preferences.
Or do you need SSL? I've read about MongoDB SSL.
I've asked around but couldn't find anything specific to MongoLab.
From MongoLab's documentation:
Securing communications to your database
You should always try to place your application infrastructure and
your database in the same local network (i.e., datacenter / cloud
region), as it will be the most secure method of deployment and will
minimize latency between your application and database.
When you connect to your MongoLab database from within the same
datacenter/region, you communicate over your cloud hosting provider’s
internal network. All of our cloud hosting providers provide a good
deal of network security infrastructure to isolate tenants. The
hypervisors used do not allow VMs to read network traffic addressed to
other VMs and so no other tenant can “sniff” your traffic.
However, when you connect to your MongoLab database from a different
datacenter/region, your communications are less secure. While your
database does require username / password authentication (with
credentials that are always encrypted on the network), the rest of
your data is transmitted unencrypted over the open internet. As such
you are potentially vulnerable to others “sniffing” your traffic.
Using MongoDB with SSL connections
Available for Dedicated plans running MongoDB 2.6+ only
To further secure communications to your database, MongoLab offers
SSL-encrypted MongoDB connections on Dedicated plans running MongoDB
2.6 or later. Even when using SSL, we still recommend placing your application infrastructure and your database in the same
datacenter/region to minimize latency and add another layer of
security.
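As a hedged sketch of what such an SSL connection looks like from application code (using pymongo; the URI is a placeholder, and per the documentation above this requires a Dedicated plan on MongoDB 2.6+):

```python
# Connect to the deployment over an SSL/TLS-encrypted connection.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@ds012345.mongolab.com:35147/mydb",
    ssl=True,  # newer pymongo versions spell this tls=True
)
db = client["mydb"]
print(db["preferences"].find_one())
```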
I did the same thing as you and emailed MongoLab to ask for details. I got an answer; I'm sharing it with you in the hope that it helps.
The reply is below.
As long as your Heroku app and MongoLab database are in the same cloud
region, we consider it safe to communicate between Heroku and
MongoLab, as AWS' infrastructure prevents packet-sniffing within
regions. If you use the MongoLab addon on Heroku this is automatic,
but if you use a deployment provisioned directly at mongolab.com
you'll need to manually select the matching region.
It looks like the connection between Heroku and MongoLab stays within the same region. Both are secured by AWS, so I guess you don't need SSL. If you need it to be very safe, you can still use SSL for extra security.
Hope it helps.
Scenario: We have dedicated servers hosted with a hosting provider. They run web apps and console apps, along with the database, which is SQL Server Express Edition.
The applications encrypt/decrypt the data going to/from the DB. We also store the keys on their server. So, theoretically, the hosting provider can access our keys and decrypt our data.
Question: How can we prevent the hosting provider from accessing our data?
1. We don't want the hosting provider's users to just log into SQL Server and see the data.
2. We don't want an unencrypted copy of the database files on the box.
To mitigate no. 1: encrypt the app.config files so the DB username and password are not stored in plain text.
To mitigate no. 2: turn on EFS on the SQL Server data folder. We could use TDE, but our SQL Server is the Web Edition, and the hosting company would charge us a fortune for Enterprise Edition.
I'd really appreciate any suggestions you have about the above.
You can help mitigate it, but prevention is probably impossible.
It's generally considered that if an attacker has physical access to the machine, they own everything on it.
If this is a concern, you should consider purchasing a server or a virtual server, using a colocation center and providing your own machine, or hosting it yourself entirely.
When you purchase a server, a virtual server, or colocate your own hardware, the service provider doesn't have an account on your OS. If you use an encrypted file system and only access your box over encrypted channels (SSH, SSL/TLS), then they will not be able to easily access any data on your computer that isn't being sent out to the network.
The only foolproof way is to have your own hardware in your own secure location and bring the network to your box.
It's possible to do database encryption such that the client does the decryption (though if your indexes are sorted, the server obviously needs to be able to determine the relative order of entries in the index). I can't think of a link off the top of my head. However, if the client is the web app, there's not much you can do.
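A minimal sketch of that client-side approach, using Python's cryptography library (Fernet) on a placeholder field; the crucial part is that the key lives outside the hosted box:

```python
# Application-level encryption: the app encrypts sensitive values before
# they reach SQL Server, so the hosting provider only ever sees ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a key store YOU control
f = Fernet(key)

ciphertext = f.encrypt(b"customer@example.com")
# store `ciphertext` in the database instead of the plain value
assert f.decrypt(ciphertext) == b"customer@example.com"
```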
There are also various types of homomorphic encryption, but I'm not sure there's anything that scales polynomially. In any case, the overheads are huge.
I'm curious if there's a reason why you don't trust your hosting provider - or is this just a scenario?
If this is something you have to worry about, sounds like you should be looking at other providers. Protecting yourself from your hosting partner seems counterproductive, IMO.