Security when connecting to EC2 instance

I have just built a cloud using AWS.
It includes a few servers, including an RDS server that runs an accounting application. I have limited access to this server to a certain security group, and I've also set up some Group Policy.
What I wanted to know is: what security can I put in place to protect the server, i.e. before users even log into it?
Maybe something like a dial-in VPN, something users have to authenticate with before they even have the option of accessing the RDS server.

It sounds like this might be a fit for Amazon Virtual Private Cloud (VPC). The pricing is the same as for public instances; what you pay for on top is the VPN access, billed per connection and per hour.


What are the recommended security settings for setting up an Amazon RDS database [closed]

I have a Node.js app that's connected to an Amazon RDS database, and the app itself is hosted on Elastic Beanstalk.
To get it working I've allowed "all public connections" to the database and set it up so that all inbound traffic can reach the database. But if someone gets hold of the endpoint link to my database they can access it however they want, if I understand things correctly, so I'm asking how I should set up the security settings for my app.
In the client, my app will write the users' login data and store some information about them, and they will be able to see their own data and the data of people who are their "friends" in the app.
I've read through some of the documentation but as I'm pretty new to all this I have a hard time understanding what would be the best solution for me.
Thanks.
From a network security perspective, you want to make sure that only resources that need to talk to your database are able to do so, i.e. you want a "least-privilege" approach on your network layer.
The way to do that in AWS is through security groups and subnets. Since a database should never be publicly accessible, it should be deployed into a private subnet. That ensures the database can't be exposed to the internet by accident, while resources within your network/VPC are still able (in principle) to send traffic to it.
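Most people create the instance in the console or with infrastructure-as-code, but as a rough sketch of the two settings that matter here (a DB subnet group made up of private subnets, and public accessibility turned off), creating it with the AWS SDK for JavaScript v3 might look like this; every identifier below is a placeholder:

```typescript
import { RDSClient, CreateDBInstanceCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "eu-west-1" });

async function createPrivateDbInstance() {
  await rds.send(
    new CreateDBInstanceCommand({
      DBInstanceIdentifier: "appdb",
      Engine: "mysql",
      DBInstanceClass: "db.t3.micro",
      AllocatedStorage: 20,
      MasterUsername: "admin",
      MasterUserPassword: "use-secrets-manager-instead", // placeholder
      // A subnet group that contains only private subnets of your VPC.
      DBSubnetGroupName: "appdb-private-subnets",
      // Keep the instance off the public internet.
      PubliclyAccessible: false,
      VpcSecurityGroupIds: ["sg-db111111"],
    })
  );
}

createPrivateDbInstance().catch(console.error);
```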
This is where the next layer of security comes in, we try to restrict access to the database to only the instances that really need it. The way to do that is through security groups. These are essentially instance-level firewalls that control who is allowed to access the instance. Your beanstalk instances will have a security group as well. Security groups can use references to other security groups, so ideally you'd configure the database security group to only allow traffic on the database port (e.g. 3306 for MySQL) from the security group of the Elastic Beanstalk instances.
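As a sketch of that rule (all IDs are placeholders), the database security group gets an inbound rule whose source is the Beanstalk instances' security group rather than a CIDR range; with the AWS SDK for JavaScript v3 that could look roughly like:

```typescript
import {
  EC2Client,
  AuthorizeSecurityGroupIngressCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

async function allowAppServersOnly() {
  // sg-db111111 = security group attached to the RDS instance,
  // sg-eb222222 = security group attached to the Elastic Beanstalk instances.
  await ec2.send(
    new AuthorizeSecurityGroupIngressCommand({
      GroupId: "sg-db111111",
      IpPermissions: [
        {
          IpProtocol: "tcp",
          FromPort: 3306, // MySQL; use 5432 for PostgreSQL, etc.
          ToPort: 3306,
          UserIdGroupPairs: [
            { GroupId: "sg-eb222222", Description: "Allow app servers only" },
          ],
        },
      ],
    })
  );
}

allowAppServersOnly().catch(console.error);
```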
Afterwards only the beanstalk instances will be able to talk to your database, which is a good first step.
Now that only your application servers can access the database server, you should think about how you handle authentication from these servers. Avoid hard-coding your database credentials on the machines; instead, store them in an external service such as AWS Secrets Manager or the Systems Manager Parameter Store. You then give the instances permission to fetch the credentials by extending the permissions of the instance role, and write code in your application to read the credentials and connection string from one of these sources.
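A minimal sketch of fetching the credentials from Node.js with the AWS SDK v3 might look like the following; the secret name and the JSON shape of the secret are assumptions (RDS-managed secrets are stored as a JSON string by convention):

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// The Beanstalk instance role must allow secretsmanager:GetSecretValue on this secret.
const client = new SecretsManagerClient({ region: "eu-west-1" });

export async function getDbCredentials() {
  // "prod/appdb/credentials" is a made-up secret name for illustration.
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/appdb/credentials" })
  );
  // Typically a JSON string such as { "username": "...", "password": "...", "host": "...", "port": 3306 }
  return JSON.parse(response.SecretString ?? "{}");
}
```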
For additional security, ensure that the connection to the database is encrypted in transit using some form of SSL/TLS; the specific settings depend on your database engine.
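As one illustration (not the only way to do it), here is roughly what an in-transit-encrypted MySQL connection could look like from a Node.js app using the mysql2 package; the host, credentials and CA bundle path are placeholders, and in practice the values would come from Secrets Manager as described above:

```typescript
import { readFileSync } from "fs";
import * as mysql from "mysql2/promise";

async function connectWithTls() {
  const connection = await mysql.createConnection({
    host: "appdb.xxxxxxxx.eu-west-1.rds.amazonaws.com", // placeholder endpoint
    user: "app_user",
    password: "from-secrets-manager",
    database: "appdb",
    // Verify the server certificate against the RDS CA bundle you downloaded.
    ssl: { ca: readFileSync("/etc/ssl/rds-ca-bundle.pem") },
  });

  const [rows] = await connection.query("SELECT 1 AS ok");
  console.log(rows);
  await connection.end();
}

connectWithTls().catch(console.error);
```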
I'd also suggest you take a look at the security section of the RDS documentation - this has been an overview of the first steps you should consider, but there is always more to do :)
References
Guide on creating a VPC for your RDS DB
SSL/TLS configurations
Security Groups and RDS
Using the Secrets Manager to rotate DB credentials

Azure WebService - MySQL - Redis configuration

I am creating a web service with C# on .NET Core 3.0 that uses MySQL and Redis, but I am not so familiar with Azure, so I need advice about configuring everything.
I had MySQL hosted on AWS, but I am transferring it to Azure because I think performance (speed) will be better on Azure, since everything will be in the same data center. Right?
But on my MySQL page the host is something like '*.mysql.database.azure.com'. Does that mean every connection will go out of Azure and then come back? Don't I get some local IP for the connection? Same question for Redis.
Do I need to configure some local network on Azure, and will that impact the speed of the app? And is MySQL a good choice on Azure, or should I try another database?
I have just been reading about Azure Virtual Networks, but as I understand it, a virtual network's sole purpose is to isolate elements from the outside network?
You will get better performance if your MySQL instance and your app service are in the same region (basically the same data centre).
The connection string points to mysql.database.azure.com, but remember the connection is a TCP/IP connection, so the DNS lookup will recognise that this address is in the same region (same data center), and the TCP/IP connection will then go to an internal IP.
You could use tcpping in your App Service's Kudu console to test this and see the result.
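If you'd rather check from code than from the Kudu console, a quick DNS lookup shows the address the app would actually connect to; the hostname below is a placeholder:

```typescript
import { resolve4 } from "dns/promises";

async function showResolvedAddress() {
  // Replace with your actual Azure Database for MySQL host name.
  const addresses = await resolve4("myserver.mysql.database.azure.com");
  console.log(addresses); // the IP(s) your connection will actually target
}

showResolvedAddress().catch(console.error);
```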
The basic rule is that you should group your app and database in the same region for better performance and lower cost (Microsoft doesn't charge for traffic within the same region).
Azure Virtual Network serves a different purpose. For example, if you have some on-premises database servers and you want to reach them from Azure, then a virtual network could be helpful. But for the scenario you described, it is not really needed.
The company I work for has Microsoft Azure support included; if you or your company have a support contract with them, you can raise questions directly with them and get really quick responses.

How to secure data of clients in my Azure instance

My company developed a business suite which is not a SaaS platform yet. We're in beta now and will launch V2 within the next 2 months. Currently we are creating instances for interested clients (free for a year), but we are getting questions about whether their data is secure. My question is: since we are creating their instances on our Azure platform, is there a way to make sure that we won't be able to access their data in any way?
Thanks in advance!
Some of the security measures you can configure and present to clients are:
Firewall rules that restrict access to the database based on the originating IP address of each request. You can share the firewall settings showing that only specific virtual machines/computers have access to the client's database (a sketch follows after this list).
Authentication to the database. You can remove any SQL authentication (username/password based) and configure only Azure integrated security for the applications accessing the database. Best practice would be using service accounts to access the database; you can showcase this too.
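As a rough sketch of the firewall-rule point above, assuming an Azure SQL logical server managed with the @azure/arm-sql package (the subscription ID, resource names and IP addresses are all placeholders), a rule pinning access to a single client VM could look like this:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { SqlManagementClient } from "@azure/arm-sql";

async function allowClientVmOnly() {
  const credential = new DefaultAzureCredential();
  const client = new SqlManagementClient(credential, "<subscription-id>");

  // Allow only the client's application VM to reach the database server.
  await client.firewallRules.createOrUpdate(
    "client-a-rg",        // resource group (placeholder)
    "client-a-sqlserver", // logical SQL server name (placeholder)
    "client-a-app-vm",    // rule name (placeholder)
    { startIpAddress: "203.0.113.10", endIpAddress: "203.0.113.10" }
  );
}

allowClientVmOnly().catch(console.error);
```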

Do you need SSL for the connection between MongoLab and Heroku?

Is it secure to have data sent from a Heroku app to a free database at MongoLab?
The data could be things like emails and preferences.
Or do you need SSL? I've read about MongoDB over SSL.
I've asked around but couldn't find anything specific to MongoLab.
From MongoLab's documentation:
Securing communications to your database
You should always try to place your application infrastructure and your database in the same local network (i.e., datacenter / cloud region), as it will be the most secure method of deployment and will minimize latency between your application and database.
When you connect to your MongoLab database from within the same datacenter/region, you communicate over your cloud hosting provider's internal network. All of our cloud hosting providers provide a good deal of network security infrastructure to isolate tenants. The hypervisors used do not allow VMs to read network traffic addressed to other VMs and so no other tenant can "sniff" your traffic.
However, when you connect to your MongoLab database from a different datacenter/region, your communications are less secure. While your database does require username / password authentication (with credentials that are always encrypted on the network), the rest of your data is transmitted unencrypted over the open internet. As such you are potentially vulnerable to others "sniffing" your traffic.
Using MongoDB with SSL connections
Available for Dedicated plans running MongoDB 2.6+ only
To further secure communications to your database, MongoLab offers SSL-encrypted MongoDB connections on Dedicated plans running MongoDB 2.6 or later. Even when using SSL, we still recommend placing your application infrastructure and your database in the same datacenter/region to minimize latency and add another layer of security.
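For what it's worth, here is a rough sketch of what an SSL-enabled connection could look like from a Node.js app using the current MongoDB driver; the connection string, database and collection names are placeholders, and your MongoLab dashboard would give you the real values:

```typescript
import { MongoClient } from "mongodb";

// Placeholder URI; ssl=true / tls=true asks the driver for an encrypted connection.
const uri = "mongodb://dbuser:dbpass@example.mongolab.com:27017/mydb?ssl=true";

async function main() {
  const client = new MongoClient(uri, { tls: true }); // equivalent to ssl=true in the URI
  await client.connect();

  // Trivial read just to prove the encrypted connection works.
  const docs = await client
    .db("mydb")
    .collection("preferences")
    .find()
    .limit(1)
    .toArray();
  console.log(docs);

  await client.close();
}

main().catch(console.error);
```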
I did the same thing as you and emailed MongoLab to ask for details. I got an answer and am sharing it with you in case it helps.
The reply is below.
As long as your Heroku app and MongoLab database are in the same cloud region, we consider it safe to communicate between Heroku and MongoLab, as AWS' infrastructure prevents packet-sniffing within regions. If you use the MongoLab addon on Heroku this is automatic, but if you use a deployment provisioned directly at mongolab.com you'll need to manually select the matching region.
It looks like the connection between Heroku and MongoLab stays within the same region, and both are hosted on AWS, so I guess you don't need SSL. If you need it to be very safe, you can still use SSL for extra security.
Hope it helps.

Cloud combined with in-house database. How good is the security?

I'm currently doing research on cloud computing. I'm doing this for a company that works with highly private data, so I'm considering this scenario:
A hybrid cloud where the database stays in-house. The application itself could be in the cloud, because once a month it can get really busy, so there's definitely some scaling benefit to gain. I wonder how security for this would work exactly.
A customer would visit the website (which would be in the cloud) through a secure connection. This means the data will be passed to the cloud website encrypted. From there the data must eventually go to the database, but... how is that possible?
Because the in-house database server doesn't know how to handle the already-encrypted data (I think?). The in-house database server is not part of the certificate that has been set up between the customer and the web application. Am I right, or am I overlooking something? I'm not an expert on certificates and encryption.
Also, another question: if this could work out and the data were encrypted all the time, would it be safe to put this in a public cloud environment, or should a private cloud still be used?
Thanks a lot in advance!
Kind regards,
Rens
The secure connection between the application server and the database server should be fully transparent from the application's point of view. The TLS connection from the customer terminates at the web server, which decrypts the request; the application then opens its own, separately secured connection to the database. A VPN connection can link the cloud instance your application runs on with the on-site database, allowing an administrator to simply define a data source using the database server's IP address.
Of course this does create a security issue when the cloud instance gets compromised.
Both systems can live separately and communicate with each other through a message bus. The website can publish events for the internal system (or any other party) to pick up, and the internal system can publish events that the website can process.
This way the website doesn't need access to the internal database, and the internal application doesn't have to share more information than is strictly necessary.
By publishing those events on a transactional message queue (such as MSMQ) you can make sure messages are never lost, and you can configure transport-level and message-level security to ensure that others aren't tampering with messages.
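MSMQ is a Windows/.NET technology; purely as an illustration of the same publish pattern from a different stack (here sketched with RabbitMQ and the amqplib package, which is an assumption about your environment), a durable queue with persistent messages gives a similar "messages are never lost" property:

```typescript
import * as amqp from "amqplib";

async function publishOrderEvent() {
  // Broker URL, queue name and payload are all placeholders.
  const conn = await amqp.connect("amqps://user:pass@broker.internal:5671");
  const channel = await conn.createChannel();

  // Durable queue + persistent messages: events survive a broker restart,
  // which is the delivery guarantee mentioned above.
  await channel.assertQueue("orders.created", { durable: true });
  channel.sendToQueue(
    "orders.created",
    Buffer.from(JSON.stringify({ orderId: 42, total: 99.95 })),
    { persistent: true }
  );

  await channel.close();
  await conn.close();
}

publishOrderEvent().catch(console.error);
```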
The internal database will not get compromised once a secured connection is established with the static MAC address of the user accessing the database. The administrator can grant access to a MAC address through a one-time approval and add the user in his Windows console.
