Security groups on AWS

I understand that AWS/EC2 security groups are just like a firewall. But can I ask:
How is this implemented, for any Amazon insiders reading? Is it software, or an off-the-shelf hardware device?
What happens within EC2? For example, does a security group stop me from flooding a competing website's HTTP endpoint from within the EC2 environment using its private IP address? Can I access their RDP service on the private address?

Since no one has answered yet, I'll give it a go. I'm not an AWS 'insider', but we have built a cloud management platform on top of AWS, so we have some experience.
A security group has the same effect as a firewall, and some of Amazon's documentation even refers to it as one, but you don't get the same level of control you would with your own software or hardware device; you just get a level of security rule-setting functionality.
In a previous business we did something similar for our shared services: essentially some hefty hardware firewalls that we administered, while giving users the ability to set some basic rules for their VMs. I believe AWS is pretty much the same. They have the POWER and the user has LOCAL VM control.
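To make "rule-setting functionality" concrete, here is a minimal sketch of adding an inbound rule with boto3 (the group ID is hypothetical); whatever hardware or software enforces it underneath, this API surface is all the control you get:

```python
# Minimal sketch of the rule-setting surface a security group exposes,
# using boto3. The group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```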
Hopefully someone from Amazon will see this and shed more light for you!
-Ed, digitalmines.com

Related

Disable Microservice initial exposed port after configuring it in a gateway

Hello, I've been searching everywhere and have not found a solution to my problem: how can I access my API through the gateway-configured endpoint only? Currently I can access my API at localhost:9000, and at localhost:8000, which is the Kong gateway port that I secured and configured. What's the point of using the gateway if the original port is still accessible?
So I am wondering: is there a way to disable port 9000 and access my API only through Kong?
Firewalls / security groups (in the cloud), private (virtual) networks, and multiple network adapters are usually used to differentiate public vs. private network access. Cloud vendors (AWS, Azure, etc.) and hosting infrastructures (e.g. Kubernetes, Cloud Foundry) usually have such mechanisms built in.
In a production environment, Kong's external endpoint would run with public network access and all the service endpoints in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by ports.
Additionally, it is possible to configure separate roles for multiple Kong nodes - one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic, but do not accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.
Thanks for the answers; they each give a different perspective. Since I have a Scala/Play microservice, I added one of Play's built-in HTTP filters in my application.conf, allowing only the Kong gateway. Now when I try to access my application at localhost:9000 I get denied, and that's absolutely what I was looking for.
I hope this answer is helpful for anyone in the same situation.
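For readers on other stacks, the same idea (an application-level allowlist that only admits requests arriving via the gateway) can be sketched as generic WSGI middleware in Python; the host values are hypothetical:

```python
# Minimal WSGI middleware sketch of the allowlist-filter idea: reject any
# request whose Host header is not the gateway's. Host values below are
# hypothetical examples.
class HostAllowlistMiddleware:
    def __init__(self, app, allowed_hosts):
        self.app = app
        self.allowed_hosts = set(allowed_hosts)

    def __call__(self, environ, start_response):
        if environ.get("HTTP_HOST", "") not in self.allowed_hosts:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden: direct access is not allowed"]
        return self.app(environ, start_response)

# e.g. only admit requests addressed through the Kong listener:
# app = HostAllowlistMiddleware(app, {"localhost:8000"})
```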

Minimum Network Accessibility for IIS Web Server

I work in a very large, bureaucratic organization and I'm trying to pitch a simple (local) web interface to my team. Given extensive firewall and domain security, I am wondering if this is even possible.
My question is: From a network security perspective, what might prevent IIS from allowing connections from other users on my network?
I believe IIS uses port 80 for default traffic, but it isn't listed as "Listening" when I run netstat -a at the command prompt. I do have other ports listening, but my fear is that they are strictly monitored. Our organization also restricts connectivity between users to shared directories, so I'm wondering if that impacts anything like Windows Authentication in IIS.
I have very little network security experience so thank you in advance to anyone who can shed some light on this!
"what might prevent IIS from allowing connections from other users on my network?"
- local firewall (GPO)
- more GPOs regarding IIS or services in general
- switch ACLs
- switch port privacy
- firewall rules
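You can narrow down which of these layers is in play with a plain TCP reachability test from a colleague's machine; a minimal Python sketch (the host name and port are examples):

```python
# Minimal TCP reachability check: if this fails from a peer's machine but
# works locally, something in between (GPO firewall, switch ACL, etc.) is
# blocking the port. Host name and port are examples.
import socket

def can_connect(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_connect("my-workstation-hostname", 80))
```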
If your company has a network service policy, you shouldn't try to circumvent it. It might put your job in danger.

Is Azure Traffic Manager reliable for failover? What other problems should I be worried about?

I am planning to use Azure Traffic Manager to fail my app over from one Azure region to another.
I need some suggestions on whether that is the correct approach to failover. We have seen issues with Azure where most of the services in one region go down for a few hours. I understand that Azure Traffic Manager is not tied to a particular region, but is it possible that Traffic Manager itself goes down, or that the Traffic Manager endpoint is unreachable even though my backend web app is reachable?
If I do use Azure Traffic Manager, what other problems should I be worried about?
I've been working with TM for some time now, so here are a few issues I haven't seen mentioned before:
Keep-Alive
If your service allows Keep-Alive, then your DNS entry will be ignored as long as the connection remains open. I've seen some exceptionally odd behavior result from this, including users being stuck on a fallback page since they kept using the connection, causing it to remain open indefinitely. If you have access to IIS Manager, you can force Keep-Alive to be false.
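A small Python sketch of the effect (and a client-side mitigation); the URL is hypothetical:

```python
import requests

# With a Session, requests reuses the pooled TCP connection (keep-alive),
# so DNS is only consulted when a new connection is opened; a Traffic
# Manager DNS change goes unnoticed while the connection stays open.
s = requests.Session()
for _ in range(3):
    s.get("https://myapp.trafficmanager.net/health")  # hypothetical URL

# Sending "Connection: close" forces a fresh connection (and a fresh DNS
# lookup, subject to OS/resolver caches) on every request.
requests.get("https://myapp.trafficmanager.net/health",
             headers={"Connection": "close"})
```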
Browser DNS Caching
Most browsers have their own DNS cache, and very few honor DNS Time To Live. In my experience Chrome is pretty responsive, while IE and Edge have significant delays if you need them to roll over quickly. I've heard that Opera is particularly bad.
Other DNS Caching
Even if you're not accessing your service through a browser, other components can have DNS caches, and some of them will allow you to manage the cache yourself. This can in theory even depend on your ISP's DNS caching, though reports on the magnitude of this vary significantly.
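A quick way to see the TTL your resolver hands back (well-behaved caches should re-query after that many seconds); a sketch assuming dnspython, with a hypothetical profile name:

```python
# Check the TTL returned for a Traffic Manager profile name.
# Assumes dnspython; the profile name is hypothetical.
import dns.resolver

answer = dns.resolver.resolve("myapp.trafficmanager.net", "A")
print("TTL:", answer.rrset.ttl)
print("IPs:", [record.address for record in answer])
```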
Traffic Manager works at the DNS level, which itself is replicated. However, even then, you should still build redundancy into your solution.
Take a look at the Azure Architecture Center under "Make all things redundant" and you will see a recommendation for Traffic Manager:
"consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service."
The Traffic Manager internal architecture is resilient to the failure of any single Azure region. So, even if a region fails, Traffic Manager should stay up. That applies to all Traffic Manager components: control plane, endpoint monitoring, and DNS name servers.
Since Traffic Manager works at the DNS level, it doesn't have an 'endpoint' that proxies your traffic--it uses DNS to direct clients to the appropriate endpoint, and clients then connect to those endpoints directly. Thus, an unreachable endpoint is an application problem, not a Traffic Manager problem.
That said, if the Traffic Manager DNS name servers are down, you have a serious problem. Your DNS resolution path will fail and your customers will be impacted. The only solution is to either accept the risk (small, but it can never be zero) or have a plan in place to use another DNS system, either in parallel or as a failover. This is not a limitation of Traffic Manager; you could say the same about any DNS-based traffic management system.
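You can observe this DNS-level behavior yourself: resolving the profile name simply returns whichever endpoint Traffic Manager currently selects, and clients connect there directly. A sketch assuming dnspython, with a hypothetical profile name:

```python
# Resolving a Traffic Manager profile name returns a CNAME to the
# currently selected endpoint; no traffic passes through Traffic Manager
# itself. Assumes dnspython; the profile name is hypothetical.
import dns.resolver

answer = dns.resolver.resolve("myapp.trafficmanager.net", "CNAME")
for record in answer:
    print("Currently routed to:", record.target)
```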
The earlier answer from DornaDigital is very good (other than the first point which suggests DNS caching will protect you through a name server outage--it won't). It covers some important points. In short, DNS-based failover works well for new sessions. Existing clients may have to refresh or even close their browser and reconnect.
I also agree with the details provided by DornaDigital.
There are considerations for front-end applications as well. Browsers all have different thresholds for how long they maintain persistent connections. Chromium, for example, currently keeps a connection open unless it has been inactive for 300 seconds.
In our web applications, we detect the failover by the presence of a certain number of failed requests to the endpoint. After requests begin failing, we pause requests for 301 seconds to allow the connection to reset. This allows the DNS change from Traffic Manager to be applied to subsequent requests. We pop up a snackbar to tell the user we are having an issue and display a countdown until requests resume, similar to Gmail when it has trouble connecting to its servers.
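The pattern is language-agnostic; here is a rough Python sketch of the detect-then-pause logic (the threshold, pause, and URL are illustrative, not our production values):

```python
import time
import requests

FAILURE_THRESHOLD = 3   # consecutive failures treated as "failover in progress"
PAUSE_SECONDS = 301     # just past Chromium's 300 s keep-alive idle timeout

def get_with_failover_pause(url):
    failures = 0
    while True:
        try:
            return requests.get(url, timeout=5)
        except requests.RequestException:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # Pause so the stale keep-alive connection times out and the
                # next attempt re-resolves DNS against Traffic Manager.
                time.sleep(PAUSE_SECONDS)
                failures = 0
```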
I hope that gives you one idea on how to build some redundancy into your web apps.
I disagree with Jonathan, as his understanding of the resiliency of the Traffic Manager service conflicts with Microsoft's own documentation on the subject.
When you provision Azure Traffic Manager, you select a region in which to deploy the service. I inferred from this (correctly, I believe) that if said region were to fail, the Traffic Manager service could also be impacted, and in turn your application would not properly fail over to the secondary region.
According to Microsoft's Azure Application Architecture Guide, under "Make all things redundant", a customer should deploy Traffic Manager into more than one region:
Include redundancy for Traffic Manager. Traffic Manager is a possible failure point. Review the Traffic Manager SLA, and determine whether using Traffic Manager alone meets your business requirements for high availability. If not, consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.
Azure Application Architecture Guide - Make all things redundant
My thought and intention is to not deploy Traffic Manager within the primary service region, but instead to deploy it into the secondary (failover) region and a tertiary (third) region as a backup.

Azure Multi-Site VPN from One Location

We have a client who wants to connect their premises to Azure. Their main hindrance at this point is determining the best way to connect, given their current connectivity configuration. They have two redundant ISP connections at the head office for internet access. They want a VPN connection to Azure that operates in a similar way, i.e. if ISP A went down it would seamlessly use ISP B, and vice versa. The normal multi-site VPN configuration does not fit this, since there is a single local network behind both links, which means the networks behind the separate VPNs over each ISP would have overlapping IP address ranges, and that is not supported. Is such a configuration possible? (See diagram below.)
Failing that, is there a way to abstract the two ISP connections behind one VPN connection to Azure?
They are currently considering a Cisco ASA device to help with this. I'm not familiar with the features of this device, so I cannot verify whether it will solve their issue. I know there is also a Cisco ASAv appliance in the Azure Marketplace, but I don't know whether that could be part of a solution if they went with such a device.
[Diagram: required VPN configuration]
The Site-to-Site VPN capability in Azure does not allow for automatic failover between ISPs.
What you could do is one of the following:
- Have an automation task that re-creates the local network gateway and the gateway connection upon failover. This is manual and would take some RTO to get up and running (a rough sketch follows this list).
- Use Cisco CSRs to create a DMVPN mesh. You should be able to achieve the configuration you want using that option. You would use UDRs in Azure to ensure proper routing.
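As a rough sketch of the first option, assuming the azure-mgmt-network Python SDK (class and method names can differ between SDK versions); all resource names and addresses below are hypothetical:

```python
# Sketch of the automation-task idea: after an ISP failover, repoint the
# Azure local network gateway at ISP B's public IP and let the VPN
# connection re-establish. Names and addresses are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, LocalNetworkGateway

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

gateway = LocalNetworkGateway(
    location="westeurope",
    gateway_ip_address="203.0.113.20",  # ISP B's public IP after failover
    local_network_address_space=AddressSpace(address_prefixes=["10.1.0.0/24"]),
)
poller = client.local_network_gateways.begin_create_or_update(
    "client-rg", "onprem-local-gateway", gateway
)
poller.result()  # blocks until the gateway update completes
```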
I haven't done it in Azure, but here is what you would do in AWS (and I am sure there is a parallel in Azure):
Configure a "detached VGW" (virtual private gateway) in AWS. Use a DMVPN cloud to connect the CSRs to the multi-site on-premises network.
Also, for failover between ISPs, you could look at DNS load balancing via Azure's parallel to AWS's Route 53.
Reference thread: https://serverfault.com/questions/872700/vpc-transit-difference-between-detached-vgw-and-direct-ipsec-connection-csr100

PCI-DSS 1.3.3/1.3.5, restricting outbound access from DMZ to Internet

We are in the process of obtaining PCI Level 1 compliance, and I'd really appreciate it if anyone could help shed some light on the PCI-DSS 1.3.3 and 1.3.5 requirements, which state:
1.3.3 - "Do not allow any direct routes inbound or outbound for traffic between the Internet and the cardholder data environment"
1.3.5 - "Restrict outbound traffic from the cardholder data environment to the Internet such that outbound traffic can only access IP addresses within the DMZ."
Right now we are using a Juniper SRX firewall, with web servers in the DMZ and MySQL database servers in the Trusted zone.
For Trusted, we just finished locking down all egress to the public internet, and we had to set up a proxy server in the DMZ that the Trusted hosts fetch their updates (yum, clamav, WAF rules, etc.) through.
But we didn't really expect the DMZ to also require the complete egress lockdown we've done on Trusted. I find an egress lockdown on the DMZ a bit of a challenge (unless I'm mistaken), since our proxy also lives there and needs outbound access to the public internet for grabbing updates and whatnot. Whitelisting by IP is challenging because third-party vendors have ever-changing IPs.
So my question is: exactly how much "restriction" is required? For Trusted, we have deny-all egress plus a whitelist of select IP addresses it can access. Does the DMZ also require this, or can the DMZ just have deny-all based on ports? That would make things a lot easier, as we wouldn't have to worry about the ever-changing IP addresses of mirrors and third-party services.
I found some proxy appliances that do intelligent filtering based on host names (in other words, dynamic IP whitelisting), but they seem to cost quite a bit of money.
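The same idea can be approximated with a scheduled job that re-resolves the allowed host names and regenerates the firewall's address entries, so the whitelist tracks the vendors' changing IPs; a minimal Python sketch (host names are examples, and a QSA would need to accept the approach):

```python
# Poor-man's hostname-based egress whitelisting: periodically re-resolve
# the mirrors you need and emit allow entries for the firewall. The host
# names below are examples only.
import socket

ALLOWED_HOSTS = ["mirrorlist.centos.org", "database.clamav.net"]

def resolve_all(host):
    return sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})

for host in ALLOWED_HOSTS:
    for ip in resolve_all(host):
        # Feed these into the SRX address book / policy instead of printing.
        print(f"allow {ip}  # {host}")
```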
As you can see, I'm looking for answers; our auditor isn't much help, he just says it needs to be locked down. If anyone here has experience with PCI auditing, I'd love to hear what you have to say.
If you have restricted inbound and outbound access to your DMZ and there is no direct access from the DMZ to the Internet then you have met the requirements.
By using a proxy, most QSAs will agree that you have removed the direct access. If there are services for which proxies aren't available then you could either remove them from the same DMZ as the cardholder data environment (e.g. if they are not part of the immediate service) or discuss this with your QSA. It's possible you will need to implement compensating controls or look at other creative solutions.
You will need to convince your QSA that these are legitimate relaxations to the restrictions. This is really something where they should be able to look at your documentation and implementation and give you a straight yes or no.
As with many of the PCI requirements there is flexibility in the interpretation. You can find more about the intent of each requirement in this document: https://www.pcisecuritystandards.org/documents/navigating_dss_v20.pdf
