I've finally got an Azure SFTP container instance properly set up, but I ran into a wall while configuring security for it (much like the person here).
My basic flow is this:
Public IP (PIP) on Azure ->
Load balancer using the PIP so it can be reached from the wider web ->
Load balancing rule to the backend subnet ->
SFTP container group living on that subnet ->
SFTP container in that group
Nothing special, and before associating the NSG I verified that the network is operating as intended: connecting to the SFTP server works properly. The problem is that after associating the NSG with the container group's subnet, I was still able to connect to it without any configured rules. Even after applying a rule at priority 100 to deny all traffic, to rule out anything I might have missed in the default rules, I can still get in.
After reading that NSG flow logs don't include container instances, I'm torn between believing users have NSGs working with container groups but are just missing logs, and the possibility that NSGs don't work with container groups at all. If anyone has any guidance on properly using NSGs here, please let me know. Otherwise, if there's another tool I should be using, please recommend it (Azure Firewall is included in the container group tutorial, but I believe it's complete overkill for what I need and also prohibitively expensive).
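For reference, here is roughly what the deny rule and subnet association look like when scripted with the Azure SDK for Python (a minimal sketch; the resource names are placeholders for my actual resources):

    # Minimal sketch of the deny-all rule and subnet association described above.
    # Resource names (resource group, VNet, subnet, NSG) are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Deny-all inbound rule at priority 100, evaluated before the default rules.
    deny_all = SecurityRule(
        name="DenyAllInbound",
        priority=100,
        direction="Inbound",
        access="Deny",
        protocol="*",
        source_address_prefix="*",
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="*",
    )
    client.security_rules.begin_create_or_update(
        "sftp-rg", "sftp-nsg", "DenyAllInbound", deny_all
    ).result()

    # Associate the NSG with the container group's subnet.
    subnet = client.subnets.get("sftp-rg", "sftp-vnet", "aci-subnet")
    subnet.network_security_group = client.network_security_groups.get("sftp-rg", "sftp-nsg")
    client.subnets.begin_create_or_update("sftp-rg", "sftp-vnet", "aci-subnet", subnet).result()

Even with that rule in place and the NSG attached to the subnet, the SFTP connection still goes through.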
EDIT: Adding picture of NSG rules -
Based on my validation, the NSG associated with the ACI subnet currently does not work in this scenario for an SFTP container service behind an Azure load balancer. The NSG rule does not block the client's public IP address; connections behave exactly as they do without it.
As a workaround, you could restrict SFTP access with an NGINX reverse proxy as described in this blog, or add a service like Azure Application Gateway as a reverse proxy to direct your public-facing traffic to your backend instance.
I am currently trying to learn Azure cloud. I have worked with AWS before, so I may be trying to carry over some concepts here.
I need to know how to configure a logical firewall to allow traffic from an Azure load balancer to VMs (scale sets or backend pools).
I was able to do this between different VMs by assigning the VMs to different application security groups and allowing the respective traffic from those groups in the network security group. I found the service tag 'AzureLoadBalancer' as an option in NSG rules, but it seems that is only for allowing traffic from health probes and not from the actual load balancer (there is also no option to select a specific load balancer). In the end I had to allow traffic from the public IP of the load balancer to the VNET to get the load balancer to work.
I hope there is a logical way to do this; if there is, I am not sure what I am missing here. I would appreciate anyone who could help.
Normally you wouldn't want to firewall traffic from the Azure Load Balancer; it's a load balancer, so it needs to be able to reach your endpoints. I'm not quite sure what you are trying to achieve here. You might be able to simply micro-segment your endpoints onto different subnets and apply different NSGs (with different allow/deny rules) at the subnet level. Otherwise, an actual firewall would be required between your Azure Load Balancer and your endpoints, for example if you need L7 inspection.
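As a sketch of that subnet-level approach (Azure SDK for Python; the names, ports, and address ranges below are just placeholders), you could give each endpoint tier its own subnet and its own NSG with different rules:

    # Sketch: two subnets, each with its own NSG carrying different allow rules.
    # All names, ports, and prefixes are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule, Subnet

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
    rg, location, vnet = "app-rg", "australiaeast", "app-vnet"

    def nsg_with_rule(nsg_name, rule):
        # Create (or update) an NSG containing a single custom rule.
        return client.network_security_groups.begin_create_or_update(
            rg, nsg_name, NetworkSecurityGroup(location=location, security_rules=[rule])
        ).result()

    # Web tier: allow 443 from the internet.
    web_nsg = nsg_with_rule("web-nsg", SecurityRule(
        name="AllowHttpsIn", priority=100, direction="Inbound", access="Allow",
        protocol="Tcp", source_address_prefix="Internet", source_port_range="*",
        destination_address_prefix="*", destination_port_range="443"))

    # App tier: allow 8080 only from the web tier's subnet.
    app_nsg = nsg_with_rule("app-nsg", SecurityRule(
        name="AllowFromWebSubnet", priority=100, direction="Inbound", access="Allow",
        protocol="Tcp", source_address_prefix="10.0.1.0/24", source_port_range="*",
        destination_address_prefix="*", destination_port_range="8080"))

    # Attach each NSG to its subnet.
    for name, prefix, nsg in [("web-subnet", "10.0.1.0/24", web_nsg),
                              ("app-subnet", "10.0.2.0/24", app_nsg)]:
        client.subnets.begin_create_or_update(
            rg, vnet, name, Subnet(address_prefix=prefix, network_security_group=nsg)
        ).result()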
I would like to use multiple egress IPs for an Azure Container Group where each container instance in that group uses one of those IPs.
Something like a NAT gateway might work, but apparently that is not supported.
I also read there may be a workaround with Azure Firewall, but can't find resources that describe this.
What is the recommended solution for my use-case?
Edit: Also noting that our containers are Windows-based, which may add further limitations to our options.
Since Windows containers are not yet supported for deployment into a virtual network, there is no way to route the egress traffic manually.
References:
Virtual networking for container groups - limitations.
Container groups and the types of scenarios they enable.
Steps to configure a container group in a virtual network integrated with Azure Firewall.
Configure IP addresses for Azure Firewall.
I want to enable traffic from my webapp (which sits inside the VNET and has its private IP) to an Application Gateway (which is deployed to the same VNET and has an NSG attached to its subnet).
How can I do it?
If I add the webapp's outbound IP to the NSG as allowed, traffic works fine, but I do not want to hardcode this IP.
If I add the "Internet" service tag, it works as well, but that is too broad for my taste.
I could not find any other relevant service tags (tried "AppServiceManager", "AppService" and "AppService.AustraliaEast").
I also checked this document (and had to update the date in the filename to last Monday! :) ) but could not find the IP that worked for me (52.187.231.76).
The ideal solution would be to allow only VNET traffic, but this did not do the trick either... All the ServiceEndpoints are there.
I checked with Azure support. Unfortunately, there are no service tags available to do this yet.
The workaround is to manually add security rules for each application that is supposed to access the Application Gateway, allowing its outbound IPs.
To do so, go to the Azure portal and open the application that needs to be able to access the App GW. Go to the Properties blade and copy the outbound IP addresses. Then go to the NSG and create a new inbound security rule to allow access from all of those IPs (it can be done with a single rule).
According to Azure support, those IPs should not change unless you recreate the whole webapp, and the app can only cycle through those IPs.
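If you want to script that workaround instead of clicking through the portal, here is a rough sketch with the Azure SDK for Python (the resource and NSG names are placeholders): it reads the webapp's current outbound IPs and puts them all into a single allow rule on the Application Gateway subnet's NSG.

    # Sketch: copy a webapp's outbound IPs into one NSG allow rule.
    # Resource group, webapp, and NSG names are placeholders for your own.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    cred = DefaultAzureCredential()
    sub_id = "<subscription-id>"
    web = WebSiteManagementClient(cred, sub_id)
    net = NetworkManagementClient(cred, sub_id)

    # outbound_ip_addresses is a comma-separated string on the webapp resource.
    site = web.web_apps.get("app-rg", "my-webapp")
    outbound_ips = site.outbound_ip_addresses.split(",")

    rule = SecurityRule(
        name="AllowWebAppOutboundIPs",
        priority=200,
        direction="Inbound",
        access="Allow",
        protocol="Tcp",
        source_address_prefixes=outbound_ips,   # one rule covering all the IPs
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="443",
    )
    net.security_rules.begin_create_or_update(
        "app-rg", "appgw-subnet-nsg", rule.name, rule
    ).result()

Re-running the script after recreating the webapp would pick up the new outbound IPs.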
I have 2 AWS EC2 instances living inside 2 different subnets of my VPC.
I would like to allow the ruby app running on the first instance (say App#1) to call the endpoints of the app (say App#2) running on the 2nd instance.
I would also like my users to directly call the endpoints of App#2 from their browser.
Here is what I have tried (and mostly failed):
[Success!] I added the known IP addresses of my users to the inbound rules of the Load Balancer Security Group of App#2 and have confirmed that they can access the App#2 endpoints from their browsers.
[Fail!] I added the Load Balancer Security Group ID of App#1 to the inbound rules of the Load Balancer Security Group of App#2. But my logs tell me App#1 cannot access the endpoints of App#2.
[Fail!] I added the VPC Security Group ID of App#1 to the inbound rules of the Load Balancer Security Group of App#2 - nope, still doesn't work.
(Somehow, when I launched the instance for App#1, AWS automatically created 2 security groups for this instance - one for the VPC and one for the load balancer... I have no idea why/how this happened...)
[Fail!] I added the CIDR for the subnet App#1 was in to the inbound rules of the Load Balancer Security Group of App#2. Still no joy.
[Success...Sort Of] I assigned an elastic IP for the instance running App#1 and added that to the inbound rules of the Load Balancer Security Group of App#2. This works but I would rather not use this method since I would like to elastically scale my App#1 in the future and I do not know how to automatically assign more elastic IPs for the new instances when they spin up, add them to the inbound rules, and then somehow remove them when they shut down.
I feel like there has got to be a really clean solution to this problem and I am probably missing something painfully obvious. Can someone please give me a hint?
Any help would be appreciated!
It sounds like you might be using the public IP address of your load balancer, so the traffic looks like it is coming from the outside. Try using the private IP/DNS if there is one, or setting up a second, internally-facing load balancer.
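For example, here is a rough boto3 sketch of the second option (the name, subnets, and security group are placeholders); an internal ALB only gets private IPs, so traffic from App#1 stays inside the VPC and your security group rules apply as expected:

    # Sketch: create an internal ALB so App#1 reaches App#2 over private IPs.
    # Subnet and security group IDs are placeholders.
    import boto3

    elbv2 = boto3.client("elbv2")

    response = elbv2.create_load_balancer(
        Name="app2-internal-lb",
        Scheme="internal",          # private IPs only, not reachable from the internet
        Type="application",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-app2-lb-internal"],
    )
    internal_dns = response["LoadBalancers"][0]["DNSName"]
    print("Point App#1 at:", internal_dns)

    # You still need a target group and listener pointing at the App#2 instances,
    # just like the existing internet-facing load balancer has.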
So App#2 is in a public subnet and App#1 is in a private subnet. For example, the diagram will be something like:
Internet => LB#2 => App#2:80 (in public subnet) => LB#1 => App#1:4567 (in private subnet)
Let's open all inbound rules on all instances and load balancers and check whether you can access it via the internet.
Then apply security groups one layer at a time; don't change all of them at the same time.
Let me know which layer has the issue.
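Once everything works wide open, something like this boto3 sketch is what I'd apply one layer at a time (all group IDs are placeholders), referencing security groups instead of IP addresses so it keeps working when App#1 scales out:

    # Sketch: lock each layer down to the layer in front of it by referencing
    # security groups rather than IP addresses. All group IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    def allow_from_sg(target_sg, source_sg, port):
        # Allow TCP on `port` into `target_sg` only from members of `source_sg`.
        ec2.authorize_security_group_ingress(
            GroupId=target_sg,
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": port,
                "ToPort": port,
                "UserIdGroupPairs": [{"GroupId": source_sg}],
            }],
        )

    # Internet => LB#2 => App#2:80 => LB#1 => App#1:4567
    allow_from_sg("sg-app2-instances", "sg-lb2", 80)     # App#2 accepts 80 only from LB#2
    allow_from_sg("sg-lb1", "sg-app2-instances", 4567)   # LB#1 accepts 4567 only from App#2
    allow_from_sg("sg-app1-instances", "sg-lb1", 4567)   # App#1 accepts 4567 only from LB#1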
So we are starting to move to the cloud, and our biggest concern is security, as it should be. The thing that I am not sure about is how to secure the endpoints from public (internet) access. Is this even possible, or is there something else we can do to keep the environments in Azure out of the public eye?
This question is likely better suited to Server Fault. But until then...
In Azure IaaS v1, you can specify IP-based ACLs (access control lists) to restrict inbound traffic.
In IaaS v2, you can leverage NSGs (network security groups) to help restrict traffic into and out of specific VMs or virtual network subnets.
If you are using Azure ExpressRoute (a leased line into an Azure facility), the VMs can be addressed directly over the virtual network connection and don't need to have publicly exposed endpoints.
Then there's also all the usual options such as securing the connections on the VMs themselves. :)
If you are using Azure Resource Groups along with your VMs (which is available in the new portal), you cannot use endpoints because they are not available there, so you should do the following to open up the HTTP port or any other port:
(Sign in to your account on the new portal)
1- Select the VM that you want to manage ports on.
2- In Settings, click on Network interfaces and select your network interface.
3- Go to Network Security Group and select your group.
4- Add Inbound or Outbound security rules depending on what you need.
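If you prefer to script it, here is a rough equivalent of steps 1-4 using the Azure SDK for Python (the resource group, NIC, rule name, and port are placeholders); it assumes the NSG is attached to the VM's network interface rather than its subnet:

    # Sketch: find the NSG on the VM's NIC and add an inbound allow rule for HTTP.
    # Resource group, NIC name, priority, and port are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    net = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
    rg, nic_name = "my-rg", "myvm-nic"

    # Steps 2-3: look up the NSG attached to the VM's network interface.
    nic = net.network_interfaces.get(rg, nic_name)
    nsg_name = nic.network_security_group.id.split("/")[-1]

    # Step 4: add an inbound rule allowing the port you need (HTTP here).
    rule = SecurityRule(
        name="Allow-HTTP-In",
        priority=310,
        direction="Inbound",
        access="Allow",
        protocol="Tcp",
        source_address_prefix="*",
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="80",
    )
    net.security_rules.begin_create_or_update(rg, nsg_name, rule.name, rule).result()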