Secure Service Fabric Cluster But Make API Public

I am looking into using a Service Fabric cluster for a service with a public API. When creating a Service Fabric cluster I can choose either secure mode, which uses a certificate, or unsecured mode.
In unsecured mode, anyone can call the API, which is what I want; however, it also means that anyone can go to the management page at *northeurope.cloudapp.azure.com:19080 and do anything, which is obviously not OK.
I tried using secure mode with a certificate, and this prevents anyone without the certificate from using the management page, but it also seems to prevent anyone from calling the API.
Am I missing something simple? How do I keep the management side of the cluster secured, while making the API public so that anyone can call it?
Edit: After looking more carefully, it seems the intended behaviour is that, since I configured a custom endpoint when setting up the cluster, I should be able to call the service. So I believe it may just be an error in my code.

Securing a cluster has nothing to do with your application endpoints. There is a separation of concerns between securing the system (management endpoints, node authentication) and securing your applications (SSL, user auth, etc.). Something else is going wrong here; most likely you have not configured the Azure Load Balancer to allow traffic into your cluster on the ports your services are listening on. See here for more info on that: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/#service-fabric-in-azure

Related

How to implement service discovery in nodejs?

Can somebody please explain how to implement service discovery with Node.js without any frameworks, just Node.js and Express?
All I can find on YouTube is how to implement this with frameworks like Spring Boot (which I don't want to use).
What steps do I need to take to implement this? An example implementation would help a lot.
A service discovery service is a service that provides endpoints to manage the URLs or IPs of other services in the same environment. It's good practice because it allows you to decouple services from each other: with a service discovery system, services do not need to store each other's URLs or IPs. These values can simply be fetched from the discovery service, and they can then also be updated at runtime (still with a grace period, of course).
A very basic service discovery service would provide endpoints to
register a service instance,
unregister a service instance, and
look up registered instances.
Translated to HTTP verbs:
POST /service/:name
DELETE /service/:name/:id
GET /service/:name
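As a rough illustration, here is a minimal sketch of such a registry in Node.js with Express and TypeScript. It keeps everything in an in-memory Map; the route shapes, the port and the Instance record are assumptions for the example, not a prescribed design.

    import express from "express";
    import { randomUUID } from "crypto";

    // Shape of a registered instance (illustrative assumption).
    type Instance = { id: string; host: string; port: number; registeredAt: number };

    const app = express();
    app.use(express.json());

    // In-memory registry: service name -> (instance id -> instance).
    // A real deployment would back this with shared, persistent storage.
    const registry = new Map<string, Map<string, Instance>>();

    // POST /service/:name -- register an instance of a named service.
    app.post("/service/:name", (req, res) => {
      const { host, port } = req.body ?? {};
      if (!host || !port) {
        res.status(400).json({ error: "host and port are required" });
        return;
      }
      const id = randomUUID();
      const instances = registry.get(req.params.name) ?? new Map<string, Instance>();
      instances.set(id, { id, host, port, registeredAt: Date.now() });
      registry.set(req.params.name, instances);
      res.status(201).json({ id });
    });

    // DELETE /service/:name/:id -- unregister a single instance.
    app.delete("/service/:name/:id", (req, res) => {
      const removed = registry.get(req.params.name)?.delete(req.params.id) ?? false;
      res.status(removed ? 204 : 404).end();
    });

    // GET /service/:name -- look up all registered instances of a service.
    app.get("/service/:name", (req, res) => {
      const instances = registry.get(req.params.name);
      res.json(instances ? [...instances.values()] : []);
    });

    app.listen(3000, () => console.log("registry listening on port 3000"));

A client would POST its own host and port on startup, DELETE its entry on shutdown, and GET the list of instances of any service it wants to call (typically caching the result and picking an instance at random or round-robin).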
This of course does not take into account:
authentication and authorization to prevent unwanted manipulation,
health checks to make sure that all registered services are actually alive and healthy (see the heartbeat sketch below), and
how to set up storage so that the service is fast and scalable.
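For the health-check point, one common approach is a heartbeat with a time-to-live: instances report in periodically, and the registry evicts anyone who goes quiet. A minimal standalone sketch follows; the route, the 30-second TTL and the logging are assumptions, and in the registry above the sweep would also remove the instance from the registry map.

    import express from "express";

    const app = express();

    // Heartbeat timestamps: instance id -> last time it reported in.
    const lastSeen = new Map<string, number>();
    const TTL_MS = 30_000; // instance counts as dead after 30s of silence (assumption)

    // PUT /service/:name/:id/heartbeat -- instances call this periodically to prove they are alive.
    app.put("/service/:name/:id/heartbeat", (req, res) => {
      lastSeen.set(req.params.id, Date.now());
      res.status(204).end();
    });

    // Periodic sweep: evict instances that missed their heartbeat window.
    setInterval(() => {
      const now = Date.now();
      for (const [id, ts] of lastSeen) {
        if (now - ts > TTL_MS) {
          lastSeen.delete(id);
          console.log(`evicted stale instance ${id}`);
        }
      }
    }, TTL_MS);

    app.listen(3000, () => console.log("heartbeat endpoint listening on port 3000"));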
It should give an idea of where to start, though. Generally I'd advise using a proven solution such as https://www.consul.io/

Security issues with oauth2-redirect and nip.io

I am developing a small demo API for my organization. I use Azure AD for authentication for the API (and the OpenAPI docs). Everything works perfectly in my local development environment, and I don't have the hassle of SSL since the oauth2-redirect is localhost. I am now ready to make my demo accessible inside my organization's network. However, Azure App Registration mandates that an oauth2-redirect link has to be https (or localhost, which is why it works perfectly for me). I can understand why, but I am eager to demo my API, so, if at all possible, I would like to avoid the hassle of setting up a reverse proxy, configuring TLS, etc. So my question is: if I use https://10.x.x.x.nip.io/oauth2-redirect, what are the security implications? I fear they are major, unfortunately. I guess nip.io could sniff my authorization if they wanted to?
This is not strictly an answer, since it doesn't discuss the security implications of nip.io. However, if TLS is the issue for a demo, note that it is configured automatically for an Azure App Service. For small demos it is therefore very convenient to deploy as an Azure App Service, especially if you protect it behind a private link.

Does Azure App Service communicate internally without going out through the Application Gateway?

I have Azure App Services behind an Azure Application Gateway/Firewall. A few of the applications talk to each other. Do those applications talk internally (using xxx.azurewebsites.net) or do they talk via the public domain (mydomain.com)?
Also, how can I check this in the logs?
Current configuration:
HTTP settings: "Pick host name from backend address" is checked.
Probes: "Pick host name from backend HTTP settings" is checked.
To answer your question: no, if your applications are inside Azure's network, the traffic usually won't go through the public domain. But it will go through the firewall/gateways and follow the same networking restrictions you have defined.
Which logs do you want to check? If you want to see the application event logs, you can do that using the SCM (Kudu) site. You can access it via Diagnostics/Advanced Tools in your Azure App Service.
You can enable access logs on the Application Gateway to see all the requests that hit it. The log has a hostname field where you can check how the site is being accessed.
Let me know if you have any further questions.

If we have already implemented authorization in a .NET Core microservice API gateway, do we need to implement it in all microservices as well?

Please help me understand microservice authentication with an API gateway.
Let's take an example: I have 10 different, independently deployed microservices and I have implemented an API gateway for all of them, meaning all requests pass through that gateway. Instead of adding authorization/JWT in every microservice, I added it in the API gateway. With this approach everything works fine, but my doubt/question is:
1. What if an end user has the URL of a deployed microservice and tries to connect to it without going through the gateway? Since I don't have authorization in place there, how do I stop this? Do I need to add the same authorization logic in every microservice as well? That would end up duplicating the code, so then what is the use of the API gateway?
Let me know if any other input is required; I hope I explained my problem correctly.
Thanks
CP Variyani
Generally speaking: your microservice(s) will either be internal or public. In other words, they either are or are not reachable by the outside world. If they are internal, you can opt to leave them unprotected, since the protection is basically coming from your firewall. If they are public, then they should require authentication, regardless of whether they are used directly or not.
However, it's often best to just require authentication always, even if they are internal-only. It's easy enough to employ client auth and scopes to ensure that only your application(s) can access the service(s). Then, if there is some sort of misconfiguration where the service(s) are leaked to the external network (i.e. Internet at large) or a hole is opened in the firewall, you're still protected.
An API gateway is used to handle cross-cutting concerns like authorization, TLS, etc., and it is also the single point of entry to your services.
Coming to your question: if your API services are exposed for public access then you have to secure them. Normally the API gateway is the only point exposed to the public; the rest of the services are behind a firewall (virtual network) and can only be accessed by the API gateway, unless you have some reason to expose your services publicly.
E.g. if you are using Kubernetes to deploy your services, you can make your services accessible only inside the cluster (the services have private IPs), and the only way to reach them is through the API gateway. You don't need to do anything special then.
However, if your services are exposed publicly (have public IPs) for any reason, then you have to secure them. So, in short, it depends on how you have deployed them and whether they have a public IP associated with them.
Based on your comments below: you should do the authentication in your API gateway and pass the token along in the request to your services. Your services then only validate the token; they don't redo the whole authentication. This way, if you want to update or change the authentication provider or flow, it's easier when it's kept in the API gateway.
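To make that split concrete, here is a minimal sketch of what token validation (as opposed to full authentication) could look like inside a downstream service. It uses Node.js/Express with the jsonwebtoken package purely for illustration; the question is about .NET Core, where the equivalent idea is the built-in JWT bearer middleware. The environment variable name, the audience value and the route are assumptions.

    import express from "express";
    import jwt from "jsonwebtoken";

    const app = express();

    // Public key of the identity provider that the gateway trusts (assumed to be supplied via env).
    const TOKEN_SIGNING_PUBLIC_KEY = process.env.TOKEN_SIGNING_PUBLIC_KEY ?? "";

    // Verify the bearer token forwarded by the gateway on every request.
    app.use((req, res, next) => {
      const header = req.headers.authorization ?? "";
      const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : null;
      if (!token) {
        res.status(401).json({ error: "missing bearer token" });
        return;
      }
      try {
        // Validation only: signature, expiry and audience are checked locally,
        // without calling back to the identity provider or redoing the login flow.
        const claims = jwt.verify(token, TOKEN_SIGNING_PUBLIC_KEY, {
          algorithms: ["RS256"],
          audience: "orders-api", // assumed audience for this example service
        });
        res.locals.user = claims;
        next();
      } catch {
        res.status(401).json({ error: "invalid or expired token" });
      }
    });

    // Example protected endpoint.
    app.get("/orders", (_req, res) => {
      res.json({ caller: res.locals.user, orders: [] });
    });

    app.listen(5001, () => console.log("orders service listening on port 5001"));

The important point is the division of work: the gateway performs the interactive authentication and token issuance once, while every service behind it only does this cheap, local validation.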

Windows Azure VPN and IP restriction

We integrate with a third-party service that we can run queries against; right now it is secured using HTTPS encryption and a username/password. We send our queries from a service running on the Windows Azure cloud.
The third-party provider wants to migrate towards better security and they have asked us to either
Set up a VPN - which is problematic because we'd need to use Azure Connect and they'd have to install the client endpoint service on their end.
Provide some IP address where the queries will come from so they can filter out anyone else at the firewall level - which is problematic because AFAIK you cannot fix the IP addresses of the Windows Azure Compute nodes.
Suggest another secure alternative - the only thing I could think of is to set up the VPN with them on a non-Azure server and then tunnel the requests through using Azure Connect - which is obviously extra work for us and also defeats the point of hosting the service on a cloud if it depends on a non-cloud service.
Any ideas?
Can they install the Azure Connect endpoint on another server on their DMZ network? i.e. not the actual server which hosts their service?
Can we somehow provide them with static IPs for incoming queries?
Any other solution that is scalable?
Thanks
If I understand the scenario correctly, your Azure service is a client to a 3rd party service. This scenario may be solved through the use of the Windows Azure AppFabric Service Bus. You would need to install a proxy app in the 3rd party's datacenter that would be responsible for establishing the connection to the service bus. The connection comes from inside the 3rd party's datacenter, so no new incoming holes in the firewall. The connection can handle WCF connections with all its security strengths, and users can be authenticated with ACS.
Here is a starting point: http://msdn.microsoft.com/en-us/library/ee732537.aspx
There is a hands on lab in the Windows Azure Platform Training Kit that explains most of the details that you'll need.
IMHO, HTTPS is already very good, and I don't exactly see how a VPN would make the system any more secure. In particular, VPN is no silver bullet: if your VM is compromised then the VPN connection is compromised too (the same goes for HTTPS). On the other hand, the IP restriction would indeed reduce the attack surface.
Then, using a server outside the cloud is a poor idea indeed. Not only does it defeat most of the benefits of the cloud (been there, done that, and suffered a lot), it also makes the whole thing less secure, with more complexity and more attack surface.
Windows Azure does not provide anything that looks like a static IP at this point. In our experience, the IP addresses for a given service change once in a while even if the service is only upgraded (and never deleted). Static IP addresses have been an important feature request for a long time; Microsoft will probably provide them at some point, but it might still take many months.
