I have been researching for a couple of days and looking at Pluralsight courses, but I can't seem to find a decent answer on how to set up a proper Azure infrastructure.
I have a client app, an API backend, and a database as the core of my overall application. I know I need two separate Web App services and an SQL database.
I also need to allow access to all three only from our company's IP address.
I'm getting lost in all the VNet and VPN talk and am wondering whether that is even required. Is it considered good practice to just set up IP restrictions and call it a day? Should I add an Application Gateway in front of the client application nonetheless?
If VNets are required, is site-to-site a must? (I don't think we have the authority to do that.) If not, how do we access the backend services like the database and API if everything is locked down?
Any help is appreciated because there is too much information and I can't seem to make sense of any of it.
Thanks
It depends a lot on both the purpose of your client application, web application and database, and the capabilities that currently exist within your organisation. Have you had a look at the reference architectures Microsoft has as a starting point?
If you are looking at a fairly simple application, deployed to Azure with minimal, internal-only use, then use something like this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn. You can actually simplify that a little further by removing the load balancers, etc., if you think traffic will be generally low.
If you are looking for an external application that can only be managed internally, you should adopt something similar to this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server. Maybe even add a VPN component to the management jump box similar to this architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn.
Even this, however, may be too complicated for your use case. If your application is pretty basic, is secured using username/password or identity federation, and has low-risk data associated with it, then the basic web application architecture would do fine; just read through the various considerations here: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/app-service-web-app/basic-web-app
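On the IP-restriction part of the question: the platform-level restrictions (App Service access restrictions and SQL Database firewall rules) are configured outside your code, but if you also want a defence-in-depth check inside the API itself, a rough ASP.NET Web API message-handler sketch might look like the following. The handler name and allowed address are placeholders, not anything prescribed by Azure.

```csharp
// Hedged sketch: a Web API DelegatingHandler that rejects requests whose
// client IP is not in an allow-list. Assumes web-hosting on IIS/App Service,
// where the client IP is exposed via the MS_HttpContext property.
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

public class IpAllowListHandler : DelegatingHandler
{
    // Placeholder: your company's public IP address(es).
    private static readonly string[] AllowedIps = { "203.0.113.10" };

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        object ctx;
        request.Properties.TryGetValue("MS_HttpContext", out ctx);
        var context = ctx as HttpContextWrapper;
        var clientIp = context != null ? context.Request.UserHostAddress : null;

        // Note: behind a load balancer the original client IP may instead
        // arrive in the X-Forwarded-For header.
        if (clientIp == null || Array.IndexOf(AllowedIps, clientIp) < 0)
        {
            var forbidden = new HttpResponseMessage(HttpStatusCode.Forbidden)
            {
                RequestMessage = request
            };
            return Task.FromResult(forbidden);
        }

        return base.SendAsync(request, cancellationToken);
    }
}

// Registration, e.g. in WebApiConfig.Register:
// config.MessageHandlers.Add(new IpAllowListHandler());
```

Treat this only as an extra safeguard; the platform-level restrictions should remain the primary control.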
At the firm where I currently work, we have a lot of microservices, and most of them are deployed to Azure. In Azure, service-to-service authentication is simple: Azure Active Directory is an authorization server, and a service can request OAuth 2 tokens from it using either the client credentials or the client assertion (with JWT) flow. The service can then use this token to authenticate to other services.
In the last few months, we started moving some of our services to AWS, and this makes me wonder: is there an alternative to Azure Active Directory? I could not find anything myself, so I thought it better to ask: what is the recommended way to implement service-to-service authentication outside Azure? I know you can also use Azure Active Directory outside Azure; I am asking because I guess there must be other tools out there, maybe with easier integration with AWS.
I didn't mention any programming language (we are using mainly C# here, and a little Node.js recently) because I feel this question is language-agnostic; I would prefer a solution that works well with many languages.
Thank you,
Omer
Amazon has AWS Directory Service.
It's Microsoft AD in the AWS Cloud.
I don't know of any AWS service that will help you with that exact use case. However, you could solve this by exposing two ports in the application: one for internal requests and one for external. You can use security groups to shield the internal port from requests coming from the internet.
Another option, which might involve more changes to your setup, is to use a gateway. This pattern is used a lot for microservices; an elaborate description can be found, e.g., here. The basic concept is that all outside (internet) requests go through a gateway service that allows certain routes and disallows others. In cases where users have to log in, the gateway will usually handle the authentication.
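To make the token side of the question concrete outside Azure AD: any standards-compliant OAuth 2 authorization server (IdentityServer, Auth0, AWS Cognito, and so on) exposes a token endpoint that the client credentials flow can target, so the calling code barely changes. A minimal, hedged C# sketch; the endpoint URL, client id, secret and scope below are placeholders, not a real service.

```csharp
// Hedged sketch of the OAuth 2 client credentials flow against a generic
// (non-Azure) token endpoint. All names below are placeholders.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class ServiceTokenClient
{
    public static async Task<string> GetTokenAsync(HttpClient http)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "service-a",                  // placeholder
            ["client_secret"] = "<from a secrets store>", // placeholder
            ["scope"] = "service-b.api"                   // placeholder
        });

        var response = await http.PostAsync("https://auth.example.com/connect/token", form);
        response.EnsureSuccessStatusCode();

        // The body is JSON containing access_token, token_type and expires_in;
        // parse it with your JSON library of choice.
        return await response.Content.ReadAsStringAsync();
    }
}
```

Because the flow is plain HTTP, the same code works for C#, Node.js, or anything else that can POST a form, which keeps the approach language-agnostic as the question asks.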
I have a mobile application that communicates with a REST-based web service. The web service lives behind the firewall and talks to other systems. Currently this web service requires a firewall port to be opened and an SSL cert generated for each installation. The mobile app sends login credentials so the web service can log in to custom back-end systems.
Recently a customer approached us asking how we could deploy this to 50 offices. As we don't want to, say, modify every firewall in every office, we're looking for options. This is a list of possible solutions and my thoughts on each one:
Open firewall port and expose HTTPS web service - This is our current solution, but we don't want to have to contact 50 network admins and explain why we need to do this.
VPN - Too heavyweight, complex and expensive; we only need access to one server. It also does not solve the problem, as the firewall still needs to be modified.
Microsoft Azure Hybrid Connection Manager - This provides a managed service where the Azure cloud exposes an endpoint. Azure also expects connections from an easy-to-install application that lives behind the firewall. When a REST call is made to the cloud endpoint, the request is forwarded down a socket that was initiated by the software behind the firewall. This does what we want, but as it's a Microsoft solution it might impose other requirements that our customers might not want. Currently the simple Hybrid Connection Manager is free, but for how long?
Jscape MFT Gateway - Similar to the Azure offering, but you can host their server anywhere. Not that expensive, but not open source.
Netty - An async Java library/toolkit with which this type of application could easily be built. Client and server apps would need to be built and deployed. We don't know what we don't know about Netty.
MDM, AirWatch, BlackBerry BES - An MDM-based solution would work, except that MDMs are centrally managed and are often not in every office where the backend services are located. AirWatch has an AppTunnel, but I'm not sure about the specifics.
At this point the Microsoft and Jscape systems are possible solutions.
But most likely these solutions will require us to modify the mobile software to work around issues such as:
How does the user know which server to log in to? A locator service needs to be built such that an email address is used to look up their office, or they need to select their office location from a list (a rough locator sketch follows below this list).
While the connection is SSL, many companies might want some additional protection, since network login information will be sent down the pipe.
How is load balancing and fail-over managed?
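For the locator-service point above, the lookup itself can be tiny: map the email domain (or an office code) to that office's endpoint. A hedged sketch, with made-up domains and URLs:

```csharp
// Hedged sketch of an email-domain -> office endpoint locator.
// The domains and URLs are made up for illustration.
using System;
using System.Collections.Generic;

public static class OfficeLocator
{
    private static readonly Dictionary<string, Uri> OfficesByDomain =
        new Dictionary<string, Uri>(StringComparer.OrdinalIgnoreCase)
        {
            { "london.example.com", new Uri("https://london-gw.example.com/api") },
            { "berlin.example.com", new Uri("https://berlin-gw.example.com/api") }
        };

    public static Uri Resolve(string emailAddress)
    {
        var parts = emailAddress.Split('@');
        if (parts.Length != 2)
        {
            return null;
        }

        Uri endpoint;
        return OfficesByDomain.TryGetValue(parts[1], out endpoint)
            ? endpoint
            : null; // fall back to letting the user pick an office from a list
    }
}
```

The mapping could live in a small hosted table or config file so that adding a 51st office does not require an app update.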
So, at this point I'm looking for more options. The best option would be a commercial product that offers some level of customization. Second best would be a well-used open-source product that could be installed in AWS and customized.
Thanks
The best approach we found was to use the PuTTY API and set up a reverse proxy.
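If you would rather drive that kind of tunnel from code than run PuTTY alongside the service, here is a hedged sketch using the SSH.NET library (Renci.SshNet; my own assumption, not what the answer above used) of a reverse port forward: the on-premises box dials out to a relay host, which then forwards incoming requests back down that connection, so no inbound firewall change is needed.

```csharp
// Hedged sketch using SSH.NET (Renci.SshNet): the machine behind the firewall
// connects OUT to a relay host, and asks it to forward its port 8080 back to
// the local web service on port 443. Host names and credentials are placeholders.
using System;
using Renci.SshNet;

class ReverseTunnel
{
    static void Main()
    {
        using (var client = new SshClient("relay.example.com", "tunneluser", "<password or key>"))
        {
            client.Connect();

            // Remote (relay) 0.0.0.0:8080  ->  local web service 127.0.0.1:443
            var port = new ForwardedPortRemote("0.0.0.0", 8080, "127.0.0.1", 443);
            client.AddForwardedPort(port);
            port.Start();

            Console.WriteLine("Tunnel up; press Enter to stop.");
            Console.ReadLine();

            port.Stop();
            client.Disconnect();
        }
    }
}
```

This is essentially the same idea as the Hybrid Connection Manager option in the question, just self-hosted; you still have to solve load balancing and fail-over yourself.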
I'm currently developing a mobile app that will be pushed worldwide across the app stores. This app uses a Web API REST service as the backend, which I currently have running on Microsoft Azure in Europe (and which backs onto a database also in Europe).
My problem is that I'd like to create multiple Azure Web API endpoints (e.g. Australia, US, etc., for latency reasons), each with its own database with geo-replication enabled.
Does anyone know of a method/product/service I could use that allows the app to either:
Connect to a single domain which behind the scenes picks the closest server to the user.
OR
Have the app itself determine, based on a given list, the closest server and connect to that?
I've looked at Azure CDN, which is great but only for static content; I need something for dynamic content.
What you're looking for is Traffic Manager. Traffic Manager enables that exact scenario of finding the closest endpoint that hosts your REST API.
Keep in mind, though, that database replication is (for the time being) something you have to do yourself, although we do provide you with the tooling and guidance on how to do it.
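If you later also want the question's second option as a fallback (the app itself picking the closest endpoint from a known list), a crude latency probe is usually enough. A hedged sketch with placeholder regional URLs:

```csharp
// Hedged sketch: probe each regional endpoint once and pick the fastest.
// The URLs are placeholders; a real app would retry and cache the result.
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public static class EndpointPicker
{
    private static readonly string[] Endpoints =
    {
        "https://api-eu.example.com/ping",
        "https://api-us.example.com/ping",
        "https://api-au.example.com/ping"
    };

    public static async Task<string> PickFastestAsync(HttpClient http)
    {
        string best = null;
        var bestMs = long.MaxValue;

        foreach (var url in Endpoints)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                (await http.GetAsync(url)).Dispose();
                if (sw.ElapsedMilliseconds < bestMs)
                {
                    bestMs = sw.ElapsedMilliseconds;
                    best = url;
                }
            }
            catch (HttpRequestException)
            {
                // Endpoint unreachable; skip it.
            }
        }
        return best;
    }
}
```

With Traffic Manager in front you normally do not need this at all, since the DNS lookup of the single Traffic Manager hostname already resolves to the closest healthy endpoint.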
I'm trying to figure out the best way to set the MachineKey for a site hosted in both an Azure Cloud Service web role and an Azure Web Site when using multiple servers.
There are several proposed solutions I can find, but none of them seems right to me.
Use session affinity (on by default in Web sites)
While this may work, it feels like a workaround. The reason one would choose to have multiple servers is for the purpose of failover, but this means clients with an affinity to the broken server cannot run against the other server; does that not somewhat defeat the object of failover?
Set the machineKey in a web.config transform
This requires the machine key to be stored under source control, which defeats the object of encrypting with the machine key in the first place (a key-generation sketch follows below these options).
Don't have a dependency on the machine key, so you don't care.
It's very easy to miss a dependency, because the machine key is used unbeknownst to you within the internals of ASP.NET, and you may not know you have a problem because it's rarely exercised or you already have server affinity (on by default in Web Sites).
I kind of feel I'm missing something, or maybe I'm suffering from Analysis Paralysis and should just get on with it.
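For what it's worth, generating the keys themselves outside source control is straightforward; below is a hedged C# sketch (the lengths assume HMACSHA256 validation and AES-256 decryption) that could be run at deploy time and fed into configuration however you choose, rather than committing the values in a transform.

```csharp
// Hedged sketch: generate random machineKey values so they never have to live
// in source control. Lengths assume HMACSHA256 validation (64 bytes) and
// AES-256 decryption (32 bytes); adjust for your configured algorithms.
using System;
using System.Security.Cryptography;

class MachineKeyGenerator
{
    static string RandomHex(int byteLength)
    {
        var bytes = new byte[byteLength];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return BitConverter.ToString(bytes).Replace("-", string.Empty);
    }

    static void Main()
    {
        Console.WriteLine("validationKey=" + RandomHex(64));
        Console.WriteLine("decryptionKey=" + RandomHex(32));
    }
}
```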
We have added a Web API services layer to our application to help share the code with various product teams at my client's company. I like this as a way of managing versioning and for code organization, but I'm concerned about violating Martin Fowler's First Law of Distributed Object Design, namely: don't distribute your objects. We can currently host all of the various products on the same box, and I was wondering whether having the client application access our web services through localhost would allow us to avoid the issues that Martin is calling out. If it were WCF I would configure the endpoint to use named pipes, and I guess I'm trying to figure out how to do that in IIS.
If you are hosting all your projects in the same process, it would be possible to go in-memory, but I am not sure how much sense this makes. Here is a good example:
Batching Handler for ASP.NET Web API
A related post for the above one
It demonstrates hosting the entire Web API pipeline in-memory. In your case it seems this won't quite work out, but it might be worth considering.
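For reference, the core of the in-memory approach those posts describe is just handing the Web API pipeline (HttpServer) to an HttpClient as its message handler; a minimal sketch, with an illustrative route:

```csharp
// Minimal sketch of in-memory hosting for ASP.NET Web API (System.Web.Http):
// HttpServer derives from HttpMessageHandler, so HttpClient can drive the
// whole Web API pipeline directly -- no socket, no IIS hop.
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public static class InMemoryHostDemo
{
    public static async Task<string> CallInMemoryAsync()
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server))
        {
            // The host name is arbitrary; nothing actually listens on it.
            var response = await client.GetAsync("http://localhost/api/products");
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Note that this only removes the network hop when caller and API share a process; once the products are split across hosts you are back to Fowler's distributed-object trade-offs.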