What is the recommended way of specifying machineKey when hosted in Azure

I'm trying to figure out the best way to set the machineKey for a site running on multiple servers, hosted either in an Azure Cloud Service web role or in an Azure Web Site.
There seem to be several proposed solutions, but none of them seems right to me.
Use session affinity (on by default in Web sites)
While this may work, it feels like a workaround. The reason one would choose multiple servers is fail-over, but with affinity, clients pinned to the broken server cannot be routed to the other server. Doesn't that somewhat defeat the object of fail-over?
Set the machineKey in a web.config transform
This requires the machine key to be stored under source control, which defeats the object of encrypting with the machine key in the first place.
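For reference, this is roughly what such a transform injects. A minimal sketch, assuming .NET 4.x; the key values are placeholders (generate real keys per environment, e.g. with IIS Manager's Machine Key feature) and are not meant to be committed as-is:

    <system.web>
      <!-- Placeholder values: generate your own keys per environment and
           keep them out of source control, e.g. by injecting them at deploy time. -->
      <machineKey validationKey="[64+ hex characters]"
                  decryptionKey="[48+ hex characters]"
                  validation="HMACSHA256"
                  decryption="AES" />
    </system.web>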
Don't have a dependency on the machine key, so you don't care.
It's very easy to miss a dependency, because the key is used unbeknownst to you within the internals of ASP.NET, and you may not notice a problem because the dependency is rarely exercised or you already have server affinity (on by default in Web Sites).
I kind of feel I'm missing something, or maybe I'm suffering from Analysis Paralysis and should just get on with it.

Related

What is a good Azure architecture for Web App Services

I have been researching for a couple of days and looking at Pluralsight courses, but I can't seem to find a decent answer on how to set up a proper Azure infrastructure.
I have a client app, an API backend, and a database as the core of my overall application. I know I need two different Web App services and a SQL database.
I also have a need to only allow access to all 3 from our company's IP address.
I'm getting lost with all the VNET and VPN talk, and I'm wondering if that is even required. Is it considered good practice to do IP restrictions and call it a day? Should I add an Application Gateway in front of the client application nonetheless?
If VNETs are required, is site-to-site a must? (I don't think we have the authority to do that.) If not, how do we access backend services like the database and API if everything is locked down?
Any help is appreciated because there is too much information and I can't seem to make sense of any of it.
Thanks
It depends a lot on the purpose of your client application, web application and database, as well as on the capabilities that currently exist within your organisation. Have you had a look at the reference architectures Microsoft provides as a starting point?
If you are looking at a fairly simple application, deployed to Azure with minimal, internal-only use, then use something like this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn. You can actually simplify that a little further by removing the load balancers etc. if you think traffic will generally be low.
If you are looking for an external application that can only be managed internally, you should adopt something similar to this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server. Maybe even add a VPN component to the management jump box similar to this architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn.
Even this, however, may be too complicated for your use case. If your application is pretty basic, is secured using username/password or identity federation, and has low-risk data associated with it, then just the basic web application architecture will do fine; read through the various considerations here: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/app-service-web-app/basic-web-app
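If you do end up taking the simple IP-restriction route, here is a hedged sketch of what that can look like for a Windows App Service, using the IIS ipSecurity section in web.config (the address is a documentation placeholder; substitute your company's public IP):

    <system.webServer>
      <security>
        <!-- allowUnlisted="false" blocks everything except the entries below -->
        <ipSecurity allowUnlisted="false">
          <add ipAddress="203.0.113.10" allowed="true" />
        </ipSecurity>
      </security>
    </system.webServer>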

Can access to a Heroku PostgreSQL DB be restricted to its Heroku app only?

I've recently migrated an application from Heroku to Amazon EC2 because of recommendations from a security consultant. However, he didn't know Heroku deeply, and the doubt remained.
Can access to a Heroku PostgreSQL DB be restricted for it to be accessed only by the application?
Would you recommend Heroku for security-critical applications?
This is a deceptively complex question because the idea of "restricted so that it can be accessed only by the application" is ill-defined. If your ultimate goal is simply to keep your data as secure as possible, then Heroku vs. AWS vs. physical servers under lock and key involves some cost-benefit analysis that goes beyond just how your database can be accessed.
Heroku limits database access via authentication. You share a secret (username/password) between the database and the application. Anyone who has that secret can access the database. To facilitate keeping the secret secret, all database access is or should be over SSL.
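To make that concrete, here is a minimal sketch of a .NET client doing exactly that: reading the shared secret from the DATABASE_URL config var Heroku sets and connecting over SSL. Npgsql and the URL parsing are illustrative assumptions, not something the question specifies:

    // Connect to Heroku Postgres over SSL using the DATABASE_URL secret,
    // e.g. postgres://user:password@host:5432/dbname
    using System;
    using Npgsql;

    class HerokuDbSketch
    {
        static void Main()
        {
            var url = new Uri(Environment.GetEnvironmentVariable("DATABASE_URL"));
            var userInfo = url.UserInfo.Split(':');

            var connString =
                $"Host={url.Host};Port={url.Port};" +
                $"Database={url.AbsolutePath.TrimStart('/')};" +
                $"Username={userInfo[0]};Password={userInfo[1]};" +
                "SslMode=Require;Trust Server Certificate=true"; // SSL keeps the secret off the wire

            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Open(); // fails unless both the secret and the SSL handshake check out
                Console.WriteLine("Connected over SSL");
            }
        }
    }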
There are other ways to restrict access in addition to authentication, but many of them are incompatible with a cloud-based approach. Also, many of them require you to take much more control over the environment of your servers, and the more responsibility you have on that front, the bigger the chance that issues totally separate from who can hit the postgres port on your database will sink you.
The advantage of using AWS directly instead of through a PaaS provider like Heroku is that you can configure everything yourself. The disadvantage is that you have to configure everything yourself. I would recommend AWS over a managed service only if you have a team of qualified and attentive sysadmins to configure, monitor and update your environment. Read Heroku's security policy page. Are you doing at least that much to protect your servers in your own configuration on AWS? If not, then you might have bigger problems than how many layers of redundancy are in place around your database.

Multiple Web Sites/Roles on Azure, Impact of staging server

I'm looking to set up two web roles or websites on my Azure Cloud Service.
The websites need to share the same database schema. I use NHibernate ORM, so I have to make sure that both projects are always using the same data model, or else it will cause major problems.
I've researched setting up multiple websites on a single web role (which seems odd to me; can't I just run multiple web roles, each with a single site?).
http://msdn.microsoft.com/en-us/library/windowsazure/gg433110.aspx
Like any good developer, I use a staging server. If I have to manually set the domain name in configuration files, how will Azure know not to send people who visit that domain to the staging server?! I.e., if they visit blah.foo.com and I have two deployments (staging and production), is IIS going to know to send people only to the production environment?
Please advise on the best way to go about doing this.
First, you can certainly have multiple web roles, each with a single site; however, each role instance will be deployed to a different virtual machine. For example, if you set up two web roles and deploy with one instance each, there are two virtual machines you'll be paying for. If you want the SLA to apply to your deployment you'd need to set the instance count to 2 for each web role, which now means you have four virtual machines running. By combining websites onto the same web role you'll cut down on the number of instances you need to run and still get the SLA; however, that option is not without some considerations. The link you provided shows how you can set up multiple websites to run on the same virtual machine when deployed. Note that there are some gotchas with that method; I'd suggest reading Michael Collier's Tips for Publishing Multiple Sites in a Web Role.
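As a rough illustration of what that article describes (names and domains here are placeholders, not taken from the question), the ServiceDefinition.csdef for one web role hosting two sites distinguished by host header looks something like this:

    <!-- Both sites share one VM and one endpoint; the host header decides
         which site serves a given request. -->
    <WebRole name="WebRole1" vmsize="Small">
      <Sites>
        <Site name="SiteA" physicalDirectory="..\SiteA">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.sitea.example" />
          </Bindings>
        </Site>
        <Site name="SiteB" physicalDirectory="..\SiteB">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.siteb.example" />
          </Bindings>
        </Site>
      </Sites>
      <Endpoints>
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
      </Endpoints>
    </WebRole>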
Second, if you do NOT need a lot of control over the virtual machine (such as registering special components, etc.) you might want to look at Windows Azure Web Sites as an option. You can elect to take one of the paid tiers of Web Sites and still have dedicated machines, but deploy the websites separately. I will say, though, that your requirement of keeping both sites in lock step because they share the underlying database schema makes it less likely you will want to deploy changes separately, but it is still possible.
Finally, regarding the staging server: if you are testing locally you'll want to modify your hosts file to point the host names at your local address. Wade Wegner has a post on Running Multiple Websites in a Windows Azure Web Role. Once you deploy to Windows Azure you'd want to change your hosts file back, or comment the entries out. If you are using the actual Staging deployment slot, you can use the same hosts-file trick to point at the IP address of the staging deployment when testing.
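For example, a hosts file for local testing might look like this (placeholder host names matching whatever host headers you configured):

    # %SystemRoot%\System32\drivers\etc\hosts -- local testing only;
    # remove or comment these entries out before relying on public DNS.
    127.0.0.1    www.sitea.example
    127.0.0.1    www.siteb.example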

What is the difference between Azure Web Site and Azure Cloud Service

We are looking to host a website (some CSS, JS, one HTML file but no ASPX, and one generic handler).
We deployed it as:
1) Azure Web Site
2) Azure Cloud Service
Both solutions work. There is a question, though: which way of hosting it is better, and why? Second, as there might be a lot of traffic, which solution would be cheaper?
Thanks in advance,
Krzysztofuncjusz
You may want to review this article that explains the primary differences. Web Sites are best for running web applications that are relatively isolated (that do not require elevated security, remote desktop, network isolation...). Cloud Services are more advanced because they give you more control over websites while still remaining flexible. And VMs are for full control over applications that need to be installed and configured (like running SQL Server, for example).
I think the main difference is in the ability to modify the VM and to configure scalability. Web Sites is something like classic hosting, without the ability to log in via RDP. Cloud Services allows you to configure the VM and, if necessary, set up scalability and availability.

Is there a way to avoid network traffic on Web API web services hosted in IIS by accessing your services through localhost?

We have added a Web API services layer to our application to help share the code with various product teams at my client's company. I like this as a way of managing versioning and for code organization, but I'm concerned about violating Martin Fowler's First Law of Distributed Object Design, namely: don't distribute your objects. We can currently host all of the various products on the same box, and I was wondering if having the client application access our web services through localhost would allow us to avoid the issues Martin is calling out. If it were WCF I would configure the endpoint to use named pipes, and I'm trying to figure out how to do the equivalent in IIS.
If you are hosting all your projects under the same process, it would be possible to go in-memory, though I am not sure how much sense that makes. Here is a good example:
Batching Handler for ASP.NET Web API
A related post for the above one
It demonstrates in-memory hosting of the entire Web API pipeline. In your case it seems this may not work out, but it might be worth considering.
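For reference, the core of the in-memory hosting technique is only a few lines. A minimal sketch, assuming a conventional ValuesController exists in the project; the route template and host name are illustrative:

    // HttpClient hands requests straight to the Web API pipeline
    // (HttpServer is an HttpMessageHandler), so nothing touches the network.
    using System;
    using System.Net.Http;
    using System.Web.Http;

    class InMemoryHostSketch
    {
        static void Main()
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            var server = new HttpServer(config);
            var client = new HttpClient(server);

            // The host part of the URI is arbitrary; no DNS lookup happens.
            var response = client.GetAsync("http://in.memory/api/values").Result;
            Console.WriteLine(response.StatusCode);
        }
    }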
