I've got an ASP.NET 6 API that uses the [Authorize] attribute and JWT to authenticate users. Until today it ran as a single instance on IIS. Now we've been asked to move to a Docker container + Kubernetes (on Azure) to scale horizontally. How can this work when running on multiple Docker instances? Does the authentication cookie work correctly across instances? Or do I have to move authentication to something different from JWT?
Thanks in advance
It will not work across multiple instances, because in the default configuration data protection keys are tied to a single instance. There are various methods to make them shareable across instances, as documented in this article:
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-7.0
For a deployment in Azure, the best option is to persist the keys to Azure Blob Storage and protect them with Azure Key Vault:
// Requires the Azure.Extensions.AspNetCore.DataProtection.Blobs and
// Azure.Extensions.AspNetCore.DataProtection.Keys packages (DefaultAzureCredential comes from Azure.Identity).
builder.Services.AddDataProtection()
    .PersistKeysToAzureBlobStorage(new Uri("<blobUriWithSasToken>"))  // key ring shared by all instances
    .ProtectKeysWithAzureKeyVault(new Uri("<keyIdentifier>"), new DefaultAzureCredential()); // keys encrypted at rest
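If you would rather avoid the Azure-specific dependencies, the linked article also covers persisting the key ring to the file system. A minimal sketch for Kubernetes, assuming a volume mounted at the same path into every pod (the path and application name are placeholders):
builder.Services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/mnt/dataprotection-keys")) // shared volume, placeholder path
    .SetApplicationName("my-api"); // same name on every instance so keys are interchangeable
Note that keys persisted this way are not encrypted at rest unless you also call one of the ProtectKeysWith* methods, so the volume itself needs to be locked down.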
I have two applications:
MVC Site (User-facing Web App secured via OAuth -> Google)
Web API Site ("Private" Web Services)
These are hosted in an App Service Plan in Azure. These web services will only be consumed by my own applications - I don't need to worry about outside consumption. In fact, I specifically don't want outside consumption. My Web App is using OAuth to Google - that shouldn't matter here.
So to get to the heart of my question: My web services currently have no authentication/authorization model in the code but I don't want it just publicly available to anybody. On prem, we just lock this down via IIS using Windows Auth and set the service account for the consuming web app to run as a user that Windows Auth allows access to. I'd like to do the equivalent in Azure.
I understand Azure isn't exactly the same, but I have to believe this is possible. I have even gotten my web services locked down the way I want using the settings in the Authentication/Authorization tab (if I try to navigate to the API, I only get my Swagger UI once I log in with a valid organizational account). So half of my battle is solved, but I cannot figure out how to do the other half - the equivalent of setting the service account for my consuming MVC application to run as.
Can I do this via the portal without having to code specifically to this scenario? I'd really like a PaaS-level or IaaS-level solution for the security portion of consuming the above locked-down services. I'm also open to other avenues if I'm going down the wrong path in having a PaaS or IaaS security solution to this problem. I'm not against making code changes - we did have a one-liner in our RestSharp code to engage Windows Authentication, but the bulk of the work/configuration was outside of code and that's what I'm going for here.
If going the IaaS path, you can host the application inside a VM in exactly the same way as before, when it ran directly on top of IIS. The benefit is that you get running the same way as before, but you will still need to manage the VM, i.e. install updates and take care of its security.
However, if you want a PaaS solution, then you need to modify the code of your front-end application to pass the authentication token on to the back-end API, assuming the back-end accepts the same authentication as the front-end. See https://azure.microsoft.com/en-us/documentation/articles/app-service-api-dotnet-get-started/ for an example of how to pass authentication information from one app to another.
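A minimal sketch of that pass-through, assuming you have already obtained the user's access token (how you obtain it depends on your auth middleware; the URL is a placeholder):
// Forward the caller's token to the back-end API as a bearer token.
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var response = await client.GetAsync("https://my-private-api.azurewebsites.net/api/values");
response.EnsureSuccessStatusCode();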
Alternatively, you can use the app identity to make calls to your back-end API. This way the calls are not related to any user but are instead done in the context of the app. See https://github.com/Azure-Samples/active-directory-dotnet-daemon for more details on how to set it up, covering both the configuration and the needed code.
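The linked sample uses the older ADAL library; a rough equivalent with the newer MSAL.NET (Microsoft.Identity.Client) client-credentials flow looks something like this, where the IDs and scope are placeholders:
// App-only (client credentials) token: the call runs in the app's context, not a user's.
var app = ConfidentialClientApplicationBuilder
    .Create("<front-end-client-id>")
    .WithClientSecret("<client-secret>")
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

// ".default" requests whatever application permissions have been granted to the app.
var result = await app.AcquireTokenForClient(new[] { "api://<back-end-app-id>/.default" }).ExecuteAsync();

// result.AccessToken is then sent as a bearer token, as in the previous snippet.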
If you want to allow your users to sign in using their Google accounts, then you could handle authorization to your API using the app identity (the second alternative above), assuming the API is independent of the requesting user's identity.
Enabling authentication for an Azure Web App directly through the menus in the Azure Portal puts Azure AD authentication in front of your application and requires you to pass an access token generated by Azure AD to your API for it to work.
I'm trying to publish my web app from VS with no downtime. If you search Google, you find the official documentation describing how to use deployment slots and then do a swap.
This is a good approach, but I have another problem: when I do the swap, logins are lost (see this question: link).
Relevant information in the link:
Session is not linked to Authentication, you're attempting to solve it in the wrong way.
All forms authentication tickets and cookies are encrypted and signed using the data protection layer. The problem you are encountering is due to the encryption keys not being saved, and applications being isolated from each other.
How can I avoid losing logins during the swap? In AWS I had rolling updates...
For reference, I'm using ASP.NET Core with Identity 3.0.
Thanks!!
What you're seeing is an Azure limitation right now. While Azure Web Sites will share the key ring, it sees swap slots as separate applications.
There are a couple of things to try.
First, set a common application name. This will help because every application which shares the key ring is isolated by default, but if applications share an application name they can share keys:
public void ConfigureServices(IServiceCollection services)
{
    // Give every slot/app the same application name so their keys are interchangeable.
    // (On the pre-release builds current when this was written, the same setting was
    // applied via services.ConfigureDataProtection(...).)
    services.AddDataProtection()
        .SetApplicationName("my application");
}
If that's not enough for Azure (I am honestly unsure whether hot swaps end up using Azure Web Apps' shared key folder) you can combine that with using Azure Data Tables for storing the encryption keys - https://github.com/GrabYourPitchforks/DataProtection.Azure/tree/dev
Between those two, the encryption keys used to protect identity cookies should end up shared between your apps.
I found a fork for ASP.NET Core 1.0, for those interested:
https://github.com/prajaybasu/DataProtection.Azure/tree/dev/DataProtection.Azure
Just like the other one, it stores encryption keys in an Azure storage account.
It completely solved my problem.
Starting from blowdart's solution I solved my issue, so thanks.
Andrea
Are you using in-memory session state?
The problem with 'logins' being 'lost' is an architecture issue, not an issue with updating your web app.
Use something like Redis Cache for session state. Not only will it persist when you update your application, but it will also handle load balancing across multiple server instances. As it stands, you'll probably hit this issue when you scale out to more than one server, in addition to when you update your app.
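A minimal sketch of wiring that up in ASP.NET Core, assuming the current Microsoft.Extensions.Caching.StackExchangeRedis package (the Identity 3.0-era package had a different name); the connection string is a placeholder:
public void ConfigureServices(IServiceCollection services)
{
    // Back the distributed cache - and therefore session state - with Redis.
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "<redis-connection-string>";
        options.InstanceName = "MyApp:"; // key prefix, useful when the cache is shared
    });
    services.AddSession();
}

// ...and call app.UseSession(); in the request pipeline.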
We are thinking of using Windows Azure for simulation: ~100 VM nodes, each working on its own problem set and reporting the result back to a master node.
I have created VM instances from the web UI. In order for this to work, we would need to use the Azure API to bring servers up and shut them down once they are done.
Does anyone have any experience with something like this? I am looking for advice, gotchas, etc.
thanks.
You sure can do it, and I have helped others make it happen on hundreds of nodes. Take a look at the Windows Azure REST API to configure your role as described here. While others may have other ideas, I think the general steps would be as below:
Create a master machine or a web role to manage your roles using the REST API
Create a worker role instance and use it to clone multiple instances as needed
Use the REST API to start and shut down worker roles, and to update the instance count when needed (see the sketch at the end of this answer)
Use the Azure Bootstrapper to bootstrap the VM depending on your requirements
The Azure REST-based Service Management API can be called from a web app or a standalone app, so you can also have a web role make it happen from anywhere in the world. This way you don't need any on-premises components at all, as it will be a totally cloud solution. If you need any help on creating the web role, I sure can help.
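To illustrate the instance-count step, here is a rough sketch of the classic Service Management "Change Deployment Configuration" operation; the API version, subscription ID, service/deployment names, and certificate are placeholders, and error handling is omitted:
// The classic Service Management API authenticates with a management certificate.
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(new X509Certificate2("management.pfx", "<pfx-password>"));
var client = new HttpClient(handler);
client.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");

// The payload is the full .cscfg (with <Instances count="..."/> already updated), base64-encoded.
string cscfg = File.ReadAllText("ServiceConfiguration.cscfg");
string body =
    "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
    "<Configuration>" + Convert.ToBase64String(Encoding.UTF8.GetBytes(cscfg)) +
    "</Configuration></ChangeConfiguration>";

string url = "https://management.core.windows.net/<subscription-id>" +
    "/services/hostedservices/<service-name>/deployments/<deployment-name>/?comp=config";
var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/xml"));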
You can provision Virtual Machines using Service Management REST API (there's also a managed API on NuGet).
But in your case you might want to consider using Cloud Services (PaaS). With Cloud Services you simply build your application, package it, and deploy it. Then, using the portal or the management API, you can configure the number of instances. There is even a command-line tool (csmanage.exe) which allows you to change the number of instances through the service configuration.
I'm curious to know if this is possible, and if so, is it a good or bad idea?
We are developing an Azure application that is largely centered around worker roles that receive their work on a CloudQueue and put the results in a CloudBlob, which the client then downloads. The web interface itself is a dead-simple ASP.NET MVC site that throws jobs into the CloudQueue and builds URLs to download CloudBlobs.
Currently we accomplish this by having an Azure Cloud Project in our solution, which has a Web Role with the UI and Worker Roles with the actual work.
Could we use Azure Websites to publish and host the UI, which calls back to our Worker Roles? The Azure DLLs are just regular old .NET libraries, so I'm assuming Azure Websites won't have a problem with them. So, when we want to update the UI, we just publish with Visual Studio. And when we want to update the Worker Role - which is 300MB+ and has a bunch of nasty dependencies like Crystal Reports - we can build the cloud bundle and update the Cloud Service through the Azure management portal.
It seems to me that doing this would make it easier to update the UI. I think it would also be cheaper to host, as we won't have to buy a bunch of instances for the Web Role.
If your question is "Could we use Windows Azure Websites?": based on your application architecture, you sure can use Azure Websites to deploy your front end and configure all the networking connections properly so you can continue to access the other Azure storage services. As you are mostly using Blob and Queue storage, you can continue to use the HTTP/HTTPS settings in Azure Websites. You can keep the worker role as it is; however, if it is very complex to deploy, using a Windows Azure VM may be another direction to go.
I would say website deployment could be easier if your web app does not have anything complex to configure at the web-server level, as Websites may not be able to match the web-server-level configuration available to a web role or an Azure VM. Answering "easier and cheaper" is very subjective, as it all depends on load and distribution, so you would have to try it and evaluate.
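To illustrate the point about Blob and Queue access being plain HTTP/HTTPS: the job-enqueueing code runs unchanged from a Website. A minimal sketch, assuming the classic Microsoft.WindowsAzure.Storage client library (the setting name, queue name, and payload are placeholders):
// Parse the storage connection string, e.g. from the Website's app settings.
var account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);

// Enqueue a job for a worker role to pick up.
var queue = account.CreateCloudQueueClient().GetQueueReference("jobs");
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage("<job payload>"));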
I'm starting to develop an application using Node.js on Azure. I'm using everyauth to provide authentication because I want to support lots of different authentication methods. I plan to deploy to Azure. The potential problem I have is that everyauth requires the connect.session helper. Will this work with Azure when running multiple instances? Or do I need an alternative session provider?
I have never used Node.js on Azure, but:
everyauth
Looking at the documentation for everyauth, there is a method for authenticating against Windows Azure ACS. See the section entitled Setting up Windows Azure Access Control Service (ACS) Auth in the readme for more information. There are no notes there about it not working on Azure itself, so I would infer that you can use it on Azure.
connect-azure
There is also a project called connect-azure, which appears to be using connect.session, so again I would extrapolate from this that it will work on Azure.
Contact Azure support
If you are already a customer you can contact support for help.
Try it and see
So if you have the Azure environment set up, I would definitely say it is worth trying out.
This was asked a while ago, but I thought I would attempt an answer anyway. connect.session relies on a cookie to identify the session, with the session data held in a server-side store (in-memory by default). Azure has a different load-balancing strategy depending on what you use:
WebRole/WorkerRole - the LB doesn't have any affinity, so requests from your clients might end up at different back-end instances. This will throw off whatever session management connect is doing. This is a side effect of a distributed cloud architecture: you don't want any back-end node to be the source of truth, since it can go down. So what you would have to do is figure out how to externalize connect's session store and have all back-ends share it. This way, no matter which back-end receives the request, it will be aware of the session.
Websites - in this case the LB will actually try to pin a client connection to a given back-end instance, so cookie-based sessions may work without any changes. You are sacrificing failover, as described above.