Zero-downtime rotation of the Eclipse Hono Auth Server shared secret

We're operating Eclipse Hono and would like to perform zero-downtime updates on all components in our cluster.
For authentication between the different Eclipse Hono components we use the Hono Auth Service.
There we configured a shared secret (HONO_AUTH_SVC_SIGNING_SHARED_SECRET) that is used for signing the issued tokens.
Consuming services (e.g. Command Router / MongoDB Device Registry) are configured with the same secret.
When changing the shared secret we simultaneously need to restart all instances of the mentioned microservices, which leads to a short downtime.
If we performed a rolling update instead, the old instances would fail to validate tokens issued by instances already running with the new shared secret.
Has anyone encountered the same issue, or does anyone know how to perform a zero-downtime update?
One option to solve our problem would be the ability to configure, next to HONO_AUTH_VALIDATION_SHARED_SECRET, a second secret (HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK) that would be tried if validation with the primary fails.
That way we could perform a rolling update of all components without downtime.
Using a certificate instead of the shared secret has, as far as I can see, the same restriction.
Thanks
Chris

I also do not see any way to cycle the shared secret with the current implementation without incurring downtime.
For this to work, Hono's components would need to support configuring multiple shared secrets for validating the tokens, as you correctly pointed out. Maybe you want to open an issue for this with Hono?
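If you want to illustrate the idea in such an issue, this is roughly what multi-key validation looks like. This is a sketch only, not Hono code (Hono itself is Java-based), and the secret and token variables are placeholders; the point is that the validator receives a set of keys and accepts a token if any of them verifies the signature, so old and new instances can coexist during a rolling update.

using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Text;

var parameters = new TokenValidationParameters
{
    ValidateIssuer = false,   // simplified for the sketch
    ValidateAudience = false,
    // Primary and fallback shared secrets; the validator tries each key.
    IssuerSigningKeys = new SecurityKey[]
    {
        new SymmetricSecurityKey(Encoding.UTF8.GetBytes(newSharedSecret)),
        new SymmetricSecurityKey(Encoding.UTF8.GetBytes(fallbackSharedSecret)),
    }
};
// Succeeds for tokens signed with either secret.
var principal = new JwtSecurityTokenHandler().ValidateToken(token, parameters, out _);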

Related

Best practices for storing credentials/secrets across devices/teams

TL;DR
An issue I've rarely seen addressed is the storage of secrets/credentials across devices/teams.
Context:
There are countless questions and solutions for storing credentials, API keys, secrets, etc. for devices or backend servers, using secure storage mechanisms or environment variables.
Below are a few solutions specifically designed for deployed systems/apps, for storage on a device, or for credentials on a server.
Process env.
Device Secure Storage
HashiCorp Vault (paid solution)
AWS Secrets Manager for AWS projects
Firebase Secret Manager (pay per read)
All of these are specific to an active implementation or to a deployed device/server, but none provide access to the developer(s), e.g. for running a local emulator with, say, a Stripe webhook integration.
Two scenarios to illustrate my point and emphasize the problem:
Scenario 1:
An on-the-move freelance developer working on a backend / mobile app project. Their primary workstation is a Windows PC, but they frequently use a MacBook for travel and work.
Here, the issue with credentials would be: does one store them in VCS, e.g. GitHub? Surely that would be easiest, but it is not recommended for several security reasons. The alternative is to copy them, electronically or physically, to the 'new' device.
Scenario 2:
A team of 3 is working on a project. Each member works on their own use cases. Two of them require credentials for an online service. Credentials are shared physically or electronically. During development, the credentials are changed (for whatever reason) and need to be redistributed to the team members so they can finish.
Same issue: does one commit these credentials to VCS or share them electronically/physically?
Question
What are common/best practices for sharing API keys/credentials/auth tokens across teams/devices?
The only solution I have found that addresses these needs is git-secret.

Sharing my read-only Azure App Configuration Connection String in a public repo

I'm developing an application and I want it to be open-source.
In production, the application uses the Azure Key Vault Service only to store the database connection string. The connection string is stored in an environment variable on the production server.
Locally, I'm using an InMemory database from EntityFramework, so no sensitive data is accessible.
In production, the application also uses the Azure App Configuration Service. Besides letting me update the configuration of an already running application, it allows me to centralize my application's configuration data.
Locally, I'm using the Azure App Configuration Service too. The READ-ONLY connection string is stored in my User Secrets.
And that's the point I'm struggling with. Is it considered bad practice to share the READ-ONLY App Configuration connection string on GitHub or somewhere else public, even if I don't store any sensitive data?
The Key Vault Service is specifically designed to safely store sensitive data, so in theory the App Configuration Service doesn't hold any sensitive data.
But I can't find any relevant documentation on that topic, and the fact that every tutorial I can find stores the connection string in the user secrets worries me. How can I share my configuration in a safe way to make my project open-source?
From a security perspective you are violating the principle of least privilege by giving the public read access that it doesn't need.
This could raise several risks:
You or someone else maintaining the App Configuration might "forget" about the public read access and put sensitive data there
An attacker might exploit a security bug in App Configuration itself and escalate the read-only permission to read-write, which could not happen if they didn't have read-only access in the first place
You might think the probability of that happening is marginal (which is probably true), but it is there, and in security we always stay on the safe side. That's why we have the principle mentioned above, and it is indeed generally considered bad practice to violate it.
Finally, we always need to choose between usability and security, so in the end you might willfully agree to slightly less security if this makes your life easier and potential trouble from the risks does not scare you.
If you would rather not expose the connection string, you can think about the following (a sketch of the first option follows this list):
abstracting configuration fetching the same way you did for secrets, so that the production app uses App Configuration while local development uses the InMemory database
replacing the connection string with a Terraform script, so that you or any other developer can spin up and populate a dedicated App Configuration instance for local development purposes
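To make the first option concrete, here is a minimal sketch, assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration package and a .NET 6 style startup; the configuration key name is an assumption:

var builder = WebApplication.CreateBuilder(args);

if (builder.Environment.IsProduction())
{
    // Only production attaches Azure App Configuration; the connection
    // string comes from an environment variable, never from the repo.
    builder.Configuration.AddAzureAppConfiguration(
        builder.Configuration["AppConfig:ConnectionString"]);
}
// Local development falls back to appsettings.Development.json and the
// InMemory EF database, so no connection string is needed at all.

var app = builder.Build();
app.Run();

This way the public repo contains no connection string of any kind, read-only or otherwise.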

.NET Core Dependency Injection and services that utilize frequently rotated authorization keys

Issue Summary
I have multiple ASP.NET Core applications that connect to Azure resources such as CosmosDB, Azure Storage Queues, Azure Event Hubs, etc. All of these resources can use Shared Access Signature (SAS) tokens for authentication. These tokens expire, which presents a problem when my application initializes the client only once at startup via services.AddSingleton<T>() (or a similar option).
For example, what I typically do is read the SAS token from a file at startup (likely mounted to my pod as a volume in Kubernetes, but I'm not sure that's terribly relevant). That SAS token is then passed to an Azure Storage Queue client constructor, like this:
string sharedAccessSignature = File.ReadAllText(pathToSasToken);
services.AddSingleton<Azure.Storage.Queues.QueueClient>((sp) =>
{
    // The SAS token is read exactly once, at startup, and baked into
    // the singleton client for the lifetime of the process.
    return new Azure.Storage.Queues.QueueClient(queueUri,
        new AzureSasCredential(sharedAccessSignature),
        new Azure.Storage.Queues.QueueClientOptions()
        {
            MessageEncoding = Azure.Storage.Queues.QueueMessageEncoding.Base64
        });
});
Unfortunately, I think this means that once my SAS token expires, my QueueClient will no longer be able to connect to my Azure Storage Queue without restarting my whole application. Somehow, I need to re-read an updated SAS token from my file while I remain running. (I have another process running in my cluster that provides fresh SAS tokens to my pods.)
Possible Solutions
I figure the IOptionsMonitor approach could be useful, but unfortunately the SDKs for these clients don't accept an IOptionsMonitor<T> in their constructors, so they don't seem to be capable of re-reading new tokens at runtime -- at least not via IOptionsMonitor.
Another approach could be to use Transient or Scoped service lifetimes, but that requires the same service lifetimes throughout my whole dependency chain... So if I have a singleton like a HostedService running, I cannot resolve a Transient or Scoped service from it without unpredictable results (AFAIK). (Update 12/31/2021: this is actually not true. Microsoft provides guidance on how to consume a scoped service in a HostedService, which is a good example of how one can use Scoped services and manage the lifetimes yourself; a sketch of the pattern follows.)
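A minimal sketch of that pattern, with hypothetical names, assuming the QueueClient is registered as Scoped so each scope rebuilds it with a fresh token:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical worker: a singleton HostedService that creates its own
// scope so it can consume Scoped services without lifetime conflicts.
public sealed class QueueWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public QueueWorker(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using var scope = _scopeFactory.CreateScope();
            // Resolved per iteration, so a Scoped registration gets the
            // chance to rebuild the client with a re-read SAS token.
            var client = scope.ServiceProvider
                .GetRequiredService<Azure.Storage.Queues.QueueClient>();
            // ... process messages with client ...
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}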
I could also just manually re-create my clients while my code is running, but that seems to defeat the purpose of using the .NET service provider and DI pattern.
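A variation on that, sketched below under the assumption that an external process keeps the mounted token file fresh (the file path and refresh interval are placeholders): AzureSasCredential is mutable, and its Update method swaps in a new signature without recreating the client.

// Register a single mutable AzureSasCredential and refresh it in the background.
var credential = new AzureSasCredential(File.ReadAllText(pathToSasToken));
services.AddSingleton(credential);
services.AddSingleton<Azure.Storage.Queues.QueueClient>(sp =>
    new Azure.Storage.Queues.QueueClient(queueUri, credential));
services.AddHostedService<SasRefreshService>();

public sealed class SasRefreshService : BackgroundService
{
    private readonly AzureSasCredential _credential;

    public SasRefreshService(AzureSasCredential credential) => _credential = credential;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Update() swaps the signature in place; the singleton
            // QueueClient uses the new value on its next request.
            _credential.Update(File.ReadAllText("/mnt/secrets/sas-token")); // placeholder path
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);       // placeholder interval
        }
    }
}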
Am I missing an obvious solution to this that I'm just not seeing in Microsoft's documentation?
I think you're missing Managed Identities. Rather than relying on SAS tokens, you assign a Managed Identity to your ASP.NET app and grant that identity access to the required services.
Benefits:
no need to redeploy / acquire a new SAS token when it changes or expires
external users won't be able to impersonate this identity (whereas if someone gets hold of a SAS token, they can use it outside the scope of your app)
More info:
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-managed-identity
https://learn.microsoft.com/en-us/azure/cosmos-db/managed-identity-based-authentication
https://learn.microsoft.com/en-us/azure/stream-analytics/event-hubs-managed-identity
https://cmatskas.com/setting-up-managed-identities-for-asp-net-core-web-app-running-on-azure-app-service/
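For illustration, connecting with a managed identity looks roughly like this (assuming the Azure.Identity package; the queue URI is a placeholder):

services.AddSingleton<Azure.Storage.Queues.QueueClient>(sp =>
    new Azure.Storage.Queues.QueueClient(
        queueUri, // e.g. https://<account>.queue.core.windows.net/<queue>
        new Azure.Identity.DefaultAzureCredential(),
        new Azure.Storage.Queues.QueueClientOptions
        {
            MessageEncoding = Azure.Storage.Queues.QueueMessageEncoding.Base64
        }));
// No SAS token to mount, read, or rotate: the credential acquires and
// refreshes OAuth tokens for the app's managed identity automatically.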

How to share Azure Mobile Service tokens across different web app instances

I am planning to have multiple Azure Mobile Service instances, so the first requirement I have is to share the access token of an authenticated user across the different app instances. I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ which states that right now we cannot share the tokens, as they are stored locally on the machine, and placing them in blob storage is not recommended for production apps. What possible solutions do I have at this time?
I have read the blog you mentioned about the App Service Token Store. It says the following about where the tokens live:
Internally, all these tokens are stored in your app’s local file storage under D:/home/data/.auth/tokens. The tokens themselves are all encrypted in user-specific .json files using app-specific encryption keys and cryptographically signed as per best practice.
I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ which states that right now we cannot share the tokens, as they are stored locally on the machine.
As the Azure runtime environment documentation states about the persisted files that an Azure Web App can deal with:
They are rooted in d:\home, which can also be found using the %HOME% environment variable.
These files are persistent, meaning that you can rely on them staying there until you do something to change them. Also, they are shared between all instances of your site (when you scale it up to multiple instances). Internally, the way this works is that they are stored in Azure Storage instead of living on the local file system.
Moreover, Azure App Service enables ARR affinity to keep a client's subsequent requests talking to the same instance. You could disable the session affinity cookie, so that requests are distributed across all the instances. For more details, you could refer to this blog.
Additionally, I have tried disabling ARR affinity and scaling my mobile service to multiple instances, and I could always browse https://[my-website].azurewebsites.net/.auth/me to retrieve information about the current logged-in user.
Per my understanding, you could implement authentication/authorization yourself using auth middleware in your app, but this requires more work. Since the platform takes care of it for you, I assume you can leverage Easy Auth and the Token Store and scale your mobile service to multiple instances without worrying about anything.
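For example, a quick programmatic version of that check (the hostname is a placeholder, and zumoAuthToken stands for the token your client obtained from the login flow):

using var client = new HttpClient();
// Easy Auth validates this header on any instance, because the token
// store lives in shared storage rather than on a single machine.
client.DefaultRequestHeaders.Add("X-ZUMO-AUTH", zumoAuthToken);
var me = await client.GetStringAsync("https://my-website.azurewebsites.net/.auth/me"); // placeholder host
Console.WriteLine(me); // JSON describing the current logged-in user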

Ansible vault password file

I was thinking: since we already have a secret file that we use to access the servers (the SSH private key), how much of a security risk would it be to use this file as the key file for the vault?
The benefit would be that we only have to secure the SSH private key instead of having another key for the vault.
I like your thought of reducing the number of secrets, but I have some concerns about using the Ansible private key.
Scenario
Ideally, the private key you mention exists only on your management machine, from which you run your playbooks. The way I see it, the more this key is distributed among other machines/systems, the more likely it is to get compromised. The Ansible private key usually gives root access to any provisioned machine in your system, which makes it a very valuable secret. I never provision the Ansible private key with Ansible itself (which would be a chicken-and-egg problem anyway, at least on the first management machine).
Problem
One potential problem I see with that approach is local role development, e.g. with Vagrant.
You would need the private key from your management system locally to decrypt the secrets and run your playbooks against your Vagrant boxes.
Also, any other developer who works on the same Ansible project would need that private key locally for development.
Potential workaround
My premise is that the private key does not leave the management server. To achieve that, you could develop your roles in such a way that local development needs no secret decryption at all, e.g. create a local development counterpart for each production group that uses only non-encrypted fake data. That way you would only need to decrypt secrets on your management machine and wouldn't need the private key locally, but of course this increases the development effort of your Ansible project.
I always try to use this approach as much as possible anyway, but from time to time you might find yourself in a scenario in which you still need to decrypt some valid API key for your Vagrant boxes. In some projects you might want to use your Ansible playbooks not only for production servers, but also to locally provision Vagrant boxes for the developers, which is usually when you need to decrypt a certain number of valid secrets.
Also worth mentioning: with this approach, changes to the production secrets can only be made directly on the management server with the private key.
Conclusion
All in all, I think that while it would be theoretically possible to use the private key as the vault password, the benefit of removing one secret is too small compared to the overhead that comes with the extra security concerns.