Obscure Azure storage account name

Is there any merit in creating an obscure Azure storage account name by using the maximum number of random characters/numbers allowed when creating one from the portal?
I know that the account is still going to be publicly visible and accessible with the keys, but is there any benefit in this? Administration from the portal will naturally be trickier with randomly generated account names. Is there a malicious practice of "scanning" storage account names to find ones that exist and potentially abuse them, or are there mechanisms to prevent that? I am aware that obfuscation does not equal security and only delays rather than prevents an attack, but at present I can't see any other way to lock a storage account down to a specific IP address/range.
Is this something you would or wouldn't recommend in practice? Am I just being overly cautious, and are the access keys on their own in fact a good level of security?

I am no security expert, but IMHO you are being overcautious... with the name, that is.
Having said that, it is always good security practice to rotate the access keys at a given frequency. The very reason these services provide primary and secondary access keys is to enable key rotation; think of it as similar to systems requiring a user to change their password every X days.
The frequency can be whatever you prefer, or whatever your in-house security experts consider acceptable.
Although it takes some up-front effort, automating the key rotation process is obviously best.
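As a minimal sketch of what automated rotation could look like, assuming the Azure SDK for Python (azure-identity and azure-mgmt-storage) and placeholder resource names; this is an illustration, not a hardened rotation pipeline:

    # Sketch: rotate one of the two storage account keys with the Azure SDK for Python.
    # Subscription, resource group and account names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    RESOURCE_GROUP = "my-rg"                # placeholder
    ACCOUNT_NAME = "mystorageacct"          # placeholder

    client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def rotate(key_name: str) -> str:
        """Regenerate the named account key (key1 or key2) and return the new value."""
        result = client.storage_accounts.regenerate_key(
            RESOURCE_GROUP, ACCOUNT_NAME, {"key_name": key_name}
        )
        return next(k.value for k in result.keys if k.key_name == key_name)

    # Typical cycle: move applications onto key2, regenerate key1, then swap roles
    # next time so neither key lives longer than one rotation period.
    new_key1 = rotate("key1")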

Azure Function Host key limit?

The Azure Functions documentation is clear on using host and/or function keys to provide "API key" authorization. However, I can't find anything that indicates whether there is a limit on how many keys can be created for a particular function or function app.
I would like to share a unique key with each tenant in a multi-tenant application so I can update or revoke them on a per-tenant basis. However, this approach will only work if I am able to generate hundreds (or potentially thousands) of keys.
Can anyone confirm any known limits on the number of keys that can be generated on a function app?
There aren't any strict limits imposed by the runtime, but we can't make any guarantees that this would be performant at scale.
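For context, a caller presents a function key either in the x-functions-key header or in the code query string parameter, so a per-tenant key scheme looks roughly like this from the client side (the URL and key value below are placeholders):

    # Sketch: calling a function with a per-tenant key. URL and key are placeholders.
    import requests

    FUNCTION_URL = "https://myfuncapp.azurewebsites.net/api/report"  # placeholder
    TENANT_KEY = "<key issued to this tenant>"                       # placeholder

    # Azure Functions accepts the key as a header...
    resp = requests.get(FUNCTION_URL, headers={"x-functions-key": TENANT_KEY})

    # ...or as a query string parameter.
    resp = requests.get(FUNCTION_URL, params={"code": TENANT_KEY})

    resp.raise_for_status()
    print(resp.text)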

Is SID considered as sensitive?

Are Security Identifiers (SIDs) in Windows or Active Directory domains considered sensitive information? Is it possible for a hacker to use that information for malicious purposes?
I would not consider a SID any more sensitive than a GUID. The SID is used to identify objects in ACLs. However, there are well-known SIDs for built-in groups and accounts that make certain objects easily discoverable.
For instance, if you were to rename the built-in Administrator account in AD in an attempt to hide or obscure it, someone could still locate it simply based on its SID. So a hacker or someone with malicious intent could leverage the data, but if they are already able to get hold of it, you probably have bigger things to worry about. Anyone with access to AD can query it to obtain SID information for other users, groups, etc.
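To illustrate the renamed-Administrator point: the built-in Administrator account always carries the well-known relative identifier (RID) 500, so it can be spotted from the SID string alone, whatever display name it has been given. A small self-contained sketch:

    # Illustration: the built-in domain Administrator always has RID 500, so it is
    # identifiable from its SID regardless of how the account has been renamed.
    def is_builtin_administrator(sid: str) -> bool:
        """Return True if a domain SID string ends in the well-known RID 500."""
        return sid.startswith("S-1-5-21-") and sid.split("-")[-1] == "500"

    # Example with a made-up domain identifier:
    print(is_builtin_administrator("S-1-5-21-3623811015-3361044348-30300820-500"))  # True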

Azure blob storage: Shared access signature for multiple containers?

I'm creating an application that will be hosted in Azure. In this application, users will be able to upload their own content. They will also be able to configure a list of other trusted app users who will be able to read their files. I'm trying to figure out how to architect the storage.
I think that I'll create a storage container named after each user's application ID, and they will be able to upload files there. My question relates to how to grant read access to all files to which a user should have access. I've been reading about shared access signatures and they seem like they could be a great fit for what I'm trying to achieve. But I'm evaluating the most efficient way to grant access to users. I think that stored access policies might be useful. But specifically:
Can I use one shared access signature (or stored access policy) to grant a user access to multiple containers? I've found one piece of information which I think is very relevant:
http://msdn.microsoft.com/en-us/library/windowsazure/ee393341.aspx
"A container, queue, or table can include up to 5 stored access policies. Each policy can be used by any number of shared access signatures."
But I'm not sure if I'm understanding that correctly. If a user is connected to 20 other people, can I grant him or her access to twenty specific containers? Of course, I could generate twenty individual stored access policies, but that doesn't seem very efficient, and when they first log in, I plan to show a summary of content from all of their other trusted app users, which would equate to demanding 20 signatures at once (if I understand correctly).
Thanks for any suggestions...
-Ben
Since you are going to have a container per user (for now I'll equate a user with what you called a user application ID), you'll have a storage account that can contain many different containers for many users. If you want the application to be able to upload to only one specific container while reading from many, two options come to mind.
First: create an API that lives somewhere and handles all the requests. Behind the API your code has full access to the entire storage account, so your business logic determines what users do and do not have access to. The upside of this is that you don't have to create Shared Access Signatures (SAS) at all; your app only knows how to talk to the API. You can even combine the data for that summary of content by making parallel calls to the various containers from a single application request. The downside is that you are now hosting an API service which has to broker ALL of these calls. If you go the SAS route instead, you'd still need the API service, but only to generate the SAS; the client applications would then make the calls directly, with the Windows Azure storage service bearing the load, which reduces the resources you actually need.
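A rough sketch of this first, broker-style option, assuming the azure-storage-blob Python package; the connection string, container names, and the trust map are placeholders for whatever your business logic really uses:

    # Sketch of the broker option: the API holds the account credentials and enforces
    # the trust rules itself; clients never talk to storage directly.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder

    # Hypothetical trust map: owner -> users allowed to read the owner's container.
    TRUSTED_READERS = {"user-1234": {"user-5678", "user-9012"}}

    def read_file(requesting_user: str, owner: str, blob_name: str) -> bytes:
        """Return blob contents only if the requester is the owner or a trusted reader."""
        if requesting_user != owner and requesting_user not in TRUSTED_READERS.get(owner, set()):
            raise PermissionError("not a trusted reader of this container")
        blob = service.get_blob_client(container=owner, blob=blob_name)
        return blob.download_blob().readall()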
Second: Go the SAS route and generate SAS as needed, but this will get a bit tricky.
You can only create up to five stored access policies on each container. One of those five would be an "owner" policy for the container, giving its owner read and write permissions. Now, since you are allowing folks to give read permissions to other folks, you'll run into the policy count limit unless you reuse the same policy for Read; but then you won't be able to revoke it for a single person if the user removes someone from their "trusted" list of readers. For example, if I gave permissions to both Bob and James to my container and they are both handed a copy of the Read SAS, and I then needed to remove Bob, I'd have to cancel the Read policy they shared and reissue a new Read SAS to James. That's not really that bad an issue, though, as the app can detect when it no longer has permissions and ask for a renewed SAS.
In any case you still want the policies to be short-lived. If I removed Bob from my trusted readers I'd pretty much want him cut off immediately. This means you'll be going back to get a renewed SAS quite a bit and recreating the shared access signature, which reduces the usefulness of the stored access policies. This really depends on your tolerance for how long you plan to let a policy live and how quickly you want someone cut off once they are "untrusted".
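To make the stored-access-policy mechanics concrete, here is a hedged sketch using the current azure-storage-blob Python package (which postdates the original answer); the container name and connection string are placeholders. Revoking the shared read policy is what forces the "cancel and reissue to James" step described above:

    # Sketch: attach a named read policy to a container, then revoke it by removing
    # the identifier. Names and the connection string are placeholders.
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import BlobServiceClient, AccessPolicy, ContainerSasPermissions

    service = BlobServiceClient.from_connection_string("<connection-string>")
    container = service.get_container_client("user-1234")  # hypothetical container

    read_policy = AccessPolicy(
        permission=ContainerSasPermissions(read=True, list=True),
        expiry=datetime.now(timezone.utc) + timedelta(days=7),
    )
    # Up to 5 identifiers per container; "shared-read" is one of them.
    container.set_container_access_policy(signed_identifiers={"shared-read": read_policy})

    # Revoking: rewrite the identifiers without "shared-read". Every SAS that was
    # issued against that policy stops working immediately.
    container.set_container_access_policy(signed_identifiers={})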
Now, a better option could be to create ad-hoc signatures. You can actually have as many ad-hoc signatures as you want, but they can't be revoked and can last at most one hour. Since you'd make them short-lived anyway, the duration limit and the lack of revocation shouldn't be an issue. Going that route means the application will have to come back to get them as needed, but given what I mentioned above about wanting the SAS to run out quickly when someone is removed, this may not be a big deal. As you pointed out, though, this does increase the complexity because you're generating a lot of SASs; however, since they are ad hoc, you don't really need to track them.
If you were going to go the SAS route, I'd suggest that your API generate the ad-hoc signatures as needed. They shouldn't last more than a few minutes, since people can have their permission to a container removed, and all you are trying to do is reduce the load on the hosted service for the actual uploads and downloads. Again, all the logic for deciding which containers someone can see still lives in your API service; the applications just get signatures they can use for short periods of time.
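As a sketch of the "API issues short-lived ad-hoc SAS" step with the azure-storage-blob Python package (account name, key, and container names are placeholders):

    # Sketch: an API-side helper that issues a short-lived, read-only ad-hoc SAS
    # for one container. Account name/key and container names are placeholders.
    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import generate_container_sas, ContainerSasPermissions

    ACCOUNT_NAME = "mystorageacct"   # placeholder
    ACCOUNT_KEY = "<account key>"    # placeholder

    def issue_read_sas(container_name: str, minutes: int = 5) -> str:
        """Return a SAS token granting read/list on one container for a few minutes."""
        return generate_container_sas(
            account_name=ACCOUNT_NAME,
            container_name=container_name,
            account_key=ACCOUNT_KEY,
            permission=ContainerSasPermissions(read=True, list=True),
            expiry=datetime.now(timezone.utc) + timedelta(minutes=minutes),
        )

    # The client appends the token to the container/blob URL it wants to read:
    #   https://mystorageacct.blob.core.windows.net/<container>/<blob>?<sas-token>
    sas = issue_read_sas("user-5678")  # hypothetical trusted friend's container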

OpenID retrofitting and can I trust where sensitive data is involved?

I am considering adding OpenID to our customer facing admin and control panel areas...
1 - Associating OpenID's With Existing Accounts
For customers that already have accounts with us, I'm thinking they would need to log in using the existing account number that we issue, and then I'd have a mechanism to associate their OpenID with that account in their account management area (call it 'OpenID Manager' for the sake of argument).
In the 'OpenID Manager', presuming the user already has an OpenID, would I authenticate the user against their OpenID and then associate it with our generated account number for future OpenID logins (assuming they authenticated OK)?
2 - Sensitive Data
Although we don't store full credit card data in our DB, there is other data that is sensitive: invoices, domain registration details, etc. After reading this article http://idcorner.org/2007/08/22/the-problems-with-openid/ I'm a little cautious about the idea of using OpenID in this way. What's the general consensus with you folks?
It seems to me that a lot of the arguments against OpenID are either made out of ignorance or by people with an axe to grind.
For example, the document you link to complains that identifying yourself with a URI is "dehumanising and more than a little frightening". Is that a legitimate complaint, or something written by somebody desperate to find things to complain about?
The two major things that get brought up are phishing and compromised accounts, and these arguments have been rehashed so many times that it's hard to take somebody seriously if they bring them up yet again with no new points to make.
Phishing protection depends on the provider. Some providers offer much better security than typical websites ever would; some just offer the usual username and password. Either way, if an account is compromised, that's something between the user and their provider; it's not your concern. You don't worry about whether the end user has a keylogger installed on their computer, do you? That's because their local security isn't your responsibility, even though it might be used to gain access to their account. Likewise with OpenID: its security is not your responsibility.
"If someone compromises an OpenID, they get access to more than a single website." Sure, but the same is true for email: just say you've forgotten your password and you get sent a new one. You now have access to every account registered with that email address.
OpenID is no worse than the status quo, and it's significantly better in many circumstances, especially for informed users. If you are still wary of it, then just make it optional, so only the informed users use it.
I'd allow the registration of multiple OpenIDs with a particular account. That's a nice feature to have because it allows users to migrate between OpenIDs should the need ever arise.
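A minimal, hypothetical sketch of such a mapping (one row per OpenID, many rows allowed per account; the table and function names are invented for illustration, and the association should only be written after the OpenID assertion has been verified):

    # Hypothetical sketch: store each verified OpenID identifier against the
    # existing internal account number; multiple OpenIDs may map to one account.
    import sqlite3

    db = sqlite3.connect("accounts.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS openid_identity (
            openid_url TEXT PRIMARY KEY,     -- the verified claimed identifier
            account_number TEXT NOT NULL     -- your internally issued account number
        )
    """)

    def associate_openid(account_number: str, verified_openid_url: str) -> None:
        """Call this only after the OpenID provider has confirmed the assertion."""
        db.execute(
            "INSERT OR REPLACE INTO openid_identity (openid_url, account_number) VALUES (?, ?)",
            (verified_openid_url, account_number),
        )
        db.commit()

    def account_for_openid(verified_openid_url: str):
        """Look up the internal account for an already-verified OpenID, if any."""
        row = db.execute(
            "SELECT account_number FROM openid_identity WHERE openid_url = ?",
            (verified_openid_url,),
        ).fetchone()
        return row[0] if row else None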
That said, the idcorner link does raise a good point. I think the author massively overblows the security issue and makes many idiotic assumptions about how OpenID providers work, but he's right that OpenID really isn't intended to replace all forms of user authentication. It's designed to make it easy for "drive-by" users to interact with a site with some form of basic authentication.
Ever been to somebody's blog, want to post a comment, but first you have to step through a 3-page registration? OpenID solves that problem.
Want to post a quick bug report on a public tracker but need an account first? OpenID to the rescue.
Want to store sensitive proprietary data in a web-accessible way and provide access only to people who are trusted? OpenID is not the solution.

Does it make sense to set up a trusted relationship between Active Directory instances at partner companies?

If a company often requires users to be created in a partner's active directory, and vice versa, does it make sense to set up a federated / trusted relationship between the AD instances? If so, what should be considered? Does the ACL for users in the partner AD still work the same way? What security risks does this expose?
Thanks!
KA
Update:
I've learned that there's a better way to do this: have the application itself check the user stores. The best way to do that is to move the application into a domain trusted by both user stores. I've provided more detail in my answer below.
I've been researching this a bit more, and I've found a good solution. Since both companies need to use the same system, the system itself just needs to verify whether a user exists in either of the user stores (authentication) and then do the authorization at the system level.
The idea behind giving both companies access is solid: if we are working together and didn't have a way to do this, we'd need to re-create all the users from the company without access in the connected user store. Obviously, this would be a total mess and a maintenance nightmare.
I found out that in my case, even though both ADs are on the same WAN, it's still necessary to have a formal federation or trust. Thankfully, we already have a domain that's trusted by both companies, so I just have to move the applications used by the partners into this domain. After that, it's simply a matter of fully qualifying the DNS suffix to indicate which AD is being used. Application-specific ACLs then reference the desired user store.
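As an illustration of how the fully-qualified suffix selects the user store: an application sitting in the trusted domain can bind with whatever UPN the user supplies and let the trust route the authentication. A rough sketch using the third-party ldap3 package, with placeholder host and domain names:

    # Rough sketch: authenticate a user by UPN against a domain controller in the
    # trusted domain; the suffix (companyA.example / companyB.example) determines
    # which user store actually answers. Host and domain names are placeholders.
    from ldap3 import Server, Connection, SIMPLE

    DC_HOST = "dc01.shared.example"  # placeholder domain controller

    def authenticate(upn: str, password: str) -> bool:
        """Attempt a simple bind with the user's UPN; True means the bind succeeded."""
        server = Server(DC_HOST, use_ssl=True)
        conn = Connection(server, user=upn, password=password, authentication=SIMPLE)
        ok = conn.bind()
        conn.unbind()
        return ok

    # The same code path handles users from either company:
    authenticate("alice@companyA.example", "secret")
    authenticate("bob@companyB.example", "secret")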
Yeah, it makes sense if you want both companies to be able to authenticate people across multiple domains. You have to put the server that hosts the application you're targeting in a domain trusted by every AD instance you want to use for authentication.
