I'm working in an IaaS environment in Azure and need to create a shared file location for applications that will share the same files uploaded by end users. The file share needs to be seen on various servers and appear as a fixed drive letter or mount point. I have already created a storage account and a file share in Azure, but cannot get past the issue that the mapped drive is associated with a user's profile.
I was wondering if anyone has come up with a solution. I'm the system administrator assigned to this task and can do things in PowerShell or pass code to the developers for their review.
We did not resolve the issue; the developers are going to use Blob storage instead.
The trick with this was getting the application to see the drive letter. For us, having a local user account run the application as a service, with the Azure file share mapped under that account, might have worked.
Note: to map the Azure file share, a user needs the Azure storage account name and the access key generated for that account.
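For anyone hitting the same profile-bound mapping problem, below is a minimal PowerShell sketch of the kind of machine-wide mapping we had in mind. It assumes Windows Server 2016 or later (for New-SmbGlobalMapping), outbound access to Azure Files on port 445, and placeholder names (mystorageacct, myshare, Z:), so treat it as a starting point rather than a tested solution.

```powershell
# Minimal sketch: map an Azure file share machine-wide so services can see it.
# Assumes Windows Server 2016+ and placeholder names (mystorageacct, myshare, Z:).
$storageAccount = "mystorageacct"          # placeholder storage account name
$shareName      = "myshare"                # placeholder file share name
$storageKey     = "<storage-account-key>"  # key from the Azure portal

# Persist the credential so the mapping survives reboots.
cmdkey /add:"$storageAccount.file.core.windows.net" /user:"AZURE\$storageAccount" /pass:$storageKey

# New-SmbGlobalMapping (Windows Server 2016+) creates a mapping that is visible
# to all users and services on the machine, unlike net use / New-PSDrive.
$secureKey  = ConvertTo-SecureString $storageKey -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("AZURE\$storageAccount", $secureKey)
New-SmbGlobalMapping -RemotePath "\\$storageAccount.file.core.windows.net\$shareName" `
                     -Credential $credential -LocalPath "Z:" -Persistent $true
```

Unlike net use or New-PSDrive, a global SMB mapping is visible to services and scheduled tasks regardless of which user profile they run under.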
I have an Azure Pay-As-You-Go subscription with an Azure Storage general-purpose V1 account where I store files. Yesterday I was surprised to find another storage account, in a different location, which I hadn't created. A screenshot is given for details:
If you know anything about this or have seen the same behaviour with Azure Storage, please share your experience. I want to know what it is for and why it was created in a different location, since my other services are in a different resource group in the North Europe location.
When you use Azure Cloud Shell for the first time, it prompts you to associate a new or existing file share to persist files across sessions.
When you use the basic settings and select only a subscription, Cloud Shell creates three resources on your behalf in the supported region that's nearest to you. The auto-generated storage account is always named cs<uniqueGuid>; see the Cloud Shell documentation for details.
Azure also creates a disk image of your $Home directory to persist all contents within it. The disk image is saved in your specified file share as acc_<User>.img at fileshare.storage.windows.net/fileshare/.cloudconsole/acc_<User>.img, and changes are synced automatically.
As for the region: it depends on which region you selected for the associated storage account when you first started Cloud Shell. Associated storage accounts must reside in the same region as the Cloud Shell machine you're mounting them to, so this region is entirely unrelated to the region of your other resource groups. You can also run clouddrive unmount to pick a different associated storage account for the Azure file share.
To find your current region you may run env in Bash and locate the variable ACC_LOCATION.
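If you use the PowerShell experience in Cloud Shell rather than Bash, a rough equivalent is sketched below, assuming the built-in Cloud Shell utility commands (Get-CloudDrive, Dismount-CloudDrive) are available in your session.

```powershell
# Sketch for the Cloud Shell PowerShell experience (run inside Cloud Shell).

# Show the storage account, file share and mount point currently backing
# the $HOME persistence of this Cloud Shell session.
Get-CloudDrive

# Detach the current file share; the next time Cloud Shell starts it will
# prompt you to choose (or create) an associated storage account again.
Dismount-CloudDrive
```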
We have been trying Microsoft's Azure platform for our startup.
A developer created servers and storage among other things in the account.
With the same login account I also have a OneDrive account containing personal files.
I have stopped the servers and want to delete the storage in Azure. Is it safe to delete it without deleting my storage on OneDrive?
Are they separate? Can I delete the Azure storage without deleting the things I have on OneDrive?
Best Regards,
Daniel
Azure Storage and OneDrive are completely different services. Azure Storage is a paid, commercial storage solution, and any changes you make there won't affect your OneDrive (which is targeted at personal use and free, I think).
Our team has Windows Azure MSDN - Visual Studio Premium subscriptions for all our devs. I have been taking advantage of the $100 per month allowance and am building more infrastructure in the cloud.
However, I would like other members of our team to be able to access some of the assets. I am quite new to the Azure infrastructure, so this might be a dumb question: can they access my blobs, and can I control exactly who can access my blobs?
They can obviously RDP into my VMs; that's not an issue. I assume they can also reach my VMs via the IP address from inside Azure, etc. However, I am more interested in the blobs, mostly because I am starting to upload a lot of utility data (large sample datasets, common software we all install, etc.) and I would like to avoid all of us having to upload it again for each subscription.
As of today (11/8/2013), you cannot "pool" MSDN resources, meaning you cannot have four subscriptions add up to $400/month and consume cloud services à la carte.
You can have one admin (or several) across multiple subscriptions; this lets you view the different subscriptions in the portal and manage them in a single place.
You can also have different deployment profiles, so one Visual Studio instance can deploy to different Azure accounts.
Specific to your question: blob storage is protected by access keys, and if you share the storage account name and key, then yes, they can access the data stored there.
Yes, it is possible to control access to your blobs by using SAS (Shared Access Signatures).
A SAS grants granular access to containers, blobs, tables, and queues.
These should be good resources to start with:
Manage Access to Windows Azure Storage Resources
Create and Use a Shared Access Signature
"However, I would like other members of our team to be able to access some of the assets. I am quite new to the Azure infrastructure, so this might be a dumb question: can they access my blobs, and can I control exactly who can access my blobs?"
To answer this question specifically: yes, your team members can access the data stored in any blob storage account in any of your subscriptions. There are two ways you can give them access to blob storage (both are sketched below):
By giving them the account name and account key: with this they get full access to the storage account and essentially become owners of it.
By using a Shared Access Signature: if you want to give them restricted access to blob storage, use SAS as described by Dan Dinu. A SAS gives you a URL, and anyone in possession of that URL can explore storage (by writing some code); however, it is not possible to identify which user accessed which storage. For that you would need to build something of your own.
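For illustration only, here is a minimal sketch of both approaches using the modern Az.Storage PowerShell module (which post-dates this question); the storage account name, key and container name are placeholders.

```powershell
# Minimal sketch using the modern Az.Storage module (this question predates it);
# the storage account name, key and container name are placeholders.
Import-Module Az.Storage

# Option 1: full access via the account key. Anyone holding the key is
# effectively an owner of the storage account.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" `
                            -StorageAccountKey "<account-key>"

# Option 2: restricted access via a Shared Access Signature, e.g. read-only
# access to a single container for 24 hours. -FullUri returns a ready-to-share URL.
$sasUri = New-AzStorageContainerSASToken -Name "utilitydata" -Permission "r" `
            -ExpiryTime (Get-Date).AddHours(24) -Context $ctx -FullUri
$sasUri   # hand this URL to teammates; it stops working after the expiry time
```

A SAS can also be scoped to a single blob and to specific permissions (read, write, list, and so on), which is usually preferable to handing out the account key.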
I have a number of executables that I want to migrate to Azure. These executables require the file I/O API in order to function properly, so to migrate them I have to place them inside a cloud drive (VHD). But as far as I know a VHD can only be mounted by one role at a time, so how can I arrange for, say, two or more roles to mount the same VHD?
My first thought is to upload copies of the original VHD so each role can mount its own VHD and read/write to it, and then save the files that all roles need to see in blob storage.
Alternatively, make the VHD sharable, as described in "Using SMB to Share a Windows Azure Drive".
Which of the two solutions is better, and is there another way I can deal with this issue?
Here's a link to an open source project that bridges System.IO calls to an ASP.NET cloud storage provider.
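To illustrate the first option from the question (one VHD copy per role instance, with shared files in blob storage), a rough sketch with the modern Az.Storage cmdlets might look like the following; the cmdlets post-date the original question, and the storage account, container and blob names are placeholders.

```powershell
# Rough sketch of the "one VHD copy per role instance" idea with the modern
# Az.Storage cmdlets (the original question predates them); the storage account,
# container and blob names are placeholders.
Import-Module Az.Storage

$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account-key>"

# Clone the base VHD page blob once per role instance so each instance can
# mount its own read/write copy.
foreach ($instance in 0..1) {
    Start-AzStorageBlobCopy -SrcContainer "vhds" -SrcBlob "base.vhd" `
                            -DestContainer "vhds" -DestBlob "role-$instance.vhd" `
                            -Context $ctx -DestContext $ctx
}

# Files that every instance needs to see live in a shared container instead.
Set-AzStorageBlobContent -Container "shared-files" -File ".\common-data.zip" -Context $ctx
```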
Currently I'm playing around with Azure and thinking about a multi-tenant web app where users can create an instance of the app, and more users can then register within that instance to upload and share files. I've created a blob storage service and several containers. However, I'm not sure how customers will feel about the fact that they share their blob service with other users and that files are only separated by containers. I would prefer each user to get their own blob service instead, while the web app is still served by a single web/worker role.
This sounds easy for every instance you create by hand, but I want the blob service to be created automatically when a user registers and creates their instance of the web app. Unfortunately, I haven't yet found any information on how to accomplish this; I've only found the blob storage API for querying the service, not for creating it.
Can anybody lead me in the right direction? Is this even possible?
You can create a storage account programmatically (see "Create Storage Account": http://msdn.microsoft.com/en-us/library/hh264518.aspx), but I wouldn't recommend creating a different account for each user. The limit on how many storage accounts can be created per subscription is fairly low. (I believe the default is five and you can call to get your quota increased to twenty.)
In general, the recommendation is to go ahead and use the same storage account for all your customers. I believe your concern is about data security, but adding multiple storage accounts doesn't really change the security dynamic. (The trust boundary is still between you and the end user, since only your code will directly access storage.)
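For completeness, if you do decide to create storage accounts programmatically, a rough sketch with the modern Az PowerShell module (rather than the Service Management API linked above) might look like this; the resource group name, account name prefix and region are placeholders.

```powershell
# Sketch of programmatic storage account creation with the modern Az module
# (the article linked above describes the older Service Management API).
# The resource group, account name prefix and region are placeholders.
Import-Module Az.Storage

# Storage account names must be globally unique: 3-24 lowercase letters and digits.
$accountName = "tenant" + (Get-Random -Maximum 999999)

New-AzStorageAccount -ResourceGroupName "my-app-rg" `
                     -Name $accountName `
                     -Location "westeurope" `
                     -SkuName "Standard_LRS"
```

The per-subscription storage account limits have been raised considerably since this answer was written, but the advice still holds: separating tenants by container (or blob name prefix) within a shared account is simpler to manage than one account per user.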