I have logged into the Azure Portal and created a storage account and a container to hold the images (the container's access type is set to Container).
Now I'm trying to upload an image using C# code. (Uploading an image via the portal works fine, and I can access the image.)
// actName contains the storage account name; actKey contains an access key for that account.
// (Types from the classic Azure Storage SDK, e.g. the WindowsAzure.Storage package.)
StorageCredentials creds = new StorageCredentials(actName, actKey);
CloudStorageAccount imageStorageAccount = new CloudStorageAccount(creds, useHttps: true);
CloudBlobClient blobClient = imageStorageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("newimages");
container.CreateIfNotExists();
It throws the following error:
No connection could be made because the target machine actively
refused it 51.136.40.24:443
Any input on this will be helpful.
Update 19/12/2016
It required me to change the application pool Identity from ApplicationPoolIdentity to a custom user name and account. Is there a way to make this work without changing the ApplicationPoolIdentity?
Related
I am using libraries Microsoft.Azure.Storage.Blob 11.2.3.0 and Microsoft.Azure.Storage.Common 11.2.3.0 to connect to an Azure BlobStorage from a .NET Core 3.1 application.
Users of my application are supposed to supply connection information to an Azure BlobStorage to/from where the application will deposit/retrieve data.
Initially, I had assumed allowing users to specify a connection string and a custom blob container name (as an optional override of the default) would be sufficient. I could simply stuff that connection string into the CloudStorageAccount.Parse method and get back a storage account instance to call CreateCloudBlobClient on.
Now that I'm trying to use this method to connect using a container-specific SAS (also see my other question about that), it appears that the connection string might not be the most universal way to go.
Instead, it now seems a blob container URL, plus a SAS token or an account key (and possibly an account name, though that seems to be included in the blob container URL already), is more versatile. However, I am concerned that the next way of pointing to a blob storage that I need to support (whichever that may be) might require yet another kind of information - hence my question:
What set of "fields" do I need to support in the configuration files of my application to make sure my users can point to their BlobStorage whichever way they want, as long as they have a BlobStorage?
(Is there maybe even a standard solution or best practice recommendation by Microsoft?)
Please note that I am exclusively concerned with what to store. An arbitrarily long string? A complex object of sorts? If so, with what fields?
I am not asking how to store that configuration once I know what it must comprise. For example, this is not about securely encrypting credentials etc.
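For illustration, here is a minimal sketch of the kind of settings object I have in mind; the field names are my own invention, and which combinations of fields must be supported is exactly what I am unsure about:
// Hypothetical settings shape; field names are placeholders, not an established schema.
public class BlobStorageSettings
{
    // Option A: a full account connection string (may itself embed a SAS).
    public string ConnectionString { get; set; }

    // Option B: a container URL plus a SAS token, or an account name/key pair.
    public string ContainerUrl { get; set; }
    public string SasToken { get; set; }
    public string AccountName { get; set; }
    public string AccountKey { get; set; }

    // Optional override of the default container name.
    public string ContainerName { get; set; }
}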
Workaround: to access the storage account using a SAS token, you need to pass the account name along with the SAS token (and the blob name, if you are uploading), and you need to grant the appropriate permissions to your SAS token.
Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests against blob and queue data if possible, instead of Shared Key. Azure AD provides superior security and ease of use over Shared Key. For more information about authorizing access to data with Azure AD, see Authorize access to Azure blobs and queues using Azure Active Directory.
Note: based on my tests, you need to pass the storage account name and SAS token, plus the container name and blob name.
Example: I tried uploading a file to a container using a container-level SAS token and was able to upload the file successfully.
const string sasToken = "SAS Token"; // container-level SAS token
StorageCredentials storageCredentials = new StorageCredentials(sasToken);
const string accountName = "teststorage65"; // account name
const string blobContainerName = "test";
const string blobName = "test.txt";
const string myFileLocation = @"Local Path"; // path to the local file to upload
var storageAccount = new CloudStorageAccount(storageCredentials, accountName, null, true);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference(blobContainerName);
//blobContainer.CreateIfNotExists(); // requires a SAS that permits container creation
CloudBlockBlob cloudBlob = blobContainer.GetBlockBlobReference(blobName);
cloudBlob.UploadFromFile(myFileLocation);
As you already know, you can use the storage connection string to connect to Storage.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("Connection string");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("test");
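For reference, an account-level connection string generally has the following shape (the values below are placeholders; copy the real string from the Access keys blade in the portal):
// Placeholder values; string literal concatenation keeps this a compile-time constant.
const string connectionString =
    "DefaultEndpointsProtocol=https;" +
    "AccountName=<your-account-name>;" +
    "AccountKey=<your-account-key>;" +
    "EndpointSuffix=core.windows.net";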
Your application needs to access the connection string at runtime to authorize requests made to Azure Storage.
You have several options for storing your connection string or SAS token:
1) You can store your connection string in an environment variable (see the sketch after this list).
2) An application running on the desktop or on a device can store the connection string in an app.config or web.config file. Add the connection string to the AppSettings section in these files.
3) An application running in an Azure cloud service can store the connection string in the Azure service configuration schema (.cscfg) file. Add the connection string to the ConfigurationSettings section of the service configuration file.
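As a minimal sketch of option 1 (the environment variable name here is just an example; use whatever name your deployment defines):
// Read the connection string from an environment variable, then parse it.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);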
Reference: https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string
Is there any difference between Azure storage in a local environment and online Azure storage?
We have created local Azure storage using the storage emulator. Refer to the links below.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-emulator
https://medium.com/oneforall-undergrad-software-engineering/setting-up-the-azure-storage-emulator-environment-on-windows-5f20d07d3a04
But we are unable to read files from the local Azure storage. Refer to the code below.
const string accountName = "devstoreaccount1";// Provide the account name
const string key = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";// Provide the account key
var storageCredentials = new StorageCredentials(accountName, key);
var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);
// Connect to the blob storage
CloudBlobClient serviceClient = cloudStorageAccount.CreateCloudBlobClient();
// Connect to the blob container
CloudBlobContainer container = serviceClient.GetContainerReference("container name");
container.SetPermissionsAsync(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});
// Connect to the blob file
CloudBlockBlob blob = container.GetBlockBlobReference("sample.txt");
blob.DownloadToFileAsync("sample.txt", System.IO.FileMode.Create);
// Get the blob file as text
string contents = blob.DownloadTextAsync().Result;
The above code works correctly for reading files in online Azure storage. Can anyone suggest how to resolve the issue of reading files in local Azure storage?
The documentation linked above clearly explains the differences between the Storage Emulator and Azure Storage.
If you would like to access the local storage, you can call this API. For more details about the URI format, see here.
GET http://<local-machine-address>:<port>/<account-name>/<resource-path>
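Alternatively, from C# with the classic SDK, you can connect to the emulator through its well-known shortcut connection string instead of the production-endpoint constructor used in the question; a minimal sketch:
// "UseDevelopmentStorage=true" expands to the emulator's built-in devstoreaccount1
// account and its local endpoints (e.g. http://127.0.0.1:10000 for blobs).
CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("container name");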
We are currently migrating to a new Azure Subscription and are having issues executing Azure Functions that worked as expected in our old Azure Subscription. The main difference between our old Subscription and our new Subscription is that we have set up a Virtual Network with subnets and have deployed our resources behind those subnets.
We have also had to migrate from an Azure App Service in the old Subscription to an App Service Environment in the new Subscription.
Our Azure environment consist of:
App Service Environment
App Service Plan I1
The App Service Environment and Storage Containers are on the same Virtual Network but in different subnets. The Function uses a Managed Identity which has the Owner role on the Storage Account.
The code listed below worked just fine in our old environment which did not contain the Virtual Network, but fails in our new environment.
Any guidance would be greatly appreciated.
The Azure function which connects to Azure Storage works when run locally from Visual Studio 2019, but fails when run from Azure portal.
Code Snippet below:
This section works just fine:
string storageConnectionString = XXXXConn.ConnectionETLFileContainer(); // Get Storage connection string
var myDirectory = "XXXX/Uploads"; // XXXX-etl-file-ingest/ABSS/Uploads/
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); // Create a CloudBlobClient object for credentialed access to Azure Blob.
CloudBlobContainer blobContainer = blobClient.GetContainerReference("XXXX-etl-blobfile-ingest"); // Get a reference to the Blob Container we created previously.
CloudBlobDirectory blobDirectory = blobContainer.GetDirectoryReference(myDirectory); // Get a reference to the Blob Directory.
var blobs = blobDirectory.ListBlobs(useFlatBlobListing: true); // set useFlatBlobListing to true
This statement fails; the failure occurs when trying to iterate through the blob files and get specific file info:
foreach (var myblob in blobs)
In the Azure portal, open the storage account blade and go to its network configuration; there you will see the list of networks that are allowed to access your storage account. Once you have the allowed-network list, check whether the function app is on one of those networks. If it is not, you need to add the network on which your function app is hosted to the list.
Update 2:
The simplest explanation/cause that I found is when an App Service or Function App has the setting WEBSITE_VNET_ROUTE_ALL set to 1, all traffic to public endpoints is blocked. So if your Storage Account has no private endpoint configured, requests to it will fail.
Docs: "To block traffic to public addresses, you must have the application setting WEBSITE_VNET_ROUTE_ALL set to 1."
https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#network-security-groups
Update 1:
My answer below was only a workaround for my problem. It turns out I had not linked the Private DNS Zone (which is created for you when you create a new Private Endpoint) to my VNET.
To do this, go to your Private DNS Zone in the Azure Portal and click on Virtual network links in the left menu bar. There add a new link to the VNET your Function is integrated in.
This may not have been relevant for the OP, but hopefully it will help others.
Original answer:
In my case this was solved by enabling the Microsoft.Storage Service Endpoint on the App Service's subnet (dedicated subnet).
I have a container in Azure blob storage with the Access Policy set to "Blob". The container has existing blobs that I would like to protect with a Shared Access Policy.
I noticed if I create a container shared access policy...
// Fetch the container's existing permissions so the new policy is added rather than replacing them.
BlobContainerPermissions permissions = _blobContainer.GetPermissions();
var sharedPolicy = new SharedAccessBlobPolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(120),
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write
};
permissions.SharedAccessPolicies.Add(Guid.NewGuid().ToString(), sharedPolicy);
_blobContainer.SetPermissions(permissions);
I am still able to read the existing blobs. I expected the existing blobs to be "protected" by the newly created SAP.
How can I apply a SAP to an existing blob?
Can all SAP's be removed from a container to expose all the blobs publicly again (assuming you know the url)? Or does removing the SAP make the blobs inaccessible somehow?
If I am using SAP's to protect the blobs, can I set the container's Access Policy to "private" and have it still work?
Thanks!
I tinkered with this for a while. Here is what I observed...
Access Policy = Blob
Anyone who knows the URI of a blob can read that blob (regardless of SAPs).
Access Policy = Private
To read you must...
Connect with the storage access key
OR have a valid Shared Access Signature
OR have a valid Stored Access Policy (on the container)
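A sketch of how the pieces fit together: with the container's access policy set to Private, you can issue a SAS that references a stored access policy, and revoking or deleting that policy later invalidates every SAS issued from it (the policy identifier and blob name below are placeholders):
// "my-policy" must match the identifier passed to SharedAccessPolicies.Add earlier.
string sasToken = _blobContainer.GetSharedAccessSignature(
    null,         // no ad-hoc parameters; expiry and permissions come from the stored policy
    "my-policy"); // identifier of the stored access policy on the container

// Append the SAS token to a blob URI to grant access to that blob.
CloudBlockBlob blob = _blobContainer.GetBlockBlobReference("sample.txt");
string blobUrlWithSas = blob.Uri + sasToken;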
There are no sub-containers (directories) in Azure blob storage. I simply use slashes in the file name to get virtual sub-directories.
Here is an example.
http://apolyonstorage.blob.core.windows.net/banners/Local/Homepage/index.html
Container Name: banners
File Name : Local/Homepage/index.html
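For context, this is roughly how such a blob is created (a sketch; the container reference setup is omitted and the local file path is hypothetical):
// The slashes are just part of the blob name; no real directory is created.
CloudBlockBlob blob = container.GetBlockBlobReference("Local/Homepage/index.html");
blob.UploadFromFile(@"index.html");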
I uploaded the file and can access it in the Azure portal, but simply following the link fails, saying the resource is not found, even though it actually exists.
Why does it fail when I access it in a browser by link?
What's the ACL on the blob container? For a blob to be accessible directly via URL (in other words, with anonymous access), the blob container's ACL should be either Blob or Container.
You can use the sample code below to change the container's ACL:
CloudStorageAccount account = new CloudStorageAccount(new StorageCredentials(StorageAccount, StorageAccountKey), true);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("your-container-name");
BlobContainerPermissions permissions = new BlobContainerPermissions()
{
    PublicAccess = BlobContainerPublicAccessType.Blob // or BlobContainerPublicAccessType.Container
};
container.SetPermissions(permissions);