I am using "Azure Storage File Shares" to store some files from our website, but failed with error message "The specified share already exists".
I have change the file that being upload, but the error persist.
Here my code
public static void Test2Upload()
{
    System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
    string connectionString = "DefaultEndpointsProtocol=https;AccountName=xxxxx;AccountKey=xxxxx;EndpointSuffix=core.windows.net";
    string shareName = "myapp-dev";
    string dirName = "files";
    string fileName = "catto.jpg";
    // Path to the local file to upload
    string localFilePath = @"d:\temp\two.jpg";
    // Get a reference to a share and then create it
    ShareClient share = new ShareClient(connectionString, shareName);
    share.Create();
    // Get a reference to a directory and create it
    ShareDirectoryClient directory = share.GetDirectoryClient(dirName);
    directory.Create();
    // Get a reference to a file and upload it
    ShareFileClient file = directory.GetFileClient(fileName);
    using (FileStream stream = File.OpenRead(localFilePath))
    {
        file.Create(stream.Length);
        file.UploadRange(
            new HttpRange(0, stream.Length),
            stream);
    }
}
Looks like I should not create a ShareClient with the same name several times. Then how do I check for it and reuse it?
The most important question is: why is the file still not uploaded (even if I rename the ShareClient object)?
Looks like I should not create a ShareClient with the same name several times. Then how do I check for it and reuse it?
You can use ShareClient.CreateIfNotExists instead of the ShareClient.Create method. The former will try to create a share, but if the share already exists it won't be changed.
You can also use ShareClient.Exists to check whether the share exists and then create it using ShareClient.Create if it does not. This is not recommended, however, because it can misbehave if multiple clients execute that code at the same time. Furthermore, you will be making two network calls: the first to check the existence of the share and the second to create it.
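For illustration, a minimal sketch of both variants (the check-then-create form is shown only to make the race condition visible):
// Preferred: a single idempotent call - no error if the share already exists
ShareClient share = new ShareClient(connectionString, shareName);
share.CreateIfNotExists();
// Not recommended: two network calls, and another client may create the
// share between the Exists() check and the Create() call
if (!share.Exists().Value)
{
    share.Create();
}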
The most important question is: why is the file still not uploaded (even if I rename the ShareClient object)?
Your code for uploading the file looks ok to me. Are you getting any error in that code?
We can use ShareClient.CreateIfNotExists when creating the ShareClient object to avoid the problem, like below:
ShareClient share = new ShareClient(connectionString, shareName);
share.CreateIfNotExists();
You will hit a similar problem with ShareDirectoryClient. The purpose of that part is to create the folder structure: the upload will fail if the destination folder does not exist, and an error occurs if we create a folder that already exists. So use the method ShareDirectoryClient.CreateIfNotExists, like below:
ShareDirectoryClient directory = share.GetDirectoryClient(dirName);
directory.CreateIfNotExists();
Here is my complete code:
public static void TestUpload()
{
    System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
    string connectionString = "DefaultEndpointsProtocol=https;AccountName=xxx;AccountKey=xx;EndpointSuffix=core.windows.net";
    string shareName = "myapp-dev";
    string dirName = "myfiles";
    string fileName = "catto.jpg";
    string localFilePath = @"d:\temp\two.jpg";
    // Get a reference to a share and create it only if it does not exist yet
    ShareClient share = new ShareClient(connectionString, shareName);
    share.CreateIfNotExists();
    // Get a reference to a directory and create it only if it does not exist yet
    ShareDirectoryClient directory = share.GetDirectoryClient(dirName);
    directory.CreateIfNotExists();
    // Get a reference to a file and upload it
    ShareFileClient file = directory.GetFileClient(fileName);
    using (FileStream stream = File.OpenRead(localFilePath))
    {
        file.Create(stream.Length);
        file.UploadRange(
            new HttpRange(0, stream.Length),
            stream);
    }
}
[FunctionName("FileShareDirRead02")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // Get the contents of the POST and store them into local variables
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    // The following variable values are passed in to the function through HTTP POST, or via parameters specified in the data factory pipeline
    string storageAccount = data.storageAccount; // Name of the storage account containing the file share you plan to parse and remove files from
    string fileshare = data.fileShare; // Name of the file share within the storage account
    string folderPath = data.folderPath; // With no leading slash; this is considered the ROOT of the file share. Parsing only goes down from here, never up.
    string keyVaultName = data.keyvaultName; // Name of the key vault where the storage SAS token is stored
    int daysoldbeforeDeletion = data.daysoldbeforeDeletion; // Number of days old a file must be before it is deleted from the file share
    string nameofSASToken = data.nameofsasToken; // Name of the SAS token created through PowerShell prior to execution of this function
    string storageAccountSAS = storageAccount + "-" + nameofSASToken; // Format of the storage account SAS name
    string kvUri = "https://" + keyVaultName + ".vault.azure.net/"; // URI to the key vault
    var client = new SecretClient(new Uri(kvUri), new DefaultAzureCredential()); // Instantiate a SecretClient using the key vault URI
    var storageKey1 = await client.GetSecretAsync(storageAccountSAS); // Obtain the SAS token from the key vault
    string key = storageKey1.Value.Value; // Assign key the SIG value which is part of the SAS token
    key = key.Substring(1); // Trim the leading question mark from the key value since it is not part of the key
    string connectionString = "FileEndpoint=https://" + storageAccount + ".file.core.windows.net/;SharedAccessSignature=" + key; // Connection string used when creating a ShareClient
    ShareClient share = new ShareClient(connectionString, fileshare); // Instantiate a ShareClient used to manipulate the file share
    var folders = new List<Tuple<string, string>>(); // 2-tuple list that will collect the directories and file names removed from the share
    ShareDirectoryClient directory = share.GetDirectoryClient(folderPath); // Get a reference to the directory supplied in the POST
    Queue<ShareDirectoryClient> remaining = new Queue<ShareDirectoryClient>(); // Track the remaining directories to walk, starting from the folder path provided in the POST
    remaining.Enqueue(directory);
    while (remaining.Count > 0) // Keep scanning until all folders and files have been evaluated
    {
        ShareDirectoryClient dir = remaining.Dequeue(); // Get all of the next directory's files and subdirectories
        if (dir.GetFilesAndDirectoriesAsync() != null) // Make sure the folder path exists in the file share
        {
            //return new OkObjectResult("{\"childItems\":" + JsonConvert.SerializeObject(remaining.Count) + "}"); // Returns a count of remaining directories (left from debugging)
            await foreach (ShareFileItem item in dir.GetFilesAndDirectoriesAsync()) // For each directory and file
            {
                if (!(item.IsDirectory)) // Make sure the item is not a directory before executing the code below
                {
                    ShareFileClient fileClient = new ShareFileClient(connectionString, fileshare, dir.Path + "/" + item.Name); // Create the file client
                    if (fileClient.Exists())
                    {
                        ShareFileProperties properties = await fileClient.GetPropertiesAsync(); // Get the properties of the current file
                        DateTime convertedtime = properties.LastModified.DateTime; // Last modified date and time of the current file
                        DateTime date = DateTime.UtcNow; // Today's date and time
                        TimeSpan timeSpan = date.Subtract(convertedtime); // Subtract last modified date/time from today's date/time
                        int dayssincelastmodified = timeSpan.Days; // Number of days between the two dates
                        if (dayssincelastmodified > daysoldbeforeDeletion)
                        {
                            folders.Add(new Tuple<string, string>(item.Name, fileClient.Path)); // Add the directory name and file name to our 2-tuple list
                            fileClient.Delete(); // Delete the file from the share
                        }
                    }
                }
                if (item.IsDirectory) // Keep walking down directories
                {
                    remaining.Enqueue(dir.GetSubdirectoryClient(item.Name));
                }
            }
        }
    }
    return new OkObjectResult("{\"childItems\":" + JsonConvert.SerializeObject(folders) + "}"); // Returns a list of all files which were removed from the file share
}
I have written a function app using MS Visual Studio C# and published it to an Azure Function App. The app is very simple: it reads a directory and all subdirectories of a file share looking for files that have not been modified in the last 90 days, and deletes any it finds. This function works fine when reading a small set of directories and files, but when I run it against a directory with, say, 1000 or more files, the app crashes with a 503 error saying the service is not available and to check back later. I am using an App Service plan, Standard. I thought maybe it was timing out, but this type of plan is not supposed to prevent an app from running, no matter how long it runs. To be sure, I put the line "functionTimeout": "01:00:00" in my host.json file to make sure that was not the problem. I cannot find a single log entry that explains what is happening. Any ideas on how to debug this issue?
This problem is often caused by application-level issues, such as:
- requests taking a long time
- the application using high memory/CPU
- the application crashing due to an exception
It seems your function is taking longer than allowed to return an HTTP response. As mentioned in the documentation, 230 seconds is the maximum amount of time that an HTTP-triggered function can take to respond to a request, regardless of the function timeout setting. Please refer to this.
You can also control the overall timeout of the function by specifying the proper value for the functionTimeout parameter in the host.json file, which you have already set to 1 hour; note that this does not lift the 230-second limit on the HTTP response.
To debug this issue, refer to the azure-app-service-troubleshoot-http-502-http-503 MSFT documentation and follow the troubleshooting steps provided.
For longer processing times, use the Azure Durable Functions async HTTP pattern. Refer to this MS Doc.
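A minimal sketch of that pattern, assuming Durable Functions 2.x and a hypothetical orchestrator named "CleanupOrchestration": the HTTP trigger only starts the orchestration and returns 202 Accepted with status-polling URLs instead of holding the HTTP connection open.
[FunctionName("FileShareCleanup_HttpStart")]
public static async Task<IActionResult> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    // Kick off the long-running work and return immediately,
    // instead of blocking the HTTP response until the walk finishes
    string instanceId = await starter.StartNewAsync("CleanupOrchestration", null, requestBody);
    log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
    return starter.CreateCheckStatusResponse(req, instanceId);
}
The file-share walk then runs inside the orchestrator and activity functions, where the 230-second HTTP response limit no longer applies.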
I am using an Azure Function App.
I am using the CsvHelper package to create a file, but CsvHelper needs a local file path first to create/write the file.
using (var writer = new StreamWriter(filePath))
using (var csvData = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    // Write input in csv
    csvData.WriteRecords(input);
}
What path can I use to create file in Azure Function App?
Since it looks like you're using a StreamWriter, you could also write to a MemoryStream instead of creating an actual file. This feels like a better route to take with Azure Functions.
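For example, a minimal sketch of the MemoryStream route (assuming input is the same record collection as in the question, with the resulting bytes handed to whatever output you need):
byte[] csvBytes;
using (var memoryStream = new MemoryStream())
using (var writer = new StreamWriter(memoryStream))
using (var csvData = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    csvData.WriteRecords(input);
    csvData.Flush();  // push CsvHelper's buffer into the StreamWriter
    writer.Flush();   // push the StreamWriter's buffer into the MemoryStream
    csvBytes = memoryStream.ToArray(); // the finished CSV, no disk involved
}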
If you're really set on creating an actual file, you can do so by using System.IO.Path.GetTempPath(), which will always return a valid path for any given system. Create your temporary file there, then continue with the process.
Please take into account that your Function might run multiple times on the same environment, so be sure to use a unique filename.
For future reference:
private static void ExportContentToCsv<T>(ILogger log, IEnumerable<T> content)
{
    var path = Path.Combine(Path.GetTempPath(), "content.csv");
    log.LogInformation($"Writing csv file at {path}");
    if (File.Exists(path))
    {
        log.LogInformation("Deleting existent resources...");
        File.Delete(path);
    }
    using (var writer = new StreamWriter(path))
    {
        using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
        {
            csv.WriteRecords(content);
        }
    }
}
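Note that this example uses a fixed file name, content.csv; following the earlier advice about multiple executions on the same environment, you may want to make the name unique, for example by including a Guid in it.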
I am trying to upload a file and I get the exception "The specified resource name contains invalid characters".
The path I am using is @"C:\Test\Test.txt". When I change to relative addressing (i.e., @".\Test.txt") and have the file in the exe folder, it works.
What I need to know: is relative addressing the only option to upload a file to Azure File Storage from a .NET client? Is there a way to reference a file with a full path and upload it to File Storage?
Update: Based on the comments and the answer below, I realized my mistake: I was supplying the incoming file path to the GetFileReference method, where this should be the name of the new file in Azure; hence it contained the ':', which was invalid. The comments are right, I should have provided code; it may have been diagnosed more easily.
public static async Task WriteFileToStorage(string filePath)
{
    CloudFileShare fileShare = GetCloudFileShare();
    CloudFileDirectory fileDirectory = fileShare.GetRootDirectoryReference();
    // Bug: filePath is a full local path such as C:\Test\Test.txt, but this
    // argument should be the name of the file in Azure - ':' and '\' are invalid there
    CloudFile cloudFile = fileDirectory.GetFileReference(filePath);
    await cloudFile.UploadFromFileAsync(filePath);
}
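Based on that realization, a corrected sketch (hypothetical; it derives the Azure file name from the local path with Path.GetFileName):
public static async Task WriteFileToStorage(string filePath)
{
    CloudFileShare fileShare = GetCloudFileShare();
    CloudFileDirectory fileDirectory = fileShare.GetRootDirectoryReference();
    // Use only the file name (e.g. "Test.txt") as the Azure file name,
    // while the full local path is still the upload source
    CloudFile cloudFile = fileDirectory.GetFileReference(Path.GetFileName(filePath));
    await cloudFile.UploadFromFileAsync(filePath);
}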
The .NET client does support the full path when uploading to Azure File Storage.
You'd better provide the complete code you're using, including the file name/path locally and in Azure File Storage.
Here is the code I tested with, and it works (I'm using the package WindowsAzure.Storage, version 9.3.3):
static void Main(string[] args)
{
    CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account_name", "account_key"), true);
    CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
    CloudFileShare fileShare = fileClient.GetShareReference("test");
    CloudFileDirectory rootDir = fileShare.GetRootDirectoryReference();
    CloudFile myfile = rootDir.GetFileReference("mytest.txt");
    // The full file path on the local disk
    myfile.UploadFromFile(@"C:\Test\Test.txt");
}
I'm trying to convert a current application that uses NPOI to create xls documents on the server into an Azure-hosted application. I have little experience with NPOI and Azure, so two strikes right there. I have the app uploading the xls to a Blob container, however it is always blank (9 bytes). From what I understand, NPOI uses a FileStream to write to the file, so I just changed that to write to the blob container.
Here is what I think are the relevant portions:
internal void GenerateExcel(DataSet ds, int QuoteID, string ReportFileName)
{
    string ExcelFileName = string.Format("{0}_{1}.xls", ReportFileName, QuoteID);
    try
    {
        // These 2 strings will get deleted but are left here for now to run side by side
        string ReportDirectoryPath = HttpContext.Current.Server.MapPath(".") + "\\Reports";
        if (!Directory.Exists(ReportDirectoryPath))
        {
            Directory.CreateDirectory(ReportDirectoryPath);
        }
        string ExcelReportFullPath = ReportDirectoryPath + "\\" + ExcelFileName;
        if (File.Exists(ExcelReportFullPath))
        {
            File.Delete(ExcelReportFullPath);
        }
        // Create a new workbook.
        var workbook = new HSSFWorkbook();
        // Rest of the NPOI XLS rows, cells etc. - all works fine when writing to disk
        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        // Retrieve a reference to a container.
        CloudBlobContainer container = blobClient.GetContainerReference("pricingappreports");
        // Create the container if it doesn't already exist.
        if (container.CreateIfNotExists())
        {
            container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
        }
        // Retrieve reference to a blob with the same name.
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(ExcelFileName);
        // Write the output to a file on the server
        String file = ExcelReportFullPath;
        using (FileStream fs = new FileStream(file, FileMode.Create))
        {
            workbook.Write(fs);
            fs.Close();
        }
        // Write the output to a file on Azure Storage
        String Blobfile = ExcelFileName;
        using (FileStream fs = new FileStream(Blobfile, FileMode.Create))
        {
            workbook.Write(fs);
            blockBlob.UploadFromStream(fs);
            fs.Close();
        }
    }
I'm uploading to the blob and the file exists, so why doesn't the data get written to the xls?
Any help would be appreciated.
Update: I think I found the problem. It doesn't look like you can write to a file in Blob Storage in place. I found this blog, which pretty much answers my question; it doesn't use NPOI but the concept is the same: http://debugmode.net/2011/08/28/creating-and-updating-excel-file-in-windows-azure-web-role-using-open-xml-sdk/
Thanks
Can you install Fiddler and check the request and response packets? You may also need to seek back to 0 between the two writes. So the correct code here would add the seek shown below before trying to write the stream to the blob:
workbook.Write(fs);
fs.Seek(0, SeekOrigin.Begin);
blockBlob.UploadFromStream(fs);
fs.Close();
I also noticed that you are using String Blobfile = ExcelFileName instead of String Blobfile = ExcelReportFullPath.
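If you want to avoid the intermediate file on disk entirely, here is a minimal sketch (my assumption, reusing the workbook and blockBlob objects from the question) that writes the workbook into a MemoryStream and uploads that:
byte[] xlsBytes;
using (var ms = new MemoryStream())
{
    workbook.Write(ms); // NPOI writes the xls into memory instead of a FileStream
    xlsBytes = ms.ToArray(); // ToArray still works even if NPOI closed the stream
}
using (var uploadStream = new MemoryStream(xlsBytes))
{
    blockBlob.UploadFromStream(uploadStream); // position is already 0 here
}
Copying the bytes out with ToArray sidesteps the fact that, depending on the NPOI version, workbook.Write may close the stream it is given, which would make a subsequent Seek throw.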
I'm creating a Windows Store app to read and write files.
I gave permission for the Documents library, but I'm still getting this error:
App manifest declares document library access capability without specifying at least one file type association
Here is a snippet of my code:
private async void Button_Click_1(object sender, RoutedEventArgs e)
{
    String temp = Month.SelectedValue.ToString() + "/" + Day.SelectedValue.ToString() + "/" + Year.SelectedValue.ToString(); //((ComboBoxItem)Month.SelectedItem).Content.ToString();
    DateTime date = Convert.ToDateTime(temp);
    Windows.Storage.StorageFolder installedLocation = Windows.ApplicationModel.Package.Current.InstalledLocation;
    StorageFolder storageFolder = KnownFolders.DocumentsLibrary;
    StorageFile sampleFile = await storageFolder.CreateFileAsync("sample.txt");
    var buffer = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(temp, Windows.Security.Cryptography.BinaryStringEncoding.Utf8);
    await Windows.Storage.FileIO.WriteBufferAsync(sampleFile, buffer);
    buffer = await Windows.Storage.FileIO.ReadBufferAsync(sampleFile);
}
Any other, better approach is also acceptable, with two constraints:
1. I don't have access to SkyDrive.
2. I also don't want to use a file picker.
You need to specify a file type association as well.
From: http://msdn.microsoft.com/en-us/library/windows/apps/hh967755.aspx
Documents Library:
Note You must add File Type Associations to your app manifest that declare specific file types that your app can access in this location.
You can find it in the app manifest: when you check Documents Library under Capabilities, you also have to fill in at least one file type association under the Declarations tab.
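For illustration, a declaration for .txt files ends up in Package.appxmanifest roughly like this (a sketch; the exact element namespace prefix depends on the Windows version you target):
<Extensions>
  <Extension Category="windows.fileTypeAssociation">
    <FileTypeAssociation Name="text">
      <SupportedFileTypes>
        <FileType>.txt</FileType>
      </SupportedFileTypes>
    </FileTypeAssociation>
  </Extension>
</Extensions>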
Found exactly what I was looking for: the file type associations had to be added in the app manifest.
For those facing the same problem, check this link.