I have a simple .NET Core MVC web application that I deploy to Azure. In the application, I create a small text file called "test.txt" using File.CreateText(). This works fine on my local PC, but when I deploy it to Azure, I get a strange message: "Could not find file 'D:\home\site\wwwroot\wwwroot\test.txt'."
Indeed, the file does not exist; that's why I'm creating it. It also appears someone else on SO is having a similar problem: FileNotFoundException when using System.IO.Directory.CreateDirectory()
Do I not have write permissions? Why is Azure not letting me create the file?
Code (in Startup.cs):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    using (var sw = File.CreateText(env.WebRootPath + "/test.txt")) { }
}
I found the reason for this error shortly after posting this question, but forgot to post the answer:
It turns out that when you deploy to Azure, your site runs from a temporary directory that gets wiped and re-created on every deployment. Because of this, Azure disables the creation and editing of files in the app's directory, since they would just get wiped on your next deployment (it would be nice if the error message Azure displayed were a little more informative).
Therefore, the way to create and edit files on your Azure app and have them persist across deployments is to use one of the following:
Database BLOB storage (simply store your files as byte arrays in a database; costs nothing extra if you are already using a database)
Azure Blob Storage (store your files in Azure's dedicated object storage service; costs money; see the sketch below)
I read this on some site shortly after posting this question but I can't remember where, so I apologize for not having the source.
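For anyone who lands here later, a minimal sketch of the Azure Blob Storage option might look like the following. This assumes the Azure.Storage.Blobs NuGet package; the connection string and container name are placeholders, not values from the original question.

using Azure.Storage.Blobs;

// Write the content to a blob instead of the app directory so it
// survives redeployments. Replace the connection string and
// container name with your own values.
var container = new BlobContainerClient(
    "---your storage connection string---", "app-files");
container.CreateIfNotExists();

// Upload (or overwrite) a small text file as a blob.
var blob = container.GetBlobClient("test.txt");
blob.Upload(BinaryData.FromString("hello from Azure"), overwrite: true);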
The best way is to use Azure Blob Storage for this. But you can try saving to the Documents folder this way:
static void Main(string[] args)
{
    // Create a string with a line of text.
    string text = "First line" + Environment.NewLine;

    // Set a variable to the Documents path.
    string docPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

    // Write the text to a new file named "WriteFile.txt".
    File.WriteAllText(Path.Combine(docPath, "WriteFile.txt"), text);

    // Create a string array with the additional lines of text.
    string[] lines = { "New line 1", "New line 2" };

    // Append the new lines of text to the file.
    File.AppendAllLines(Path.Combine(docPath, "WriteFile.txt"), lines);
}
Reference: https://learn.microsoft.com/pt-br/dotnet/standard/io/how-to-write-text-to-a-file
I have a WebJob that I have set up to be triggered when a file is added to a directory:

[FileTrigger(@"<DIR>\<dir>\{name}", "*", WatcherChangeTypes.Created, autoDelete: true)] Stream file,
I have it configured:
var config = new JobHostConfiguration
{
    JobActivator = new NinjectActivator(kernel)
};

var filesConfig = new FilesConfiguration();
#if DEBUG
filesConfig.RootPath = @"C:\Temp\";
#endif

config.UseFiles(filesConfig);
config.UseCore();
The path is for working locally, and I was expecting that commenting out the FilesConfiguration setup (leaving it at its default) would allow it to pick up the connection string I have set up and trigger when files are added. This does not happen; it turns out that by default the RootPath is set to "D:\Home", which produces an InvalidOperationException:
System.InvalidOperationException : Path 'D:\home\data\<DIR>\<dir>' does not exist.
How do I get the trigger to point at the File storage area of the storage account I have set up for it? I have tried removing the FilesConfiguration completely from Program.cs in the hope that it would work against the settings, but it only produces the same exception:
System.InvalidOperationException : Path 'D:\home\data\\' does not exist.
When you publish to Azure, the default RootPath is D:\home\data; if the folder your trigger watches does not exist under it, the WebJob cannot find the path and you get this error message.
How do I get the trigger to point at the File storage area of the storage account I have set up for it.
The connection strings you have set serve two purposes: one is used for dashboard logging and the other for application functionality (queues, tables, blobs).
It seems that you cannot get FileTrigger working with Azure File storage.
So if you want your FileTrigger to fire when you create a new file, you can go to D:\home\data\ in Kudu, create the <DIR> folder, and then create a new .txt file inside it.
By the way, it seems you'd better not use autoDelete when creating files; if you do, you will get an error like:
NotSupportedException: Use of AutoDelete is not supported when using change type 'Changed'.
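For reference, a minimal sketch of a FileTrigger function that fires on files dropped under D:\home\data might look like this; the import folder name and function body are illustrative, not taken from the question:

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Fires when a new .txt file appears under <RootPath>\import
    // (D:\home\data\import on Azure by default).
    public static void OnFileCreated(
        [FileTrigger(@"import\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file,
        string name,
        TextWriter log)
    {
        log.WriteLine($"New file: {name}, {file.Length} bytes");
    }
}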
Problem:
I am trying to get an image out of an Azure file share for manipulation. I need to read the file as a System.Drawing.Image, but I cannot create a valid FileInfo object or Image using the UNC path (which I need to do in order to use it over IIS).
Current Setup:
Attach a virtual directory called Photos in the IIS website, pointing to the UNC path of the Azure file share (e.g. \\myshare.file.core.windows.net\sharename\pathtoimages).
This works as http://example.com/photos/img.jpg, so I know it is not a permissions or authentication issue.
For some reason, though, I cannot get a reference to the file.
var imgpath = Path.Combine(Server.MapPath("~/Photos"), "img.jpg");
// resolves as \\myshare.file.core.windows.net\sharename\pathtoimages\img.jpg
var fi = new FileInfo(imgpath);
if (fi.Exists) // this returns false 100% of the time
{
    var img = System.Drawing.Image.FromFile(fi.FullName);
}
The problem is that the file is never found to exist, even though I can take that same path, put it in an Explorer window, and it opens img.jpg 100% of the time.
Does anyone have any idea why this would not be working?
Do I need to be using CloudFileShare object to just get a read of a file I know is there?
It turns out the issue was that I needed to wrap my code in an impersonation of the Azure file share user, since the virtual directory is not really in play at all at this point.
using (new Impersonation("UserName", "azure", "azure pass"))
{
    // my IO.File code
}
I used this guy's impersonation script, found here.
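The link to the script is gone, but for anyone curious, such an impersonation helper on .NET Framework is usually a LogonUser P/Invoke wrapped in IDisposable. A rough sketch (not the original script) might look like this:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Security.Principal;

public sealed class Impersonation : IDisposable
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);

    private readonly WindowsImpersonationContext _context;
    private readonly IntPtr _token;

    public Impersonation(string user, string domain, string password)
    {
        // LOGON32_LOGON_NEW_CREDENTIALS (9) + LOGON32_PROVIDER_WINNT50 (3):
        // the combination typically used for remote shares such as Azure Files.
        if (!LogonUser(user, domain, password, 9, 3, out _token))
            throw new Win32Exception();
        _context = WindowsIdentity.Impersonate(_token);
    }

    public void Dispose()
    {
        _context.Undo();
        CloseHandle(_token);
    }
}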
I'm trying to work with Azure WebJobs. I understand the way it works, but I don't understand why I need to use two connection strings: one is for the queue that holds the messages, but why is there another one called "AzureWebJobsDashboard"?
What is its purpose?
And where do I get this connection string from?
At the moment I have one Web App and one WebJob in the same solution. I'm experimenting only locally (without publishing anything); the one thing I have up in the cloud is the storage account that holds the queue.
I even tried putting the same connection string in both places (AzureWebJobsDashboard, AzureWebJobsStorage), but it throws an exception:
"Cannot bind parameter 'log' when using this trigger."
Thank you.
There are two connection strings because the WebJobs SDK writes some logs in the storage account. They give you the possibility of having one storage account just for data (AzureWebJobsStorage) and another one for logs (AzureWebJobsDashboard). The two can be the same. You need two of them because you can have multiple job hosts using different data accounts but sending logs to the same dashboard.
The error you are getting is not related to the connection strings but to one of the functions in your code. One of them has a log parameter that is not of the right type. Can you share the code?
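For example, in the classic WebJobs SDK the log parameter of a triggered function binds as a TextWriter; a function shaped like this binds correctly (the queue name is illustrative):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // 'log' must be a TextWriter here; other types produce
    // "Cannot bind parameter 'log' when using this trigger."
    public static void ProcessQueueMessage(
        [QueueTrigger("myqueue")] string message,
        TextWriter log)
    {
        log.WriteLine(message);
    }
}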
Okay, anyone coming here looking for an actual answer of "where do I get the ConnectionString from"... here you go.
On the new Azure portal, you should have a Storage Account resource; mine starts with "portalvhds" followed by a bunch of alphanumerics. Click that to see a resource Dashboard on the right, followed immediately by a Settings window. Look for the Keys submenu under General and click that. The whole connection string is there (actually there are two, Primary and Secondary; I don't currently understand the difference, but let's go with Primary, shall we?).
Copy and paste that into your App.config file as the connectionString attribute of both the AzureWebJobsDashboard and AzureWebJobsStorage items. This presumes your environment has only one Storage Account, so you want that same storage used for data and logs.
I tried this, and at least the WebJob ran without throwing an error.
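The resulting App.config section would look roughly like this (account name and key are placeholders):

<connectionStrings>
  <add name="AzureWebJobsDashboard"
       connectionString="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
  <add name="AzureWebJobsStorage"
       connectionString="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
</connectionStrings>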
@RayHAz - Expanding upon your answer above (thanks)...
I tried this: https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started, but in .NET Core 2.1 I was getting exceptions about how it couldn't find the connection string.
Long story short, I ended up with the following, which worked for me:
appsettings.json, in a .NET Core 2.1 console app:
{
  "ConnectionStrings": {
    "AzureWebJobsStorage": "---your Azure storage connection string here---",
    "AzureWebJobsDashboard": "---the same connection string---"
  }
}
... and my Program.cs file...
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace YourWebJobConsoleAppProjectNamespaceHere
{
    public class Program
    {
        public static IConfiguration Configuration;

        static void Main(string[] args)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(Path.Combine(AppContext.BaseDirectory))
                .AddJsonFile("appsettings.json", true);

            Configuration = builder.Build();

            var azureWebJobsStorageConnectionString = Configuration.GetConnectionString("AzureWebJobsStorage");
            var azureWebJobsDashboardConnectionString = Configuration.GetConnectionString("AzureWebJobsDashboard");

            var config = new JobHostConfiguration
            {
                DashboardConnectionString = azureWebJobsDashboardConnectionString,
                StorageConnectionString = azureWebJobsStorageConnectionString
            };

            var loggerFactory = new LoggerFactory();
            config.LoggerFactory = loggerFactory.AddConsole();

            var host = new JobHost(config);
            host.RunAndBlock();
        }
    }
}
I use local storage on Windows Azure to store temporary files. In there, I call an .exe file to convert several other files in the same local storage folder. The problem is that I always get the exception "Access to the path XYZ.exe is denied.".
I should mention the following:
- I am using a worker role
- set in the service definition file
and tried to add permission to the folder I am accessing:
public static void AddPermission(string absoluteFolderPath)
{
    DirectoryInfo myDirectoryInfo = new DirectoryInfo(absoluteFolderPath);
    DirectorySecurity myDirectorySecurity = myDirectoryInfo.GetAccessControl();
    myDirectorySecurity.AddAccessRule(new FileSystemAccessRule(
        "NETWORK SERVICE",
        FileSystemRights.FullControl,
        AccessControlType.Allow));
    myDirectoryInfo.SetAccessControl(myDirectorySecurity);
}
UPDATE:
I tried with this code now:
public static void FixPermissions()
{
    var tempDirectory = RoleEnvironment.GetLocalResource("localStorage").RootPath;
    Helper.AddPermission(tempDirectory);
    var dir = new DirectoryInfo(tempDirectory);
    foreach (var d in dir.GetDirectories())
        Helper.AddPermission(d.FullName);
}

private static void AddPermission(string path)
{
    FileSystemAccessRule everyoneFileSystemAccessRule = new FileSystemAccessRule(
        "Everyone",
        FileSystemRights.FullControl,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None,
        AccessControlType.Allow);

    DirectoryInfo directoryInfo = new DirectoryInfo(path);
    DirectorySecurity directorySecurity = directoryInfo.GetAccessControl();
    directorySecurity.AddAccessRule(everyoneFileSystemAccessRule);
    directoryInfo.SetAccessControl(directorySecurity);
}
I get really strange behaviour: I still get the errors, but sometimes some of the files do get converted by the ffmpeg.exe file.
Can someone help me out here??
Thanks a lot.
SOLUTION:
So it seems the problem was that I was running the .exe file from within local storage, which caused the security issues. Putting the .exe into the application package and referencing it directly solved my issue.
Thx for your help.
By default your worker role will most likely not be running with sufficient privilege to allow changes to the access control lists on Azure folders.
There are two possible options:
Best: run a script at startup to set the permissions. Details are on MSDN here: http://msdn.microsoft.com/en-us/library/gg456327.aspx. You'll want to set executionContext="elevated".
The best way to write the script itself is through Powershell. An example is here: http://weblogs.thinktecture.com/cweyer/2011/01/fixing-windows-azure-sdk-13-full-iis-diagnostics-and-tracing-bug-with-a-startup-task-a-grain-of-salt.html. Alternatively, write a console application to do the same thing.
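For illustration, the service definition entry for such an elevated startup task looks roughly like this (the script name is hypothetical):

<Startup>
  <Task commandLine="FixPermissions.cmd" executionContext="elevated" taskType="simple" />
</Startup>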
Easiest, but much less secure: set the security in your OnStart method and run your whole worker role elevated; in your service definition file, include:
<WebRole name="WebApplication2">
  <Runtime executionContext="elevated" />
  <Sites>
However, I'd really not recommend that as it's a terrible security hole for something that's running in the public cloud.
I am trying to run a C executable on Azure. I have many worker roles, and they continuously check a job queue. If there is a job in the queue, a worker role runs an instance of the C executable as a process, according to the command line arguments stored in a job class. The C executable normally creates some log files. I do not know how to access those created files. What is the logic behind it? Where are the created files stored? Can anyone explain? I am new to Azure and C#.
One other problem is that all of the working instances of the C executable need to read a data file. How can I distribute that required file?
First, realize that in Windows Azure, your worker role is simply running inside a Windows 2008 Server environment (either SP2 or R2). When you deploy your app, you would deploy your C executable as well (or grab it from blob storage, but that's a bit more advanced). To find out where your app lives on disk, call Environment.GetEnvironmentVariable("RoleRoot") - that returns a path. You'd typically have your app sitting in a folder called AppRoot under the role root. You'd find your C executable there.
Next, you'll want your app to write its files to an output directory you specify on the command line. You can set up storage in your local VM via your role's properties: look at the Local Storage tab and configure a named local storage area.
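In ServiceDefinition.csdef, that configuration corresponds roughly to the following (the name and size are examples):

<LocalResources>
  <LocalStorage name="MyLocalStorage" sizeInMB="1024" cleanOnRoleRecycle="false" />
</LocalResources>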
Now you can get the path to that storage area, in code, and pass it as a command line argument:
var outputStorage = RoleEnvironment.GetLocalResource("MyLocalStorage");
var outputFile = Path.Combine(outputStorage.RootPath, "myoutput.txt");
var cmdline = String.Format("--output {0}", outputFile);
Here's an example of launching your myapp.exe process, with command line arguments:
var appRoot = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot")
    + @"\", @"approot");

var myProcess = new Process()
{
    StartInfo = new ProcessStartInfo(Path.Combine(appRoot, @"myapp.exe"), cmdline)
    {
        CreateNoWindow = false,
        UseShellExecute = false,
        WorkingDirectory = appRoot
    }
};

myProcess.Start(); // the process must be started before waiting on it
myProcess.WaitForExit();
Normally you'd set CreateNoWindow to true, but it's easier to debug if you can see the command shell window.
Last thing: once your app is done creating the file, you'll want to do one of the following:
Process it and delete it (it's not in a durable place so eventually it'll disappear)
Change your storage to use a Cloud Drive (durable storage)
Copy your file to a blob (durable storage)
In production, you'll want to add exception-handling, and you can re-route stdout and stderr to be captured. But this sample code should be enough to get you started.
OOPS - one more 'one more thing': When adding your 'myapp.exe' to your project, be SURE to go to its Properties, and set 'Copy to Output Directory' to 'Copy Always' - otherwise your myapp.exe file won't end up in Windows Azure and you'll wonder why things don't work.
EDIT: Pushing results to a blob - a quick example
First, set up a storage account and add it to your role's Settings. Say you called it 'AzureStorage'. Now set it up in code: get a reference to a blob container, get a reference to a blob within that container, and then upload the file to the blob:
CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting("AzureStorage");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer outputfiles = blobClient.GetContainerReference("outputfiles");
outputfiles.CreateIfNotExist();
var blobname = "myoutput.txt";
var blob = outputfiles.GetBlobReference(blobname);
blob.UploadFile(outputFile);
In Azure land you shouldn't write to the file system. You should write to SQL Azure, Table storage or, most likely in this case, Blob storage (basically, I think you should think of blob storage as the old file system).
This is because:
You could have multiple instances running and you will end up having different files on different instances (which are just virtual machines)
Your instance could potentially be moved at any moment and you would lose the info on the file system as it's not part of your deployment package.
Using one of the three storage options will provide a central repository for all of your instances to access and it will be persisted over a redeployment.