In C#, using StreamWriter to write to Azure, can I use the URI as the path?

I am using the NuGet package CsvHelper to write a DataTable to a new CSV file. It works fine with the path of a physical location, e.g. C:/temp/mydrop box, but I need it to stream-write to an Azure Storage container.
When the program gets to this line:
StreamWriter sw = new StreamWriter(strFilePath, false);
I get the following error:
The filename, directory name, or volume label syntax is incorrect. : 'C:\Users\myuser\source\repos\sample\bin\Debug\net6.0\https://mystorage.blob.core.windows.net/myContainer-processedfiles/myfle.csv''
I am passing in https://mystorage.blob.core.windows.net/myContainer-processedfiles/myfle.csv
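A StreamWriter path must be a local file system path; a blob URL is not rooted, so it is treated as a relative file name and the local bin folder is prepended, which is the error shown above. A minimal sketch, assuming the Azure.Storage.Blobs v12 SDK, of writing the CsvHelper output to a MemoryStream and uploading that as a blob instead (the connection string, container name, blob name, and dataTable variable are illustrative):
using System.Data;
using System.Globalization;
using System.IO;
using Azure.Storage.Blobs;
using CsvHelper;

// dataTable is the DataTable being exported; connectionString is the storage connection string
var containerClient = new BlobContainerClient(connectionString, "mycontainer-processedfiles");
var blobClient = containerClient.GetBlobClient("myfile.csv");

using (var stream = new MemoryStream())
{
    using (var writer = new StreamWriter(stream, leaveOpen: true))
    using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
    {
        foreach (DataColumn column in dataTable.Columns)      // header row
            csv.WriteField(column.ColumnName);
        csv.NextRecord();

        foreach (DataRow row in dataTable.Rows)               // data rows
        {
            foreach (DataColumn column in dataTable.Columns)
                csv.WriteField(row[column]);
            csv.NextRecord();
        }
    }

    stream.Position = 0;
    blobClient.Upload(stream, overwrite: true);               // replaces the blob if it already exists
}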

Related

NullPointerException downloading PDF file from Azure App Service, Spring Boot, Java 8

I have a Spring Boot application with Java 8 in which I can create and download a PDF file with the iText PDF library. In my local and development environments this works fine, but when I deploy the application to my Azure App Service I can't download the file; I get a NullPointerException.
2020-11-24T13:04:12.310887019Z: [INFO] ****Downloading PDF file: /home/site/descargas/
2020-11-24T13:04:13.814155878Z: [INFO] *** ERROR : java.lang.NullPointerException
2020-11-24T13:04:13.815304521Z: [ERROR] com.itextpdf.text.DocumentException: java.lang.NullPointerException
2020-11-24T13:04:13.816149053Z: [ERROR] at com.itextpdf.text.pdf.PdfDocument.add(PdfDocument.java:821)
2020-11-24T13:04:13.816580670Z: [ERROR] at com.itextpdf.text.Document.add(Document.java:277)
And my code is:
document = new Document();
FileOutputStream coursesFile = new FileOutputStream(DIRECTORIO_TEMP + "cursos.pdf");
PdfWriter writer = PdfWriter.getInstance(document, coursesFile);
PDFEvent eventoPDF = new PDFEvent();
writer.setPageEvent(eventoPDF);
document.open();
// document margins
document.setMargins(45, 45, 80, 40);

PdfPTable table = new PdfPTable(new float[] {15, 35, 50, 10});
table.setWidthPercentage(100);
table.getDefaultCell().setPaddingBottom(20);
table.getDefaultCell().setPaddingTop(20);

Stream.of("ID", "Área", "Nombre", "Horas").forEach(columnTitle -> {
    PdfPCell header = new PdfPCell();
    header.setBackgroundColor(COLOR_INFRA);
    header.setBorderWidth(1);
    header.setHorizontalAlignment(Element.ALIGN_CENTER);
    header.setPhrase(new Phrase(columnTitle, fuenteCabecera));
    table.addCell(header);
});

for (Curso curso : cursos) {
    table.setHorizontalAlignment(Element.ALIGN_CENTER);
    table.addCell(new Phrase(curso.getCodigoCurso(), fuenteNegrita));
    table.addCell(new Phrase(curso.getAreaBean().getNombre_area(), fuenteNormal));
    table.addCell(new Phrase(curso.getNombreCurso(), fuenteNormal));
    PdfPCell celdaHoras = new PdfPCell(new Phrase(curso.getHoras() + "", fuenteNormal));
    celdaHoras.setHorizontalAlignment(Element.ALIGN_CENTER);
    table.addCell(celdaHoras);
}

document.add(new Paragraph(Chunk.NEWLINE));
document.add(new Paragraph(Chunk.NEWLINE));
document.add(table);
document.close();
coursesFile.close();
The file permissions in my Azure App Service are:
Newest answer, for Linux:
I don't know what method you used to deploy the web app. However, the descargas folder will not be created automatically by any of them.
Whichever method you use to deploy the web app, it is recommended that you log in to Kudu and check whether the descargas folder, and the files under it, exist. Uploading via FTP is not recommended.
In addition, there is no concept of a D:\ drive or virtual applications on Linux. It is recommended to use relative paths in the code when reading files.
Previous answer, for Windows:
This error occurs because the access path is wrong. I don't know whether your code uses a relative path or an absolute path.
So my advice is:
Use absolute paths to access files.
I have tested this and it works for me, both locally and on Azure. The file path looks like D:\home\site\myfiles\a.pdf.
Use virtual directories to access files.
You can also use a virtual directory to access your file. The path you access in the browser looks like https://www.aa.azurewebsites.net/myfiles/a.pdf, and you can also check it in Kudu, e.g. D:\home\site\myfiles\a.pdf.
For more details, you can refer to the official doc: https://learn.microsoft.com/en-us/azure/app-service/configure-common#windows-apps-uncontainerized

Copy CSS file from one subdirectory to another in a blob container

Scenario:
I copy a .css file from one subdirectory to another in an Azure Storage container. This is done from C# code in my application. It is the CSS stylesheet for my website. Unfortunately, I get an error in my browser console when the page loads:
Error
Resource interpreted as Stylesheet but transferred with MIME type application/octet-stream:
"SOME_PATH/template/css/styles.css?d=00000000-0000-0000-0000-000000000000".
Knowledge:
I know this happens because my file is sent as application/octet-stream instead of text/css. What can I do to tell Azure to treat this file as text/css?
Edit: My code
string newFileName = fileToCopy.Name;
StorageFile newFile = cmsDirectory.GetStorageFileReference(newFileName);
using (var stream = new MemoryStream())
{
    fileToCopy.DownloadToStream(stream);
    stream.Seek(0, SeekOrigin.Begin);
    newFile.UploadFromStream(stream);
}
where DownloadToStream and UploadFromStream are methods in my class:
CloudBlob.DownloadToStream(target);
and
CloudBlob.UploadFromStream(target);
CloudBlob is of type CloudBlockBlob.
You can set the content type of a blob via its ContentType property; see:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.blob.blobproperties.contenttype
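For example, a minimal sketch of setting the content type right after the copy, assuming direct access to the underlying CloudBlockBlob (the cssBlob variable name is hypothetical):
// cssBlob is the CloudBlockBlob pointing at the copied stylesheet (hypothetical reference)
cssBlob.Properties.ContentType = "text/css";
cssBlob.SetProperties();   // persists the new content type to the blob service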
Download AzCopy - http://aka.ms/azcopy
If you specify /SetContentType without a value, AzCopy sets each blob or file's content type according to its file extension.
Run this command on Windows
AzCopy /Source:C:\myfolder\ /Dest:https://myaccount.blob.core.windows.net/myContainer/ /DestKey:key /Pattern:ab /SetContentType
More details: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy?toc=%2fazure%2fstorage%2fblobs%2ftoc.json
You can also use Microsoft Azure Storage Explorer to modify the content type by hand for an already existing file: right-click the blob file in Storage Explorer, click Properties, and scroll down to change the content type.

How to get FileTrigger to work with Azure file storage in Webjob

I have a webjob that I have set up to be triggered when a file is added to a directory:
[FileTrigger(@"<DIR>\<dir>\{name}", "*", WatcherChangeTypes.Created, autoDelete: true)] Stream file,
I have it configured:
var config = new JobHostConfiguration
{
    JobActivator = new NinjectActivator(kernel)
};

var filesConfig = new FilesConfiguration();
#if DEBUG
filesConfig.RootPath = @"C:\Temp\";
#endif

config.UseFiles(filesConfig);
config.UseCore();
The path is for working locally, and I was expecting that commenting out the FilesConfiguration object, leaving it at its default, would allow it to pick up the connection string I have set up and trigger when files are added. This does not happen; it turns out that by default the RootPath is set to "D:\Home", which produces an InvalidOperationException:
System.InvalidOperationException : Path 'D:\home\data\<DIR>\<dir>' does not exist.
How do I get the trigger to point at the File storage area of the storage account I have set up for it? I have tried removing the FilesConfiguration completely from Program.cs in the hope that it would work against the settings, but it only produces the same exception.
System.InvalidOperationException : Path 'D:\home\data\\' does not exist.
When you publish to Azure, the default directory is D:\home\data, so when the WebJob runs it cannot find the path and you get the error message.
How do I get the trigger to point at the File storage area of the storage account I have set up for it.
The connection string you have set has two uses: one is for dashboard logging and the other is for application functionality (queues, tables, blobs).
It seems that you cannot get FileTrigger working with Azure File storage.
So, if you want your FileTrigger to fire when you create a new file, you can go to D:\home\data\ in Kudu, create a DIR folder, and then create a new .txt file in it.
By the way, it seems you'd better not use autoDelete when creating files; if you do, you will get an error like:
NotSupportedException: Use of AutoDelete is not supported when using change type 'Changed'.
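For reference, a minimal sketch of a WebJob function using the Files extension (the import folder name and *.txt filter are illustrative); on Azure the path resolves under the default RootPath, D:\home\data:
public static void ProcessFile(
    [FileTrigger(@"import\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file,
    string name,
    TextWriter log)
{
    // Runs when a matching file appears under <RootPath>\import
    log.WriteLine("Processing file: " + name);
}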

Automating App Deployment in Azure with LocalResource

I'm currently attempting to automate the deployment of an application to an Azure worker role by pulling a file into the role from blob storage and working with it via a batch script, also located in blob storage. I'm using OnStart to accomplish this. Here's a reduced version of my OnStart method:
Getting ready to pull the files down:
public override bool OnStart()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
    container.CreateIfNotExist();
    CloudBlob file = container.GetBlobReference("file.bat");
Actually getting the files into the role:
    LocalResource localResource = RoleEnvironment.GetLocalResource("localStore");
    string filePath = System.IO.Path.Combine(localResource.RootPath, "file.bat");

    using (var fileStream = System.IO.File.OpenWrite(filePath))
    {
        file.DownloadToStream(fileStream);
    }
This is how I get the batch file and the dependencies into the role. My problem now is - originally, I built the batch file with the assumption that the other files would be dropped right on C:\. For example - C:\installer.exe, C:\archive.zip, etc. But now the files are in localStorage.
I'm thinking I can either A) somehow tell the batch file where local storage is by dynamically writing the script in OnStart, or B) change local storage to use C:\.
I'm not sure how to do either, or what the best thing to do here would be. Thoughts?
I would not change the local storage to use C:\ (how would you do this anyway?). Take a look at Steve's blog post: Using a Local Storage Resource From a Startup Task. He explains how you can get a LocalResource using PowerShell (and even call that script from a batch file).
And why not use the Windows Azure Bootstrapper? This is a little tool that can help you with the configuration of your role without having to write any code: you simply call it from a startup task and it can download files (also from blob storage, as you're doing), work with local resources, and more.
bootstrapper.exe -get http://download.microsoft.com/download/F/3/1/F31EF055-3C46-4E35-AB7B-3261A303A3B6/AspNetMVC3ToolsUpdateSetup.exe -lr $lr(temp) -run $lr(temp)\AspNetMVC3ToolsUpdateSetup.exe -args /q
Note: Instead of using absolute references in your batch file, make it use relative paths using %~dp0
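A minimal sketch of option A, building on the OnStart snippet from the question (variable names come from that snippet; the argument-passing convention is an assumption): launch file.bat and hand it the local resource path as its first argument, so the script can use %1 instead of a hard-coded C:\.
string batchPath = System.IO.Path.Combine(localResource.RootPath, "file.bat");

var psi = new System.Diagnostics.ProcessStartInfo("cmd.exe",
    String.Format("/c \"{0}\" \"{1}\"", batchPath, localResource.RootPath))
{
    UseShellExecute = false,
    WorkingDirectory = localResource.RootPath
};

using (var proc = System.Diagnostics.Process.Start(psi))
{
    proc.WaitForExit();   // inside file.bat the local resource path is available as %1
}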

Running native code on Azure

I am trying to run a C executable on Azure. I have many worker roles and they continuously check a job queue. If there is a job in the queue, a worker role runs an instance of the C executable as a process, according to the command-line arguments stored in a job class. The C executable normally creates some log files. I do not know how to access those created files. What is the logic behind it? Where are the created files stored? Can anyone explain? I am new to Azure and C#.
One other problem is that all of the working instances of the C executable need to read a data file. How can I distribute that required file?
First, realize that in Windows Azure, your worker role is simply running inside a Windows 2008 Server environment (either SP2 or R2). When you deploy your app, you would deploy your C executable as well (or grab it from blob storage, but that's a bit more advanced). To find out where your app lives on disk, call Environment.GetEnvironmentVariable("RoleRoot") - that returns a path. You'd typically have your app sitting in a folder called AppRoot under the role root. You'd find your C executable there.
Next, you'll want your app to write its files to an output directory you specify on the command line. You can set up storage in your local VM with your role's properties. Look at the Local Storage tab, and configure a named local storage area:
Now you can get the path to that storage area, in code, and pass it as a command line argument:
var outputStorage = RoleEnvironment.GetLocalResource("MyLocalStorage");
var outputFile = Path.Combine(outputStorage.RootPath, "myoutput.txt");
var cmdline = String.Format("--output {0}", outputFile);
Here's an example of launching your myapp.exe process, with command line arguments:
var appRoot = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot")
    + @"\", @"approot");

var myProcess = new Process()
{
    StartInfo = new ProcessStartInfo(Path.Combine(appRoot, @"myapp.exe"), cmdline)
    {
        CreateNoWindow = false,
        UseShellExecute = false,
        WorkingDirectory = appRoot
    }
};
myProcess.Start();   // launch the process before waiting on it
myProcess.WaitForExit();
Normally you'd set CreateNoWindow to true, but it's easier to debug if you can see the command shell window.
Last thing: once your app is done creating the file, you'll want to do one of the following:
Process it and delete it (it's not in a durable place so eventually it'll disappear)
Change your storage to use a Cloud Drive (durable storage)
Copy your file to a blob (durable storage)
In production, you'll want to add exception-handling, and you can re-route stdout and stderr to be captured. But this sample code should be enough to get you started.
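For example, a minimal sketch of the stdout/stderr capture mentioned above, extending the myProcess object from the earlier snippet (routing the output to Trace is an illustrative choice):
myProcess.StartInfo.RedirectStandardOutput = true;
myProcess.StartInfo.RedirectStandardError = true;
myProcess.OutputDataReceived += (s, e) => { if (e.Data != null) Trace.TraceInformation(e.Data); };
myProcess.ErrorDataReceived += (s, e) => { if (e.Data != null) Trace.TraceError(e.Data); };

myProcess.Start();
myProcess.BeginOutputReadLine();   // stream the C executable's console output as it arrives
myProcess.BeginErrorReadLine();
myProcess.WaitForExit();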
OOPS - one more 'one more thing': When adding your 'myapp.exe' to your project, be SURE to go to its Properties, and set 'Copy to Output Directory' to 'Copy Always' - otherwise your myapp.exe file won't end up in Windows Azure and you'll wonder why things don't work.
EDIT: Pushing results to a blob - a quick example
First, set up a storage account and add it to your role's Settings. Say you called it 'AzureStorage'. Now set it up in code, get a reference to a blob container, get a reference to a blob within that container, and then upload the output file to the blob:
CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting("AzureStorage");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer outputfiles = blobClient.GetContainerReference("outputfiles");
outputfiles.CreateIfNotExist();
var blobname = "myoutput.txt";
var blob = outputfiles.GetBlobReference(blobname);
blob.UploadFile(outputFile);
In Azure land you shouldn't write to the file system. You should write to SQL Azure, Table Storage, or, most likely in this case, blob storage (basically, I think you should treat blob storage as the old file system).
This is because:
You could have multiple instances running and you will end up having different files on different instances (which are just virtual machines)
Your instance could potentially be moved at any moment and you would lose the info on the file system as it's not part of your deployment package.
Using one of the three storage options will provide a central repository for all of your instances to access and it will be persisted over a redeployment.
