Can't upload to Google Cloud Storage - c#-4.0

Trying to upload a file to Google Cloud Storage using the Google APIs Client Library for .NET. Here is my code. It runs fine without any errors, but it doesn't do the job either. Please advise what I'm missing here, or whether this is even the right way to approach the problem. Appreciate it!
try
{
    Google.Apis.Services.BaseClientService.Initializer init = new Google.Apis.Services.BaseClientService.Initializer();
    init.ApiKey = "server-apps-API-KEY-HERE";
    init.ApplicationName = "Project Default Service Account";
    Google.Apis.Storage.v1.StorageService ss = new Google.Apis.Storage.v1.StorageService(init);
    Google.Apis.Storage.v1.Data.Object fileobj = new Google.Apis.Storage.v1.Data.Object();
    fileobj.ContentType = "image/jpeg";
    // Read the file (note: @ for a verbatim path string, not #)
    string file = @"C:\Photos\TEST.jpg";
    System.IO.Stream j = new System.IO.FileStream(file,
        System.IO.FileMode.Open,
        System.IO.FileAccess.Read);
    // New file (object) name
    fileobj.Name = "TEST.jpg";
    Google.Apis.Storage.v1.ObjectsResource.InsertMediaUpload insmedia;
    insmedia = new Google.Apis.Storage.v1.ObjectsResource.InsertMediaUpload(ss, fileobj, "test-common", j, "image/jpeg");
    insmedia.Upload();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.ReadLine();
}

You have not authorized access to your Google Cloud Storage anywhere in your code. See my answer here: Uploading objects to google cloud storage in c#, for an example of how to do it.
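For illustration, here is a minimal sketch of the same upload with credentials attached, assuming a reasonably recent Google.Apis.Auth and Google.Apis.Storage.v1 package; the service-account key path is hypothetical, and the bucket name "test-common" and application name are carried over from the question:

using System;
using System.IO;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Storage.v1;

class UploadSample
{
    static void Main()
    {
        // Load a service-account key and scope it for Cloud Storage read/write.
        GoogleCredential credential = GoogleCredential
            .FromFile(@"C:\keys\service-account.json")   // hypothetical key path
            .CreateScoped(StorageService.Scope.DevstorageReadWrite);

        var service = new StorageService(new BaseClientService.Initializer
        {
            HttpClientInitializer = credential,          // this is the authorization the original code lacks
            ApplicationName = "Project Default Service Account"
        });

        var obj = new Google.Apis.Storage.v1.Data.Object { Name = "TEST.jpg", ContentType = "image/jpeg" };

        using (var stream = File.OpenRead(@"C:\Photos\TEST.jpg"))
        {
            var insert = service.Objects.Insert(obj, "test-common", stream, "image/jpeg");
            var progress = insert.Upload();

            // Upload() reports failures through the progress object rather than throwing, so check it.
            if (progress.Exception != null)
                Console.WriteLine("Upload failed: " + progress.Exception.Message);
            else
                Console.WriteLine("Uploaded " + progress.BytesSent + " bytes.");
        }
    }
}

With an ApiKey alone the request is not authenticated, which is likely why the original code appears to run without errors yet the object never shows up in the bucket.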

Related

Duplicating File Uploading Process - Asp.net WebApi

I created a Web API which allows users to send files and upload them to Azure Storage. The way it works is that the client app connects to the API and sends one or more files to the file upload controller, and the controller takes care of the rest, such as:
Upload file to Azure storage
Update database
This works great, but I don't think it is the right way to do it, because there are now two different processes:
Upload the file from the client's file system to my Web API (server)
Upload the file from the API (server) to Azure Storage
It gives me the feeling that I am duplicating the upload process, since the same file travels from the client (file system) to the API (server) and then on to Azure (the destination). I would even need to show the client two progress bars (client to server, then server to Azure), which just doesn't make sense to me and makes me feel my approach is incorrect.
My API accepts files up to 250 MB, so you can imagine the overhead.
What do you guys think?
//// API Controller
if (!Request.Content.IsMimeMultipartContent("form-data"))
{
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
var provider = new RestrictiveMultipartMemoryStreamProvider();
var contents = await Request.Content.ReadAsMultipartAsync(provider);
int Total_Files = contents.Contents.Count();
foreach (HttpContent ctnt in contents.Contents)
{
    await storageManager.AddBlob(ctnt);
}
////// Stream
#region StreamHelper
public class RestrictiveMultipartMemoryStreamProvider : MultipartMemoryStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var extensions = new[] { "pdf", "doc", "docx", "cab", "zip" };
        var filename = headers.ContentDisposition.FileName.Replace("\"", string.Empty);
        if (filename.IndexOf('.') < 0)
            return Stream.Null;
        var extension = filename.Split('.').Last();
        return extensions.Any(i => i.Equals(extension, StringComparison.InvariantCultureIgnoreCase))
            ? base.GetStream(parent, headers)
            : Stream.Null;
    }
}
#endregion StreamHelper
///// AddBlob
public async Task<string> AddBlob(HttpContent _Payload)
{
    CloudStorageAccount cloudStorageAccount = KeyVault.AzureStorage.GetConnectionString();
    CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
    CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("SomeContainer");
    cloudBlobContainer.CreateIfNotExists();
    try
    {
        // Await instead of .Result to avoid blocking the request thread.
        byte[] fileContentBytes = await _Payload.ReadAsByteArrayAsync();
        CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference("SomeBlob");
        blob.Properties.ContentType = _Payload.Headers.ContentType.MediaType;
        await blob.UploadFromByteArrayAsync(fileContentBytes, 0, fileContentBytes.Length);
        var B = await blob.CreateSnapshotAsync();
        B.FetchAttributes();
        return "Snapshot ETAG: " + B.Properties.ETag.Replace("\"", "");
    }
    catch (Exception X)
    {
        return "Error: " + X.Message;
    }
}
It gives me the feeling that I am duplicating the upload process, since the same file travels from the client (file system) to the API (server) and then on to Azure (the destination).
I think you're correct. One possible solution would be to have your API generate a Shared Access Signature (SAS) token and return that SAS token/URI to the client whenever the client wishes to upload a file.
Using this SAS URI, the client can upload the file directly to Azure Storage without sending it to your API first. Once the file is uploaded successfully, the client can send a message to the API to update the database.
You can read more about SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1.
I have also written a blog post long time back on using SAS that you may find useful: https://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/.
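For illustration, here is a minimal sketch of generating a short-lived, write-only SAS URI for a single blob with the classic Microsoft.WindowsAzure.Storage library already used in the question; the container name, blob name and the KeyVault.AzureStorage.GetConnectionString() helper are taken from or modelled on the question and are otherwise hypothetical:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class SasTokenService
{
    // Returns a URI the client can upload to directly, bypassing the API.
    public string GetUploadUri(string containerName, string blobName)
    {
        CloudStorageAccount account = KeyVault.AzureStorage.GetConnectionString(); // as in the question
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference(containerName);
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        // Write-only permissions, valid for 30 minutes.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Create | SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
        };

        string sasToken = blob.GetSharedAccessSignature(policy);
        return blob.Uri + sasToken;   // e.g. https://account.blob.core.windows.net/container/file.pdf?sv=...
    }
}

The client then uploads straight to that URI (for example via new CloudBlockBlob(new Uri(sasUri)).UploadFromFileAsync(...) or a plain HTTP PUT with the x-ms-blob-type: BlockBlob header) and afterwards calls a lightweight API endpoint to record the upload in the database.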

NPOI writing XLS file converting to Azure Blob

I'm trying to convert a current application that uses NPOI to create xls documents on the server into an Azure-hosted application. I have little experience with NPOI and Azure, so two strikes right there. I have the app uploading the xls to a Blob container, however it is always blank (9 bytes). From what I understand, NPOI uses a FileStream to write the file, so I just changed that to write to the blob container.
Here is what I think are the relevant portions:
internal void GenerateExcel(DataSet ds, int QuoteID, string ReportFileName)
{
    string ExcelFileName = string.Format("{0}_{1}.xls", ReportFileName, QuoteID);
    try
    {
        //these 2 strings will get deleted but left here for now to run side by side at the moment
        string ReportDirectoryPath = HttpContext.Current.Server.MapPath(".") + "\\Reports";
        if (!Directory.Exists(ReportDirectoryPath))
        {
            Directory.CreateDirectory(ReportDirectoryPath);
        }
        string ExcelReportFullPath = ReportDirectoryPath + "\\" + ExcelFileName;
        if (File.Exists(ExcelReportFullPath))
        {
            File.Delete(ExcelReportFullPath);
        }
        // Create a new workbook.
        var workbook = new HSSFWorkbook();
        //Rest of the NPOI XLS rows cells etc. etc. all works fine when writing to disk////////////////
        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        // Retrieve a reference to a container.
        CloudBlobContainer container = blobClient.GetContainerReference("pricingappreports");
        // Create the container if it doesn't already exist.
        if (container.CreateIfNotExists())
        {
            container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
        }
        // Retrieve reference to a blob with the same name.
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(ExcelFileName);
        // Write the output to a file on the server
        String file = ExcelReportFullPath;
        using (FileStream fs = new FileStream(file, FileMode.Create))
        {
            workbook.Write(fs);
            fs.Close();
        }
        // Write the output to a file on Azure Storage
        String Blobfile = ExcelFileName;
        using (FileStream fs = new FileStream(Blobfile, FileMode.Create))
        {
            workbook.Write(fs);
            blockBlob.UploadFromStream(fs);
            fs.Close();
        }
    }
I'm uploading to the Blob and the file exists, so why doesn't the data get written to the xls?
Any help would be appreciated.
Update: I think I found the problem. It doesn't look like you can write to a file in Blob Storage directly. I found this blog post which pretty much answers my question; it doesn't use NPOI, but the concept is the same: http://debugmode.net/2011/08/28/creating-and-updating-excel-file-in-windows-azure-web-role-using-open-xml-sdk/
Thanks
Can you install Fiddler and check the request and response packets? You may also need to seek back to position 0 between the two operations, so the fix here could be to rewind the stream before writing it to the blob:
workbook.Write(fs);
fs.Seek(0, SeekOrigin.Begin);
blockBlob.UploadFromStream(fs);
fs.Close();
I also noticed that you are using String Blobfile = ExcelFileName instead of String Blobfile = ExcelReportFullPath.
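If writing a temporary file on the server is undesirable, here is a minimal sketch of the same idea using an in-memory buffer instead, reusing workbook, container and ExcelFileName from the code in the question (some NPOI versions close the stream inside Write(), which is why the bytes are grabbed with ToArray() afterwards):

// Build the workbook entirely in memory, then push the bytes to the blob.
byte[] workbookBytes;
using (var ms = new MemoryStream())
{
    workbook.Write(ms);            // NPOI writes the .xls content into the MemoryStream
    workbookBytes = ms.ToArray();  // ToArray() works even if Write() closed the stream
}

CloudBlockBlob blockBlob = container.GetBlockBlobReference(ExcelFileName);
blockBlob.Properties.ContentType = "application/vnd.ms-excel";
blockBlob.UploadFromByteArray(workbookBytes, 0, workbookBytes.Length);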

Can't access Azure Storage folder using Lucene.Net

We decided to implement search functionality in our API, which is developed with ServiceStack, and we chose Lucene.Net since we heard it was a great indexer.
We created a worker role whose job is to create the indexes in an Azure Storage container, following Leon Cullen's tutorial. We use the AzureDirectory library specified in that post, so we could use the latest Azure SDK.
Then in our API project we added references to Lucene.Net and AzureDirectory as well; our endpoint ended up looking like this:
public object Post(SearchIndex request)
{
    List<Product> products = new List<Product>();
    var pageSize = -1;
    var totalpages = -1;
    int.TryParse(ConfigurationManager.AppSettings["PageSize"], out pageSize);
    if (request.Page.Equals(0))
    {
        request.Page = 1;
    }
    // Get Azure settings
    AzureDirectory azureDirectory;
    try
    {
        // This is the line where we get the Access denied exception thrown at us
        azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
        IndexSearcher searcher;
        using (new AutoStopWatch("Creating searcher"))
        {
            searcher = new IndexSearcher(azureDirectory);
        }
        using (new AutoStopWatch(string.Format("Search for {0}", request.SearchString)))
        {
            string[] searchfields = new string[] { "Id", "Name", "Description" };
            var hits = searcher.Search(QueryMaker(request.SearchString, searchfields), request.Page * pageSize);
            int count = hits.ScoreDocs.Count();
            float temp_totalpages = 0;
            temp_totalpages = (float)hits.ScoreDocs.Count() / (float)pageSize;
            if (temp_totalpages > (int)temp_totalpages)
            {
                totalpages = (int)temp_totalpages + 1;
            }
            else
            {
                totalpages = (int)temp_totalpages;
            }
            foreach (ScoreDoc match in hits.ScoreDocs)
            {
                Document doc = searcher.Doc(match.Doc);
                int producId = int.Parse(doc.Get("Id"));
                Product product = Db.Select<Product>("Id={0}", producId).FirstOrDefault();
                products.Add(product);
            }
        }
        return new SearchIndexResult { result = products.Skip((int)((request.Page - 1) * 10)).Take(pageSize).ToList(), PageSize = pageSize, TotalPages = totalpages };
    }
    catch (Exception e)
    {
        return new HttpResult(HttpStatusCode.NoContent, "azureDirectory. Parameter: " + request.SearchString + ". e: " + e.Message);
    }
}
If we run this locally it works as expected, returning the results we were expecting. But when we published our API to Azure and tried to hit the search endpoint, we received a 403 error with the message 'Access to the path "D:/AzureDirectory" is denied'.
We're confused as to why it is trying to access that folder at all; the path looks local, and we don't understand why it works fine locally but stops working once deployed to Azure.
The worker role runs without problems; it's the API side that cannot access the folder in Azure Storage. Are we missing an important configuration step? The tutorial we followed wasn't very clear for beginners with Lucene.Net or Azure Storage, so we fear we may have skipped something. We've checked our connection strings and everything seems OK, though.
For reference: https://github.com/azure-contrib/AzureDirectory/blob/master/AzureDirectory/AzureDirectory.cs
When you do this:
azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
it executes:
var cachePath = Path.Combine(Path.GetPathRoot(Environment.SystemDirectory), "AzureDirectory");
var azureDir = new DirectoryInfo(cachePath);
if (!azureDir.Exists)
    azureDir.Create();
var catalogPath = Path.Combine(cachePath, _containerName);
var catalogDir = new DirectoryInfo(catalogPath);
if (!catalogDir.Exists)
    catalogDir.Create();
_cacheDirectory = FSDirectory.Open(catalogPath);
That is where "D:\AzureDirectory" comes from: the default local cache lives off the system drive root, which your Azure web app cannot write to. So a simple solution might be to keep that cache directory under the site root instead:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info), true);
I got it to work.
I just got the latest version of AzureDirectory from GitHub.
Got the latest NuGet packages for Azure Storage etc.
Recreated the index.
In addition to @brykneval's answer: I tried his solution, but the last parameter (bool compressBlob = false), which he set to true, made my local debugging fail with a 404 exception from the AzureDirectory library, and when I published to an Azure web app it threw System.IO.InvalidDataException: Block length does not match with its complement.
I removed the last parameter from the constructor and everything works like a charm. Hope this helps someone.

wkhtmltopdf fails on Azure Website

I'm using the https://github.com/codaxy/wkhtmltopdf wrapper to create a PDF from a web page on my website (I pass in an absolute URL, e.g. http://mywebsite.azurewebsites.net/PageToRender.aspx). It works fine in dev and on another shared hosting account, but when I deploy to an Azure website it fails and all I get is a ThreadAbortException.
Is it possible to use wkhtmltopdf on Azure, and if so, what am I doing wrong?
UPDATE:
This simple example using just Process.Start also doesn't work: it hangs when run on Azure but works fine on other servers.
string exePath = System.Web.HttpContext.Current.Server.MapPath("\\App_Data\\PdfGenerator\\wkhtmltopdf.exe");
string htmlPath = System.Web.HttpContext.Current.Server.MapPath("\\App_Data\\PdfGenerator\\Test.html");
string pdfPath = System.Web.HttpContext.Current.Server.MapPath("\\App_Data\\PdfGenerator\\Test.pdf");
StringBuilder error = new StringBuilder();
using (var process = new Process())
{
    using (Stream fs = new FileStream(pdfPath, FileMode.Create))
    {
        process.StartInfo.FileName = exePath;
        process.StartInfo.Arguments = string.Format("{0} -", htmlPath);
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.StartInfo.UseShellExecute = false;
        process.Start();
        while (!process.HasExited)
        {
            process.StandardOutput.BaseStream.CopyTo(fs);
        }
        process.WaitForExit();
    }
}
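One detail in the snippet above may be worth ruling out regardless of the hosting environment: both StandardOutput and StandardError are redirected, but only stdout is ever read, so the child process can block once the stderr pipe buffer fills. Here is a minimal sketch of the same invocation (same paths as above, which are the question's own) that drains stderr asynchronously:

using (var process = new Process())
using (var fs = new FileStream(pdfPath, FileMode.Create))
{
    process.StartInfo.FileName = exePath;
    process.StartInfo.Arguments = string.Format("{0} -", htmlPath);  // "-" writes the PDF to stdout
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.UseShellExecute = false;

    var error = new StringBuilder();
    process.ErrorDataReceived += (s, e) => { if (e.Data != null) error.AppendLine(e.Data); };

    process.Start();
    process.BeginErrorReadLine();                   // drain stderr so the child never blocks on it
    process.StandardOutput.BaseStream.CopyTo(fs);   // copy the PDF bytes from stdout until EOF
    process.WaitForExit();

    if (process.ExitCode != 0)
        throw new Exception("wkhtmltopdf failed: " + error);
}

This does not by itself explain an Azure-specific hang, but it removes one common cause of Process.Start deadlocks from the picture.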
Check out this SO question regarding a similar issue. This guy seems to have gotten it to work. RotativaPDF is built on top of wkhtmltopdf hence the connection. I am in the process of trying it myself on our Azure site - I will post in the near future with my results.
Azure and Rotativa PDF print for ASP.NET MVC3

FtpWebRequest + Windows Azure = not working?

Is it possible to download data on Windows Azure via FtpWebRequest (ASP.NET/C#)?
I am doing this currently and am not sure whether my problem is that FtpWebRequest in general does not work as expected, or whether I have a different failure.
Has somebody done this before?
If you're talking about Windows Azure Storage, then definitely not. FTP is not supported.
If you're working with Compute roles, you could write something to support this, but it's DIY, a la:
http://blog.maartenballiauw.be/post/2010/03/15/Using-FTP-to-access-Windows-Azure-Blob-Storage.aspx
I was able to solve my problem by doing the FTP request with FTPLib.
This means you can copy/load files to Azure or to an external source!
:-)
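For illustration, here is a minimal sketch of the kind of relay described above, assuming plain FtpWebRequest works from a compute role and using the classic Microsoft.WindowsAzure.Storage blob client; the FTP host, credentials, connection string, container and file names are all hypothetical:

using System;
using System.IO;
using System.Net;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class FtpToBlobRelay
{
    static void Main()
    {
        // 1. Download a file from an external FTP server.
        var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/files/report.zip");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential("anonymous", "name@email.com");

        // 2. Copy it into Azure Blob Storage.
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("incoming");
        container.CreateIfNotExists();
        CloudBlockBlob blob = container.GetBlockBlobReference("report.zip");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (Stream ftpStream = response.GetResponseStream())
        using (var buffer = new MemoryStream())
        {
            ftpStream.CopyTo(buffer);   // buffer in memory; fine for modest file sizes
            buffer.Position = 0;
            blob.UploadFromStream(buffer);
        }
    }
}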
I made this work with AlexFTPS as well; you just need to add StartKeepAlive:
try
{
    string fileName = Path.GetFileName(this.UrlString);
    Uri uri = new Uri(this.UrlString);
    string descFilePath = Path.Combine(this.DestDir, fileName);
    using (FTPSClient client = new FTPSClient())
    {
        // Connect to the server, with mandatory SSL/TLS
        // encryption during authentication and
        // optional encryption on the data channel
        // (directory lists, file transfers)
        client.Connect(uri.Host,
                       new NetworkCredential("anonymous", "name#email.com"),
                       ESSLSupportMode.ClearText);
        client.StartKeepAlive();
        // Download a file
        client.GetFile(uri.PathAndQuery, descFilePath);
        client.StopKeepAlive();
        client.Close();
    }
}
catch (Exception ex)
{
    throw new Exception("Failed to download", ex);
}
