This question was asked a while ago. I'm hoping the answer is different today.
3 years ago - Azure storage files force download to browser
I am using Azure blob storage to save images (jpg) for a web site. I am linking directly to the files in my <img> tags and that is working great (I have enabled anonymous access). The problem is that if the user clicks on the image (which links directly to the file), they are forced to download it and cannot view it in the browser.
Is there a way to set the headers for the blob storage so the image can be viewed directly in the browser instead of forcing a download?
Update 1:
Based on this How can I view an image from Azure Blob Storage, rather than download it? and this https://social.msdn.microsoft.com/Forums/windowsapps/en-US/b8759195-f490-420b-a587-2bb614366ad2/embedding-images-from-blob-storage-in-ssrs-report-does-not-work
I found that I am not setting the content type, which is causing the problem. I need to set it to "image/jpeg". I'm not quite sure how to do that, however. This is the code I am using to store the image.
using Microsoft.Azure.Storage.Blob;
/// <summary>
/// Save a file to Azure blob storage.
/// </summary>
/// <param name="fileStream">File stream to upload</param>
/// <param name="fileName">Name of the file</param>
/// <param name="ct">Cancellation token</param>
public async Task<bool> SaveFile(Stream fileStream, string fileName, CancellationToken ct)
{
    CloudBlockBlob cloudBlockBlob = _blobContainer.GetBlockBlobReference(fileName);
    fileStream.Position = 0;
    await cloudBlockBlob.UploadFromStreamAsync(fileStream, ct);
    return true;
}
I have not found any type of ".Content", or "Type" property on this. Will keep digging.
Update 2: I may have found the solution:
cloudBlockBlob.Properties.ContentType = "image/jpeg";
Testing
Update 3: That did it. I'm using this to set the proper content types for images and PDFs, and they are now viewable in the browser.
if (fileName.EndsWith(".jpg"))
{
    cloudBlockBlob.Properties.ContentType = "image/jpeg";
}
else if (fileName.EndsWith(".pdf"))
{
    cloudBlockBlob.Properties.ContentType = "application/pdf";
}
See the question for details, but setting the content type can be done with:
cloudBlockBlob.Properties.ContentType = "image/jpeg";
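If you need to handle more than a couple of extensions, a small lookup table keeps the if/else chain from sprawling. Here is a minimal sketch; the MimeHelper name and the extension list are my own illustration, not from the original post:
using System;
using System.Collections.Generic;
using System.IO;

public static class MimeHelper
{
    // Hypothetical extension-to-MIME map; extend as needed.
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { ".jpg", "image/jpeg" },
            { ".jpeg", "image/jpeg" },
            { ".png", "image/png" },
            { ".gif", "image/gif" },
            { ".pdf", "application/pdf" },
        };

    public static string GetContentType(string fileName)
    {
        string ext = Path.GetExtension(fileName);
        // Fall back to the Azure default when the extension is unknown.
        return Map.TryGetValue(ext, out var type) ? type : "application/octet-stream";
    }
}
Usage is then a single line: cloudBlockBlob.Properties.ContentType = MimeHelper.GetContentType(fileName);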
Related
I'm currently having an issue with Azure File storage when I build a URL with a shared access signature (SAS) token. The file will download in the browser, but the content-type is always application/octet-stream rather than changing to match the MIME type of the file. If I put the file in Azure blob storage and build a URL with a SAS token, it sends the correct content-type for my file (image/jpeg).
I've upgraded my storage account from V1 to V2 thinking that was the problem, but it didn't fix it.
Does anyone have a clue what I could try that might get Azure File storage to return the correct content-type when using a URL with a SAS token to download the file?
So far these are the only fixes for the content-type that I've found:
Use the Microsoft Azure Storage Explorer to modify the content-type string by hand. You have to right-click the file and then left-click Properties to get the dialog to appear.
Programmatically modify the file using Microsoft's WindowsAzure.Storage NuGet package.
Surface file downloads via my own web site and not allow direct access.
For me, none of these are acceptable choices. The first two can lead to mistakes down the road if a user uploads a file via the portal or Microsoft Azure Storage Explorer and forgets to change the content type. I also don't want to write Azure Functions or web jobs to monitor and fix this problem.
Since blob storage does NOT have the same problems when uploading via Microsoft Azure Storage Explorer or via the portal, the cost is much lower, AND both work with SAS tokens, we are moving towards blob storage instead. We do lose the ability to mount the drive on our local computers and use something like Beyond Compare to do file comparisons, but that is a disadvantage we can live with.
If anyone has a better solution than the ones mentioned above that fixes this problem, I will gladly up-vote it. However, I think that Microsoft will have to make changes for this problem to be fixed.
When I upload a jpeg file to a file share through the portal, the content-type is indeed changed to application/octet-stream. But I can't reproduce your download problem.
I didn't specify a content-type in my SAS request URI, but the file just downloads as a jpeg file. I have tested with the SDK (account SAS, stored access policy, and SAS on the file itself) and the REST API; both work even without a content-type.
You can try to specify the content-type using the code below.
SharedAccessFileHeaders header = new SharedAccessFileHeaders()
{
    // "attachment" forces a download; use "inline" to let the browser render the file.
    ContentDisposition = "attachment",
    ContentType = "image/jpeg"
};
string sasToken = file.GetSharedAccessSignature(sharedPolicy, header);
Azure Blob Storage falls back to the default value of 'application/octet-stream' if nothing is provided. To get the correct MIME types, this is what I did in my Flask app:
from flask import request
from azure.storage.blob import ContentSettings

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        f = request.files['file']
        # The browser supplies the file's MIME type with the upload.
        mime_type = f.content_type
        print(mime_type)
        print(type(f))
        try:
            blob_service.create_blob_from_stream(
                container, f.filename, f,
                content_settings=ContentSettings(content_type=mime_type))
        except Exception as e:
            print(str(e))
mime_type is passed to ContentSettings so that files uploaded to the Azure blob keep their correct MIME types.
In Node.js:
blobService.createBlockBlobFromStream(container, blob, stream, streamLength, { contentSettings: { contentType: fileMimeType } }, callback)
where:
fileMimeType is the MIME type of the file being uploaded
callback is your callback implementation
Reference to method used:
https://learn.microsoft.com/en-us/javascript/api/azure-storage/azurestorage.services.blob.blobservice.blobservice?view=azure-node-latest#createblockblobfromstream-string--string--stream-readable--number--createblockblobrequestoptions--errororresult-blobresult--
Check this out - Microsoft SAS Examples
If you don't want to update the content-type of your file in Azure, or it's too much of a pain to update the content-type of all your existing files, you can also pass the desired content-type with the SAS token. The rsct param is where you specify the desired content-type.
e.g. - https://myaccount.file.core.windows.net/pictures/somefile.pdf?sv=2015-02-21&st=2015-07-01T08:49Z&se=2015-07-02T08:49Z&sr=c&sp=r&rscd=file;%20attachment&rsct=application%2Fpdf&sig=a39%2BYozJhGp6miujGymjRpN8tsrQfLo9Z3i8IRyIpnQ%3d
This works in Java using the com.microsoft.azure azure-storage library, uploading to a Shared Access Signature resource.
InputStream is = new FileInputStream(file);
CloudBlockBlob cloudBlockBlob = new CloudBlockBlob(new URI(sasUri));
// Set the content type before uploading so the blob is served with it.
cloudBlockBlob.getProperties().setContentType("application/pdf");
cloudBlockBlob.upload(is, file.length());
is.close();
For anyone looking to upload files with a correctly declared content type, the v12 client has changed how the content type is set. You can use the ShareFileHttpHeaders parameter of file.Create:
ShareFileClient file = directory.GetFileClient(fileName);
using FileStream stream = File.OpenRead(@"C:\Temp\Amanita_muscaria.jpg");
file.Create(stream.Length, new ShareFileHttpHeaders { ContentType = ContentType(fileName) });
file.UploadRange(new HttpRange(0, stream.Length), stream);
where ContentType(fileName) is an evaluation of the file name, e.g.:
if (fileName.EndsWith(".txt")) return "text/plain";
// etc
// here you define your file content type
CloudBlockBlob cloudBlockBlob = container.GetBlockBlobReference(file.FileName);
cloudBlockBlob.Properties.ContentType = file.ContentType; //content type
I know that I'm not answering the question directly, but I believe the answer is applicable. I had the same problem with a storage account that I needed to serve as a static website. Whenever I upload a blob to a container, the default type is "application/octet-stream", and because of this the index.html gets downloaded instead of being displayed.
To change the file type, do the following:
# Get the storage account for its context
$storageAccount = Get-AzStorageAccount -ResourceGroupName <Resource Group Name> -Name <Storage Account Name>
# Get the blobs inside the container of the storage account
$blobs = Get-AzStorageBlob -Context $storageAccount.Context -Container <Container Name>
foreach ($blob in $blobs) {
    $CloudBlockBlob = [Microsoft.Azure.Storage.Blob.CloudBlockBlob] $blob.ICloudBlob
    $CloudBlockBlob.Properties.ContentType = <Desired type as string>
    # Persist the new content type back to the service
    $CloudBlockBlob.SetProperties()
}
Note: for Azure File storage you might want to change the library to [Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob]
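If you would rather do the same bulk fix from C#, a sketch against the legacy Microsoft.Azure.Storage.Blob client might look like this (the container name, extension filter, and connection string are placeholders; the synchronous ListBlobs and SetProperties calls assume the full .NET Framework build of the SDK):
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("images");

foreach (IListBlobItem item in container.ListBlobs(null, useFlatBlobListing: true))
{
    if (item is CloudBlockBlob blob && blob.Name.EndsWith(".jpg"))
    {
        blob.Properties.ContentType = "image/jpeg";
        // Persist the new content type back to the service.
        blob.SetProperties();
    }
}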
I have not tried this, but ideally you could use ClientOptions to specify a different header. It would look something like this:
ClientOptions options = new ClientOptions();
HttpHeader httpHeaders = new HttpHeader("Content-Type", "application/pdf");
options.setHeaders(Collections.singleton(httpHeaders));
blobClient = new BlobClientBuilder()
    .endpoint(<SAS-URL>)
    .blobName("hello")
    .clientOptions(options)
    .buildClient();
This way we can provide our own MIME type as the content type:
with open(file.path,"rb") as data:
#blob_client.upload_blob(data)
mime_type =mimetypes.MimeTypes().guess_type(file.name)[0]
blob_client.upload_blob(data,content_type=mime_type)
print(f'{file.name}' " uploaded to blob storage")
Based on this answer: Twong answer
For example, if you are using the .NET (C#) API to proxy/generate a SAS URL from a ShareFileClient (ShareFileClient class description):
if (downloadClient.CanGenerateSasUri)
{
var sasBuilder = new ShareSasBuilder(ShareFileSasPermissions.Read, DateTimeOffset.Now.AddDays(10))
{
ContentType = "application/pdf",
ContentDisposition = "inline"
};
return downloadClient.GenerateSasUri(sasBuilder);
}
The example above sets up a 10-day token for a PDF file, which will then open in a new browser tab (especially on Apple iOS).
The solution in Java is to specify the content type when generating the signed image URL:
blobServiceSasSignatureValues.setContentType("image/jpeg");
I am using PCLStorage to store images in local storage and access them via the path returned.
It's working fine, but the problem is that whenever I start debugging the application again, the images are not accessible.
Where does it actually store images in local storage? Is it not a permanent location?
I want to store images in the file system, and the related data and image paths in SQLite. It's an offline application, so I need to store this data permanently.
Any suggestions would be helpful.
Thanks
Try the steps below.
The interface is fairly simple because we're really only interested in passing the byte array and a filename when we save the image to disk.
public interface IPicture
{
    void SavePictureToDisk(string filename, byte[] imageData);
}
DependencyService will delegate image saving to the appropriate class. The usage for the DependencyService is:
DependencyService.Get<IPicture>().SavePictureToDisk("ImageName", GetByteArray());
Create Classes in Platform-Specific Projects
iOS Project
[assembly: Xamarin.Forms.Dependency(typeof(Picture_iOS))]
namespace ImageSave.iOS
{
    public class Picture_iOS : IPicture
    {
        public void SavePictureToDisk(string filename, byte[] imageData)
        {
            var MyImage = new UIImage(NSData.FromArray(imageData));
            MyImage.SaveToPhotosAlbum((image, error) =>
            {
                // you can retrieve the saved UIImage as well if needed using
                // var i = image as UIImage;
                if (error != null)
                {
                    Console.WriteLine(error.ToString());
                }
            });
        }
    }
}
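For the Android project, a counterpart might look like the sketch below. This is my own illustration, not from the original answer; the class name, namespace, and target folder are assumptions:
using System.IO;

[assembly: Xamarin.Forms.Dependency(typeof(ImageSave.Droid.Picture_Droid))]
namespace ImageSave.Droid
{
    public class Picture_Droid : IPicture
    {
        public void SavePictureToDisk(string filename, byte[] imageData)
        {
            // Write the bytes into the app's external pictures directory.
            var dir = Android.App.Application.Context.GetExternalFilesDir(Android.OS.Environment.DirectoryPictures);
            var path = Path.Combine(dir.AbsolutePath, filename + ".jpg");
            File.WriteAllBytes(path, imageData);
        }
    }
}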
I got the answer to this question on the Xamarin forum, so I'm updating the link here in case it helps others.
https://forums.xamarin.com/discussion/comment/217282/#Comment_217282
As explained there, every time we redeploy the app the core path changes, which is why I could not find the images at the path where I had saved them after redeploying. So now I save only the partial path, like the folder name plus the image name, and find the rest of the core path at runtime, as in the sketch below.
This solved my problem.
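A minimal sketch of that idea (the folder and file names are mine, purely illustrative):
using System;
using System.IO;

// Persist only the relative path (e.g. in SQLite): "Images\profile1.jpg".
string relativePath = Path.Combine("Images", "profile1.jpg");

// Resolve the full path at runtime, since the app's root folder
// can change between deployments.
string root = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string fullPath = Path.Combine(root, relativePath);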
I want to know if there is a benefit to zipping files before sending them to Azure Blob Storage - strictly for transfer purposes. Put another way, will pre-zipping files make file transfers any faster when going to/from blob storage? Or does this automatically happen at the transport level by using gzip?
As of 12 August 2015, Azure blob storage (when served through the CDN) supports automatic gzip compression.
Compression method - Supported compression methods are gzip/deflate/bzip2; a supported method must be set in the Accept-Encoding request header.
Improve performance by compressing files
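To take advantage of this from a .NET client, the request needs to advertise gzip support. A minimal sketch (the CDN endpoint URL is a placeholder); AutomaticDecompression makes HttpClient send the Accept-Encoding header and transparently decompress the response:
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // AutomaticDecompression adds "Accept-Encoding: gzip, deflate"
        // and transparently decompresses the response body.
        var handler = new HttpClientHandler
        {
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        };
        using var client = new HttpClient(handler);
        // Placeholder CDN endpoint in front of blob storage.
        byte[] data = await client.GetByteArrayAsync("https://myendpoint.azureedge.net/images/photo1.jpg");
        Console.WriteLine($"Received {data.Length} bytes");
    }
}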
UPDATE
I'm unsure of what I originally did and how, but all I can think is that I was looking at the results incorrectly. Everything I can read about Azure (from MSDN to the code itself) now tells me that Azure does not support gzip for transfer purposes. I do not know under what circumstances I got the results below and am unable to reproduce them now. Needless to say, I'm very disappointed.
(THIS ANSWER IS INCORRECT, SEE THE UPDATE ABOVE) The answer is no, there is no benefit for transfer-speed purposes to zipping a file before sending it to blob storage. By turning on Fiddler, you can see that the transport level automatically gzips content across the wire; my Fiddler screenshots confirmed this.
Edit 1 - Quick Clarifications for Gaurav
The byte array that comes back in code has a length of 386803, but the network card only saw 23505 bytes go by, because it was gzipped by Azure in the response. I didn't have to do anything for that to happen.
Here is the code I'm using to initiate the request from blob storage:
public Byte[] Read(string containerName, string filename)
{
    CheckContainer(containerName);
    Initialize();
    // Retrieve reference to a previously created container.
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    // Retrieve reference to the requested blob.
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
    byte[] buffer;
    // Download the blob contents into a memory stream.
    using (var stream = new MemoryStream())
    {
        blockBlob.DownloadToStream(stream);
        stream.Seek(0, SeekOrigin.Begin);
        buffer = new byte[stream.Length];
        stream.Read(buffer, 0, (int)stream.Length);
    }
    return buffer;
}
I'm working on a personal project to manage users of my club, it's hosted on the free Azure package (for now at least), partly as an experiment to try out Azure. Part of creating their records is to add a photo, so I've got a Contact Card view that lets me see who they are, when they came and a photo.
I have installed ImageResizer and it's really easy to resize the 10 MP photos from my camera and save them to the local file system, but it seems that for Azure I need to use their blobs to Upload Pictures to Windows Azure Web Sites, and that's new to me. The documentation on ImageResizer says that I need to use AzureReader2 in order to work with Azure blobs, but it isn't free. It also says in their best practices (#5) to
Use dynamic resizing instead of pre-resizing your images.
That is not what I was thinking; I was going to resize to 300x300 and 75x75 (for the thumbnail) when creating the user's record. But if I should be storing full-size images as blobs and dynamically resizing on the way out, can I just use standard means to Upload a blob into a container to save it to Azure, and then, when I want to display the images, use ImageResizer and pass it each image to resize as required? That way I would not need AzureReader2 - or have I misunderstood what it does / how it works?
Is there another way to consider?
I've not yet implemented cropping, but that's next to tackle when I've worked out how to actually store the images properly
With some trepidation, I'm going to disagree with astaykov here. I believe you CAN use ImageResizer with Azure WITHOUT needing AzureReader2. Maybe I should qualify that by saying 'It works on my setup' :)
I'm using ImageResizer in an MVC 3 application. I have a standard Azure account with an images container.
Here's my test code for the view:
@using (Html.BeginForm("UploadPhoto", "BasicProfile", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <input type="file" name="file" />
    <input type="submit" value="OK" />
}
And here's the corresponding code in the Post Action method:
// This action handles the form POST and the upload
[HttpPost]
public ActionResult UploadPhoto(HttpPostedFileBase file)
{
    // Verify that the user selected a file
    if (file != null && file.ContentLength > 0)
    {
        string newGuid = Guid.NewGuid().ToString();
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        // Retrieve reference to a previously created container.
        CloudBlobContainer container = blobClient.GetContainerReference("images");
        // Retrieve reference to the blob we want to create
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(newGuid + ".jpg");
        // Populate our blob with contents from the uploaded file.
        using (var ms = new MemoryStream())
        {
            ImageResizer.ImageJob i = new ImageResizer.ImageJob(file.InputStream,
                ms, new ImageResizer.ResizeSettings("width=800;height=600;format=jpg;mode=max"));
            i.Build();
            blockBlob.Properties.ContentType = "image/jpeg";
            ms.Seek(0, SeekOrigin.Begin);
            blockBlob.UploadFromStream(ms);
        }
    }
    // redirect back to the index action to show the form once again
    return RedirectToAction("UploadPhoto");
}
This is 'rough and ready' code to test the theory and could certainly stand improvement, but it does work both locally and when deployed on Azure. I can also view the images I've uploaded, which are correctly resized.
Hope this helps someone.
The answer to the concrete question:
If using ImageResizer with Azure blobs do I need the AzureReader2
plugin?
is YES. As described in the ImageResizer documentation, that plugin is used to read/process/serve images out of Blob Storage. So there is no doubt: if you are going to use ImageResizer, AzureReader2 is the plugin you need to make things right. It will take care of blob uploads/serving.
Although I question the ImageResizer team's competency on Windows Azure, since they reference Azure SDK v2.0 while the most current version of the Azure SDK is 1.8. What they mean is the Azure Storage Client Library, which has versions 1.7 and 2.x, where version 2.x is the recommended one and comes with Azure SDK 1.8. So do not search for Azure SDK 2.0; install the latest one, which is 1.8. And by the way, use the NuGet Package Manager to install the Azure Storage Library v2.0.x.
You can also upload resized versions to Azure. So you first upload the original image as a blob, say with the name /original/xxx.jpg; then you create a resized version and upload it to Azure with a name like /thumbnail/xxx.jpg, as in the sketch below. If you want to create the resized versions on the fly or on a separate thread, you may need to temporarily save the original to disk.
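A rough sketch of that approach, reusing the ImageResizer.ImageJob pattern from the answer above (the blob name prefixes and thumbnail size are illustrative, not prescribed):
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

void UploadOriginalAndThumbnail(CloudBlobContainer container, Stream original, string name)
{
    // Upload the untouched original, e.g. original/xxx.jpg.
    original.Seek(0, SeekOrigin.Begin);
    container.GetBlockBlobReference("original/" + name).UploadFromStream(original);

    // Build and upload a 75x75 thumbnail, e.g. thumbnail/xxx.jpg.
    using (var ms = new MemoryStream())
    {
        original.Seek(0, SeekOrigin.Begin);
        new ImageResizer.ImageJob(original, ms,
            new ImageResizer.ResizeSettings("width=75;height=75;format=jpg;mode=max")).Build();
        ms.Seek(0, SeekOrigin.Begin);
        var thumb = container.GetBlockBlobReference("thumbnail/" + name);
        thumb.Properties.ContentType = "image/jpeg";
        thumb.UploadFromStream(ms);
    }
}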
I am asking this question specifically because I don't want to use the AzureDirectory project. I am just trying something on my own.
cloudStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=xxxx;AccountKey=xxxxx");
blobClient = cloudStorageAccount.CreateCloudBlobClient();

// Used to test connectivity.
IEnumerable<CloudBlobContainer> containers = blobClient.ListContainers();
if (containers != null)
{
    foreach (var item in containers)
    {
        Console.WriteLine(item.Uri);
    }
}

// State the file location of the index.
string indexLocation = containers.Last().Name.ToString();
Lucene.Net.Store.Directory dir = Lucene.Net.Store.FSDirectory.Open(indexLocation);

// Create an analyzer to process the text.
Lucene.Net.Analysis.Analyzer analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);

// Create the index writer with the directory and analyzer defined.
bool indexExists = Lucene.Net.Index.IndexReader.IndexExists(dir);
Lucene.Net.Index.IndexWriter indexWriter = new Lucene.Net.Index.IndexWriter(dir, analyzer, !indexExists, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);

// Create a document and add in the fields.
Lucene.Net.Documents.Document doc = new Lucene.Net.Documents.Document();
string path = "D:\\try.html";
TextReader reader = new FilterReader("D:\\try.html");
doc.Add(new Lucene.Net.Documents.Field("url", path, Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.NOT_ANALYZED));
doc.Add(new Lucene.Net.Documents.Field("content", reader.ReadToEnd().ToString(), Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.ANALYZED));
indexWriter.AddDocument(doc);
indexWriter.Optimize();
indexWriter.Commit();
indexWriter.Close();
Now the issue is that after the indexing completes, I am not able to see any files created inside the container. Can anybody help me out?
You're using FSDirectory there, which is going to write files to the local disk.
You're passing it the name of a container in blob storage. Blob storage is a service made available over a REST API and is not addressable directly from the file system; therefore FSDirectory is not going to be able to write your index to storage.
Your options are:
Mount a VHD disk on the machine, and store the VHD in blob storage. There are some instructions on how to do this here: http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/15/mount-a-page-blob-vhd-in-any-windows-azure-vm-outside-any-web-worker-or-vm-role.aspx
Use AzureDirectory, which you refer to in your question; see the sketch below. I have rebuilt AzureDirectory against the latest storage SDK: https://github.com/richorama/AzureDirectory
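A minimal sketch of option 2, assuming the AzureDirectory constructor takes a storage account and a container/catalog name (check the repository for the exact signature of the version you build):
using Lucene.Net.Store.Azure;
using Microsoft.WindowsAzure.Storage;

// Names here are illustrative; adjust to your setup.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
AzureDirectory azureDirectory = new AzureDirectory(account, "lucene-index");

// Hand the directory to the IndexWriter instead of an FSDirectory,
// so index segments are persisted to blob storage.
var indexWriter = new Lucene.Net.Index.IndexWriter(azureDirectory, analyzer, true,
    Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);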
Another alternative for people looking around - I wrote a directory that uses the Azure shared cache (preview), which can be an alternative to AzureDirectory (albeit for bounded search sets):
https://github.com/ajorkowski/AzureDataCacheDirectory