Generating a URL from Amazon S3 (streaming files from Amazon S3) - c#-4.0

I have a problem trying to download files from Amazon S3. I have files stored on Amazon S3, and users need to be authenticated to access them. I'm trying to find a way to stream files without downloading each file from Amazon onto my server and then from my server to the end client. I just want to be able to stream the file directly by generating a URL. Can you suggest some ideas?

I know this is an old question but I have a solution for this exact scenario.
Basically, you want to present a file that is stored on Amazon S3 to the user's browser in such a way that it forces a download rather than opening in the browser window. Typically, if you store the file locally on the server then this is simple to do. But you don't want to have to first download the file to your server from S3 just to send it over to the client, so...
You'll need the Amazon S3 SDK installed, which you can get from NuGet here: https://www.nuget.org/packages/AWSSDK.S3/
Also, make sure you're referencing these namespaces:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
Here's the code that I use to force download a remote file on Amazon S3:
byte[] buffer = new byte[4096];

// Describe the object you want to stream back to the client.
GetObjectRequest getObjRequest = new GetObjectRequest
{
    BucketName = "**yourbucketname**",
    Key = "**yourobjectkey**"
};

IAmazonS3 client = new AmazonS3Client("**youraccesskeyid**", "**yoursecretkey**", RegionEndpoint.EUWest2); //< set your own region

using (GetObjectResponse response = client.GetObject(getObjRequest))
using (Stream stream = response.ResponseStream)
{
    int bytesRead;

    // Tell the browser to download the file rather than try to render it.
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + Path.GetFileName(response.Key));
    Response.AppendHeader("Content-Length", response.ContentLength.ToString());
    Response.ContentType = "application/force-download";

    // Copy the S3 response stream to the client in 4 KB chunks, reusing the same buffer.
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0 && Response.IsClientConnected)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.OutputStream.Flush();
    }
}
As always, I'm sure there are better ways of accomplishing this, but this code works for me.
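For what the original question actually asks (handing the client a URL it can stream from directly, without proxying the bytes through your server), the same SDK can also generate a time-limited pre-signed URL. Here's a minimal sketch reusing the client from above, with the same placeholder bucket/key; adjust the expiry to suit:
GetPreSignedUrlRequest urlRequest = new GetPreSignedUrlRequest
{
    BucketName = "**yourbucketname**",
    Key = "**yourobjectkey**",
    Expires = DateTime.UtcNow.AddMinutes(15) // the link stops working after this
};

// Optional: make the browser download the file instead of displaying it inline.
urlRequest.ResponseHeaderOverrides.ContentDisposition = "attachment; filename=**yourfilename**";

// Redirect the authenticated user (or point a <video>/<audio>/<a> tag) to this URL;
// the browser then streams straight from S3.
string presignedUrl = client.GetPreSignedURL(urlRequest);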

AWS provides an SDK for .NET which will let you download (and upload) files.
For example here: http://ceph.com/docs/master/radosgw/s3/csharp/
A quick Google search should give you the answer. If there is something specific you are unable to do, please explain your question a little more.

Related

Does Azure Blob Storage support partial content (206) by default?

I am using Azure Blob Storage to store all my images and videos. I have implemented the upload and fetch functionality and it's working quite well. I am facing one issue while loading videos: when I use the URL that is generated after uploading a video to Azure Blob Storage, it downloads all the content before rendering it to the user. So if the video size is 100 MB, it downloads the full 100 MB, and until then the user won't be able to see the video.
I have done a lot of R&D and learned that to render the video I need to fetch partial content (status 206) rather than the whole video at once. After adding the request header "Range:bytes-500", I tried to hit the blob URL, but it still downloaded the whole content. So I checked some open-source video URLs and hit them with the same "Range" request header, and they returned a 206 response status, meaning they properly gave me partial content instead of the full video.
I read some forums saying that Azure Storage supports partial content and that it needs to be enabled from the properties, but I have checked all the options under the Azure storage account and didn't find anything to enable this functionality.
Can anyone please help me resolve this, or point out anything in the Azure portal that I need to enable? I have been doing R&D on this for a week now. Any help would be really appreciated.
Thank you! Stay safe.
If Accept-Ranges is not enabled, then (according to this blog) you need to set the default version of the service.
Below is sample code to implement it.
var credentials = new StorageCredentials("account name", "account key");
var account = new CloudStorageAccount(credentials, true);
var client = account.CreateCloudBlobClient();

// Read the current service properties, bump the default version, and write them back.
var properties = client.GetServiceProperties();
properties.DefaultServiceVersion = "2019-07-07";
client.SetServiceProperties(properties);
Comparing the response headers before and after setting the property, the Accept-Ranges header is only present after the change (the before/after header screenshots are omitted).
Assuming the video content is MPEG-4, the issue may be the media itself: the moov atom needs to be moved from the end of the file to the beginning. The browser won't render the video until it finds the moov atom, so you want to make sure the atom is at the start of the file, which can be accomplished with ffmpeg's "faststart" option.
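For reference, the remux (no re-encoding) with ffmpeg would look something like this, where input.mp4 and output.mp4 are placeholder file names:
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4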
You just need to update your Azure Storage default service version. It will work automatically after the update.
Using Azure CLI
Just run:
az storage account blob-service-properties update --default-service-version 2021-08-06 -n yourStorageAccountName -g yourStorageResourceGroupName
List of available versions:
https://learn.microsoft.com/en-us/rest/api/storageservices/previous-azure-storage-service-versions
To see your current version, fetch a file and inspect the x-ms-version response header.
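As a rough sketch of that check in C# (the account, container and blob names are placeholders, and the blob is assumed to be readable by the caller):
var request = new HttpRequestMessage(HttpMethod.Head,
    "https://youraccount.blob.core.windows.net/yourcontainer/yourvideo.mp4");
using var http = new HttpClient();
var response = await http.SendAsync(request);

// The service reports the version it used to serve this request.
if (response.Headers.TryGetValues("x-ms-version", out var versions))
    Console.WriteLine(string.Join(",", versions));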
The following is the SDK code I used to download the content:
var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
await container.CreateIfNotExistsAsync();
BlobClient blobClient = container.GetBlobClient(fileName);
Stream stream = new MemoryStream();
var result = await blobClient.DownloadToAsync(stream, cancellationToken: ct);
which DOES download the whole file right away! Unfortunately, the solution provided in the other answers seems to reference another SDK. So for the SDK that I use (Azure.Storage.Blobs), the solution is to use the OpenReadAsync method:
long kBytesToReadAtOnce = 300;
long bytesToReadAtOnce = kBytesToReadAtOnce * 1024;

// Stream the blob on demand instead of downloading it all at once.
var result = await blobClient.OpenReadAsync(0, bufferSize: (int)bytesToReadAtOnce, cancellationToken: ct);
By default it fetches 4 MB of data, so you have to override the value with a smaller amount if you want your app to have a smaller memory footprint.
I think that internally the SDK sends the requests with the byte range already set, so all you have to do is enable partial content support in Web API like this:
return new FileStreamResult(result, contentType)
{
EnableRangeProcessing = true,
};
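Putting those two pieces together, a controller action could look roughly like the sketch below; the route, container name, and content type are assumptions, not part of the original answer:
[HttpGet("video/{fileName}")]
public async Task<IActionResult> GetVideo(string fileName, CancellationToken ct)
{
    var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
    BlobClient blobClient = container.GetBlobClient(fileName);

    // OpenReadAsync returns a stream that fetches ranges from the blob on demand.
    var stream = await blobClient.OpenReadAsync(0, bufferSize: 300 * 1024, cancellationToken: ct);

    return new FileStreamResult(stream, "video/mp4")
    {
        EnableRangeProcessing = true // lets the browser issue Range requests and get 206 responses
    };
}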

How to Download a File (from URL) in TypeScript

Update: This question used to ask about Google Cloud Storage, but I have since realized the issue is actually reproducible merely by trying to save the download to local disk. Thus, I am rephrasing the question to be entirely about file downloads in TypeScript and to no longer mention Google Cloud Storage.
When attempting to download and save a file in TypeScript with WebRequest (though I experienced the same issue with request and request-promise), all the code seems to execute correctly, but the resulting file is corrupted and cannot be viewed. For example, if I download an image, the file is not viewable in any application.
// Seems to work correctly
const download = await WebRequest.get(imageUrl);
// `Buffer.from()` also takes an `encoding` parameter, but it's unclear how to determine the encoding of a download
const imageBuffer = Buffer.from(download.content);
// I *think* this line is straightforward
const imageByteArray = new Uint8Array(imageBuffer);
// Saves a corrupted file
fs.writeFileSync("/path/to/file.png", imageByteArray);
I suspect the issue lies within the Buffer.from call not correctly interpreting the downloaded content, but I'm not sure how to do it right. Any help would be greatly appreciated.
Thanks so much!
From what I saw in the examples for web-request, download.content is just a string. If you want to upload a string to Cloud Storage using the Node SDK, you can use File.save, passing that string directly.
Alternatively, you could use one of the solutions seen here.
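If the goal is simply to save the download to local disk without corrupting it, the key is to keep the response as raw bytes rather than a decoded string. A minimal TypeScript sketch (assuming Node 18+ for the global fetch; the URL and path are placeholders):
import { writeFileSync } from "fs";

async function downloadToFile(url: string, destPath: string): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Download failed with status ${res.status}`);

  // arrayBuffer() preserves the raw bytes; decoding to a string is what corrupts binary files.
  const bytes = Buffer.from(await res.arrayBuffer());
  writeFileSync(destPath, bytes);
}

downloadToFile("https://example.com/image.png", "/path/to/file.png").catch(console.error);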

How to store files in Firebase using Node.js

I have a small assignment where I will have a URL to a document or a file, like a Google Drive link or Dropbox link.
I have to use this link to store that file or doc in Firebase using Node.js. How should I start?
A little heads-up might help. What should I use? Please help, I'm stuck here.
The documentation for using the Admin SDK is mostly covered in the GCP documentation.
Here's a snippet of code that shows how you could upload an image directly to Cloud Storage if you have a URL for it. Any public link works, whether it's shared from Dropbox or somewhere else on the internet.
Edit 2020-06-01: The option to upload directly from a URL was dropped in v2.0 of the SDK (4 September 2018): https://github.com/googleapis/nodejs-storage/releases/tag/v2.0.0 (a two-step workaround is sketched below the example).
const fileUrl = 'https://www.dropbox.com/some/file/download/link.jpg';
const opts = {
  destination: 'path/to/file.jpg',
  metadata: {
    contentType: 'image/jpeg'
  }
};
firebase.storage().bucket().upload(fileUrl, opts);
This example uses the default bucket in your application, and the opts object provides file upload options for the API call.
destination is the path that your file will be uploaded to in Google Cloud Storage
metadata should describe the file that you're uploading (see more examples here)
contentType is the file MIME type that you are uploading
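Given the removal noted in the edit above, a rough two-step workaround is to download the bytes yourself and then save them with the Admin SDK. This is only a sketch: it assumes Node 18+ for the global fetch, that a default bucket is configured in initializeApp, and placeholder URL/destination values:
const admin = require('firebase-admin');
admin.initializeApp(); // assumes credentials and a default storageBucket are configured

async function storeFromUrl(fileUrl, destination) {
  // 1. Download the file into memory.
  const res = await fetch(fileUrl);
  const buffer = Buffer.from(await res.arrayBuffer());

  // 2. Save the bytes to the default bucket.
  await admin.storage().bucket().file(destination).save(buffer, {
    contentType: res.headers.get('content-type') || 'application/octet-stream'
  });
}

storeFromUrl('https://www.dropbox.com/some/file/download/link.jpg', 'path/to/file.jpg');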

How do we use AWS/S3?

I need some simple example to get started with AWS/S3 usage.
Here is the situation: an iOS app of mine has been transferred from Parse.com to Parse Server / Heroku.
All is working fine, but I will at some point need file storage for images or sound files.
I have already followed this and configured an S3Adapter.
My problem now is : "How to use it?"
I would like to find some sample code using this S3Adapter that I just configured to save something and retrieve it.
If you have already configured S3 in your Parse Server and provided all the relevant details like the bucket, keys, etc., the next thing is to test it and check that Parse really stores your files on S3 and not in GridStore (which is the default).
In order to test it please go through the following steps:
Open your index.js file, which is located under the root folder of your Parse Server project, and check that your files adapter is S3. It should look something like this (from the Parse Server wiki):
var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  ...
  filesAdapter: new S3Adapter(
    "S3_ACCESS_KEY",
    "S3_SECRET_KEY",
    "S3_BUCKET", {
      directAccess: true
    }
  ),
  ...
});
Next you need to save a file from your iOS client. You need to create a new PFFile and just call the saveInBackground method in order to save the file. Before saving the file, parse-server will check whether you provided a custom files adapter; if you did, it will try to use it, and if not it will fall back to the default (GridStore on MongoDB). So your iOS code should look like the following:
Objective-C
NSData *imageData = UIImagePNGRepresentation(image);
PFFile *imageFile = [PFFile fileWithName:@"image.png"
                                    data:imageData];
[imageFile saveInBackground];
Swift
let imageData = UIImagePNGRepresentation(image)!
let imageFile = PFFile(name: "image.png", data: imageData)
imageFile.saveInBackground()
After the file has been saved, you can go to your bucket in AWS and check that the file has been added there.
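For the "retrieve it" half of the question, reading the file back is just the reverse. A small Swift sketch, assuming you have fetched the PFObject the file was attached to; the column name imageFile is a placeholder:
if let imageFile = object["imageFile"] as? PFFile {
    imageFile.getDataInBackground { (data, error) in
        if let data = data, error == nil {
            let image = UIImage(data: data)
            // use the image
        }
    }
}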
Hope it helps. If you need more info please let me know.

Does Azure Blob Storage use gzip across the wire?

I want to know if there is a benefit to zipping files before sending them to Azure Blob Storage - strictly for transfer purposes. Put another way, will pre-zipping files make file transfers any faster when going to/from blob storage? Or does this automatically happen at the transport level by using gzip?
As of 12 August 2015, Azure Blob Storage (when served through the Azure CDN) now supports automatic gzip compression.
Compression method - Supported compression methods are gzip/deflate/bzip2; a supported method must be set in the Accept-Encoding request header.
Improve performance by compressing files
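Note that the CDN only compresses the response when the client actually asks for it. With HttpClient, for example, enabling automatic decompression adds the Accept-Encoding header for you (the endpoint URL below is a placeholder):
var handler = new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
using var http = new HttpClient(handler);

// The request goes out with "Accept-Encoding: gzip, deflate" and the response
// is transparently decompressed on the way back.
byte[] bytes = await http.GetByteArrayAsync("https://yourendpoint.azureedge.net/container/file.json");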
UPDATE
I'm unsure of what I originally did or how, but all I can think is that I was looking at the results incorrectly. Everything I can read about Azure (from MSDN to the code itself) now tells me that Azure does not support gzip for transfer purposes. I do not know under what circumstances I was able to get the following results, and I am unable to reproduce them now. Needless to say, I'm very disappointed.
(THIS ANSWER IS INCORRECT, SEE THE UPDATE ABOVE) The answer is no; there is no transfer-speed benefit to zipping a file before sending it to Blob Storage. By turning on Fiddler, you can see that the transport level automatically gzips content across the wire (the screenshots that appeared to confirm this are omitted).
Edit 1 - Quick Clarifications for Gaurav
The byte array that comes back in code has a length of 386803, but the network card only saw 23505 bytes go by, because it was gzipped by Azure in the response. I didn't have to do anything for that to happen.
Here is the code I'm using to initiate the request to Blob Storage:
public Byte[] Read(string containerName, string filename)
{
    CheckContainer(containerName);
    Initialize();

    // Retrieve a reference to a previously created container.
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);

    // Retrieve a reference to the requested blob.
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);

    byte[] buffer;

    // Download the blob contents into an in-memory buffer.
    using (var stream = new MemoryStream())
    {
        blockBlob.DownloadToStream(stream);
        stream.Seek(0, SeekOrigin.Begin);
        buffer = new byte[stream.Length];
        stream.Read(buffer, 0, (int)stream.Length);
    }
    return buffer;
}
