Amazon Glacier storage upload API method

I am trying to upload a zip file to a vault in Amazon Glacier cloud storage. The upload method executes successfully without any exception and also returns the archive ID of the upload, but I can't see any files in the archives.
string vaultName = "test";
string archiveToUpload = @"E:\Mayur\Downloads\Zip files\search.zip";
ArchiveTransferManager manager = new ArchiveTransferManager(Amazon.RegionEndpoint.USWest2);
string archiveId = manager.Upload(vaultName, "Binari archive", archiveToUpload).ArchiveId;
I am not getting any exception; the method successfully returns an archive ID.
Please help me find out what the actual issue is.
Thanks,
Mayur

This is simply because the vault inventory is updated approximately once a day. A newly uploaded archive will not show up in the vault's archive list until the next inventory refresh, even though the upload succeeded and returned a valid archive ID.
source: http://aws.amazon.com/glacier/faqs/
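If you want to confirm the archive without waiting for the daily inventory refresh, you can kick off an inventory-retrieval job yourself. Below is a rough sketch with the low-level client, assuming the AWS SDK for .NET v3 on .NET Framework (where synchronous calls are available); note the job itself can still take several hours to complete.
// Rough sketch: request a fresh vault inventory with the low-level Glacier client.
// Assumes the AWS SDK for .NET v3 on .NET Framework (synchronous methods available).
var client = new Amazon.Glacier.AmazonGlacierClient(Amazon.RegionEndpoint.USWest2);
var jobResponse = client.InitiateJob(new Amazon.Glacier.Model.InitiateJobRequest
{
    VaultName = "test",
    JobParameters = new Amazon.Glacier.Model.JobParameters { Type = "inventory-retrieval" }
});
// Poll DescribeJob with this id and fetch the result with GetJobOutput once it completes.
Console.WriteLine(jobResponse.JobId);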

Related

How to invoke API which is returning zip file and store the zip to local

I have a use case where I need to invoke a GET API. It returns a zip file.
I need to store the zip on my local disk (for example the Desktop). I started with this, but have no clue how to continue.
RestTemplate templ = new RestTemplate();
byte[] downloadedBytes = templ.getForObject("https://localhost:8080/zip", byte[].class);
Can someone please suggest?

Does Azure Blob Storage support partial content (206) by default?

I am using Azure blob storage to store all my images and videos. I have implemented the upload and fetch functionality and it's working quite well. I am facing one issue while loading videos: when I use the URL that is generated after uploading a video to Azure blob storage, the browser downloads all the content before rendering it to the user. So if the video size is 100 MB, it will download the whole 100 MB, and until then the user won't be able to see the video.
I have done a lot of R&D and came to know that while rendering the video, I need to fetch partial content (status 206) rather than fetching the whole video at once. After adding the request header "Range:bytes-500", I tried to hit the blob URL, but it was still downloading the whole content. So I checked some open source video URLs, hit them with the same "Range" request header, and they successfully returned a 206 response status, which means they were properly giving me partial content instead of the full video.
I read some forums saying that Azure storage supports partial content and that it needs to be enabled from the properties. But I have checked all the options under the Azure storage account and didn't find anything to enable this functionality.
Can anyone please help me resolve this, or tell me if there's anything on the Azure portal that I need to enable? I have been researching this for a week now. Any help would be really appreciated.
Thank you! Stay safe.
If Accept-Ranges is not enabled, then, as described in this blog, you need to set the default version of the service.
Below is sample code to implement it.
// Uses the older CloudStorageAccount-based SDK (Microsoft.WindowsAzure.Storage / Microsoft.Azure.Storage).
var credentials = new StorageCredentials("account name", "account key");
var account = new CloudStorageAccount(credentials, true);
var client = account.CreateCloudBlobClient();
var properties = client.GetServiceProperties();
properties.DefaultServiceVersion = "2019-07-07";
client.SetServiceProperties(properties);
Below is a comparison of the returned headers before and after setting the property (the header screenshots are not reproduced here); after the change the response includes the Accept-Ranges: bytes header.
Assuming the video content is MPEG-4, the issue may be the media itself: the moov atom needs to be moved from the end of the file to the beginning. The browser won't render the video until it finds the moov atom, so you want to make sure the atom is at the start of the file, which can be done with ffmpeg's "faststart" option.
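For example, a typical remux command that moves the moov atom to the front without re-encoding (input.mp4 and output.mp4 are placeholders):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4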
You just need to update your Azure Storage default service version. It will work automatically after the update.
Using Azure CLI
Just run:
az storage account blob-service-properties update --default-service-version 2021-08-06 -n yourStorageAccountName -g yourStorageResourceGroupName
List of available versions:
https://learn.microsoft.com/en-us/rest/api/storageservices/previous-azure-storage-service-versions
To see your current version, request a file and inspect the x-ms-version response header.
The following is the SDK code I used to download the contents:
var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
await container.CreateIfNotExistsAsync();
BlobClient blobClient = container.GetBlobClient(fileName);
Stream stream = new MemoryStream();
var result = await blobClient.DownloadToAsync(stream, cancellationToken: ct);
which DOES download the whole file right away! Unfortunately, the solution provided in other answers seems to reference another SDK. So for the SDK that I use (Azure.Storage.Blobs), the solution is to use the OpenReadAsync method:
long kBytesToReadAtOnce = 300;
long bytesToReadAtOnce = kBytesToReadAtOnce * 1024;
//int mbBytesToReadAtOnce = 1;
var result = await blobClient.OpenReadAsync(0, bufferSize: (int)bytesToReadAtOnce, cancellationToken: ct);
By default it fetches 4 MB of data, so you have to override the value with a smaller amount if you want your app to have a smaller memory footprint.
I think that internally the SDK sends the requests with the byte range already set. So all you have to do is enable partial content support in Web API like this:
return new FileStreamResult(result, contentType)
{
EnableRangeProcessing = true,
};
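Putting the two pieces together, a minimal controller action might look like the sketch below (the route, connection string, container name and content type are placeholders, not taken from the original answer):
// Minimal sketch: stream a blob through ASP.NET Core with range support.
[HttpGet("videos/{fileName}")]
public async Task<IActionResult> GetVideo(string fileName, CancellationToken ct)
{
    var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
    var blobClient = container.GetBlobClient(fileName);

    // OpenReadAsync returns a stream that pulls data in chunks instead of
    // downloading the whole blob up front.
    var stream = await blobClient.OpenReadAsync(0, bufferSize: 300 * 1024, cancellationToken: ct);

    // EnableRangeProcessing lets the browser issue Range requests (HTTP 206).
    return new FileStreamResult(stream, "video/mp4") { EnableRangeProcessing = true };
}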

Unable to use data from Google Cloud Storage in App Engine using Python 3

How can I read the data stored in my Cloud Storage bucket of my project and use it in my Python code that I am writing in App Engine?
I tried using:
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(source_blob_name)
But I am unable to figure out how to extract the actual data from this code and get it into a usable form.
Any help would be appreciated.
Getting a file from a Google Cloud Storage bucket means that you are just getting an object. This concept abstracts the file itself from your code. You will either need to store the file locally to perform any operation on it or, depending on the extension of your file, read that object through a file stream reader or whatever method you need to read the file.
Here is a code example of how to read a file from App Engine:
# Uses the App Engine `cloudstorage` library (imported as gcs) inside a webapp2 request handler.
def read_file(self, filename):
    self.response.write('Reading the full file contents:\n')
    gcs_file = gcs.open(filename)
    contents = gcs_file.read()
    gcs_file.close()
    self.response.write(contents)
You have a couple of options.
content = blob.download_as_string() --> downloads the content of your Cloud Storage object as a byte string.
blob.download_to_file(file_obj) --> writes the Cloud Storage object content into an existing file object.
blob.download_to_filename(filename) --> saves the object to a file. On the App Engine standard environment, you can store files in the /tmp/ directory.
Refer to this link for more information.

Azure File storage content-type is always application/octet-stream

I'm currently having an issue with Azure File storage when I build a URL with a shared access signature (SAS) token. The file downloads in the browser, but the content-type is always application/octet-stream rather than matching the MIME type of the file. If I put the file in Azure blob storage and build a URL with a SAS token, it sends the correct content-type for my file (image/jpeg).
I've upgraded my storage account from V1 to V2 thinking that was the problem, but it didn't fix it.
Does anyone have a clue what I could try that might get Azure File storage to return the correct content-type using a URL with SAS Token to download the file?
So far these are the only fixes for the content-type that I've found:
Use the Microsoft Azure Storage Explorer to modify the content-type string by hand. You have to right-click the file and then left-click Properties to get the dialog to appear.
Programmatically modify the file using Microsoft's WindowsAzure.Storage NuGet package (a rough sketch is shown at the end of this answer).
Surface file download via my own web site and not allow direct access.
For me, none of these are acceptable choices. The first two can lead to mistakes down the road if a user uploads a file via the portal or Microsoft Azure Storage Explorer and forgets to change the content type. I also don't want to write Azure Functions or web jobs to monitor and fix this problem.
Since blob storage does NOT have the same problems when uploading via Microsoft Azure Storage Explorer or via the portal, the cost is much lower, AND both work with SAS tokens, we are moving towards blob storage instead. We do lose the ability to mount the drive on our local computers and use something like Beyond Compare to do file comparisons, but that is a disadvantage we can live with.
If anyone has a better solution than the ones mentioned above that fixes this problem, I will gladly up-vote it. However, I think that Microsoft will have to make changes for this problem to be fixed.
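For reference, the second fix (programmatically setting the content type) looks roughly like this with the older WindowsAzure.Storage package; the connection string, share name and file name below are placeholders.
// Rough sketch, assuming the older Microsoft.WindowsAzure.Storage package.
// The connection string, share name and file name are placeholders.
var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
var fileClient = account.CreateCloudFileClient();
var share = fileClient.GetShareReference("myshare");
var file = share.GetRootDirectoryReference().GetFileReference("somefile.jpg");

file.FetchAttributes();                     // load the current properties
file.Properties.ContentType = "image/jpeg"; // set the correct MIME type
file.SetProperties();                       // persist the change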
When I upload a jpeg file to a file share through the portal, the content-type is indeed changed to application/octet-stream. But I can't reproduce your download problem.
I didn't specify a content-type in my SAS request URI, but the file just downloads as a jpeg file. I have tested with the SDK (account SAS / stored access policy / SAS on the file itself) and the REST API; both work even without a content-type.
You can try to specify the content-type using the code below.
SharedAccessFileHeaders header = new SharedAccessFileHeaders()
{
ContentDisposition = "attachment",
ContentType = "image/jpeg"
};
string sasToken = file.GetSharedAccessSignature(sharedPolicy,header);
Azure blob storage falls back to the default value of 'application/octet-stream' if nothing is provided. To get the correct MIME types, this is what I did with my Flask app:
@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        f = request.files['file']
        mime_type = f.content_type
        print(mime_type)
        print(type(f))
        try:
            blob_service.create_blob_from_stream(container, f.filename, f,
                content_settings=ContentSettings(content_type=mime_type))
        except Exception as e:
            print(str(e))
            pass
mime_type is passed to ContentSettings so that files uploaded to Azure blob storage keep their correct MIME types.
In nodeJS:
blobService.createBlockBlobFromStream(container, blob, stream, streamLength, { contentSettings: { contentType: fileMimeType } }, callback)
where:
fileMimeType is the type of the file being uploaded
callback is your callback implementation
Reference to method used:
https://learn.microsoft.com/en-us/javascript/api/azure-storage/azurestorage.services.blob.blobservice.blobservice?view=azure-node-latest#createblockblobfromstream-string--string--stream-readable--number--createblockblobrequestoptions--errororresult-blobresult--
Check this out - Microsoft SAS Examples
If you don't want to update the content-type of your file in Azure, or it's too much of a pain to update the content-type of all your existing files, you can pass the desired content-type with the SAS token as well. The rsct parameter is where you specify the desired content-type.
e.g. - https://myaccount.file.core.windows.net/pictures/somefile.pdf?sv=2015-02-21&st=2015-07-01T08:49Z&se=2015-07-02T08:49Z&sr=c&sp=r&rscd=file;%20attachment&rsct=application%2Fpdf&sig=YWJjZGVmZw%3d%3d&sig=a39%2BYozJhGp6miujGymjRpN8tsrQfLo9Z3i8IRyIpnQ%3d
This works in Java using the com.microsoft.azure azure-storage library, uploading to a Shared Access Signature resource.
InputStream is = new FileInputStream(file);
CloudBlockBlob cloudBlockBlob = new CloudBlockBlob(new URI(sasUri));
cloudBlockBlob.getProperties().setContentType("application/pdf");
cloudBlockBlob.upload(is, file.length());
is.close();
For anyone looking to upload files with a correctly declared content type, the v12 client has changed how the content type is set. You can use the ShareFileHttpHeaders parameter of file.Create:
ShareFileClient file = directory.GetFileClient(fileName);
using FileStream stream = File.OpenRead(@"C:\Temp\Amanita_muscaria.jpg");
file.Create(stream.Length, new ShareFileHttpHeaders { ContentType = ContentType(fileName) });
file.UploadRange(new HttpRange(0, stream.Length),stream);
where ContentType(fileName) is an evaluation of the file name, e.g.:
if (fileName.EndsWith(".txt")) return "text/plain";
// etc
// here you define your file content type
CloudBlockBlob cloudBlockBlob = container.GetBlockBlobReference(file.FileName);
cloudBlockBlob.Properties.ContentType = file.ContentType; // set before uploading, or call SetProperties() on an existing blob
I know that I'm not answering the question, but I believe this answer is applicable. I had the same problem with a storage account that I needed to serve as a static website. Whenever I uploaded a blob to a container, the default type was "application/octet-stream", and because of this the index.html got downloaded instead of being displayed.
To change the file type, do the following:
# Get Storage Account for its context
$storageAccount = Get-AzStorageAccount -ResourceGroupName <Resource Group Name> -Name <Storage Account Name>
# Get Blobs inside container of storage account
$blobs = Get-AzStorageBlob -Context $storageAccount.Context -Container <Container Name>
foreach ($blob in $blobs) {
    $CloudBlockBlob = [Microsoft.Azure.Storage.Blob.CloudBlockBlob] $blob.ICloudBlob
    $CloudBlockBlob.Properties.ContentType = <Desired type as string>
    $CloudBlockBlob.SetProperties()
}
Note: for Azure File storage you might want to change the library to [Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob]
I have not tried this, but ideally you could use ClientOptions to specify a different header. It would look something like this:
ClientOptions options = new ClientOptions();
HttpHeader httpHeaders = new HttpHeader("Content-Type", "application/pdf");
options.setHeaders(Collections.singleton(httpHeaders));
blobClient = new BlobClientBuilder()
.endpoint(<SAS-URL>)
.blobName("hello")
.clientOptions(options)
.buildClient();
This way we can provide our own mime_type as the content type:
with open(file.path, "rb") as data:
    # blob_client.upload_blob(data)
    # requires: from azure.storage.blob import ContentSettings
    mime_type = mimetypes.MimeTypes().guess_type(file.name)[0]
    blob_client.upload_blob(data, content_settings=ContentSettings(content_type=mime_type))
    print(f'{file.name} uploaded to blob storage')
Based on this answer: Twong answer
An example, if you are using the .NET (C#) API to proxy/generate a SAS URL from ShareFileClient (ShareFileClient class description):
if (downloadClient.CanGenerateSasUri)
{
    var sasBuilder = new ShareSasBuilder(ShareFileSasPermissions.Read, DateTimeOffset.Now.AddDays(10))
    {
        ContentType = "application/pdf",
        ContentDisposition = "inline"
    };
    return downloadClient.GenerateSasUri(sasBuilder);
}
The above example sets up a token valid for 10 days for a pdf file, which will open in a new browser tab (especially on Apple iOS).
The solution in Java is to specify the content-type when generating the signed image URL:
blobServiceSasSignatureValues.setContentType("image/jpeg");

Lucene.NET and storing data on Azure Blob Storage

The question I am asking is specifically because I don't want to use the AzureDirectory project. I am just trying something on my own.
cloudStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=xxxx;AccountKey=xxxxx");
blobClient=cloudStorageAccount.CreateCloudBlobClient();
List<CloudBlobContainer> containerList = new List<CloudBlobContainer>();
IEnumerable<CloudBlobContainer> containers = blobClient.ListContainers();
if (containers != null)
{
    foreach (var item in containers)
    {
        Console.WriteLine(item.Uri);
    }
}
/* Used to test connectivity
*/
//state the file location of the index
string indexLocation = containers.Last().Name.ToString();
Lucene.Net.Store.Directory dir =
Lucene.Net.Store.FSDirectory.Open(indexLocation);
//create an analyzer to process the text
Lucene.Net.Analysis.Analyzer analyzer = new
Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
//create the index writer with the directory and analyzer defined.
bool findexExists = Lucene.Net.Index.IndexReader.IndexExists(dir);
Lucene.Net.Index.IndexWriter indexWritr = new Lucene.Net.Index.IndexWriter(dir, analyzer,!findexExists, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);
//create a document, add in a single field
Lucene.Net.Documents.Document doc = new Lucene.Net.Documents.Document();
string path="D:\\try.html";
TextReader reader = new FilterReader("D:\\try.html");
doc.Add(new Lucene.Net.Documents.Field("url",path,Lucene.Net.Documents.Field.Store.YES,Lucene.Net.Documents.Field.Index.NOT_ANALYZED));
doc.Add(new Lucene.Net.Documents.Field("content",reader.ReadToEnd().ToString(),Lucene.Net.Documents.Field.Store.YES,Lucene.Net.Documents.Field.Index.ANALYZED));
indexWritr.AddDocument(doc);
indexWritr.Optimize();
indexWritr.Commit();
indexWritr.Close();
Now the issue is that after indexing completes, I am not able to see any files created inside the container. Can anybody help me out?
You're using the FSDirectory there, which is going to write files to the local disk.
You're passing it a list of containers in blob storage. Blob storage is a service made available over a REST API, and is not addressable directly from the file system. Therefore the FSDirectory is not going to be able to write your index to storage.
Your options are:
Mount a VHD disk on the machine, and store the VHD in blob storage. There are some instructions on how to do this here: http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/15/mount-a-page-blob-vhd-in-any-windows-azure-vm-outside-any-web-worker-or-vm-role.aspx
Use the Azure Directory, which you refer to in your question. I have rebuilt the AzureDirectory against the latest storage SDK: https://github.com/richorama/AzureDirectory
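To illustrate the point that blob storage is only reachable through its API: if you keep the FSDirectory approach, the index files it writes to local disk would have to be pushed into a container explicitly afterwards. A rough sketch, reusing the blobClient and indexLocation variables from the question and a hypothetical container name:
// Rough sketch only: after closing the IndexWriter, upload the locally
// written index files into a blob container through the storage API.
var indexContainer = blobClient.GetContainerReference("lucene-index"); // hypothetical container name
indexContainer.CreateIfNotExists();
foreach (var filePath in System.IO.Directory.GetFiles(indexLocation))
{
    var blob = indexContainer.GetBlockBlobReference(System.IO.Path.GetFileName(filePath));
    using (var fileStream = System.IO.File.OpenRead(filePath))
    {
        blob.UploadFromStream(fileStream);
    }
}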
Another alternative for people looking around: I wrote a directory implementation that uses the Azure shared cache (preview), which can be an alternative to AzureDirectory (albeit for bounded search sets).
https://github.com/ajorkowski/AzureDataCacheDirectory
