I am aware that we can use CloudBlockBlob.PutBlock and CloudBlockBlob.PutBlockList to upload in chunks, but these methods do not have a lease id parameter.
Can I instead form the HttpWebRequest with the header "x-ms-lease-id" myself and attach it to CloudBlockBlob.PutBlock and CloudBlockBlob.PutBlockList?
Hi Gaurav, I could not fit a long comment under your response, hence adding it here.
I tried with BlobRequest.PutBlock and BlobRequest.PutBlockList with the following code:
for (int idxThread = 0; idxThread < numThreads; idxThread++)
{
    tasks.Add(Task.Factory.StartNew(() =>
    {
        KeyValuePair<int, int> blockIdAndLength;
        while (true)
        {
            lock (queue)
            {
                if (queue.Count == 0)
                    break;
                blockIdAndLength = queue.Dequeue();
            }

            // Copy this chunk out of the input byte array.
            byte[] buff = new byte[blockIdAndLength.Value];
            Array.Copy(buffer, blockIdAndLength.Key * (long)blockIdAndLength.Value, buff, 0, blockIdAndLength.Value);

            // Upload block.
            string blockName = Convert.ToBase64String(BitConverter.GetBytes(blockIdAndLength.Key));
            //string blockIdString = Convert.ToBase64String(ASCIIEncoding.ASCII.GetBytes(string.Format("BlockId{0}", blockIdAndLength.Key.ToString("0000000"))));

            // For small files like 100 KB this works fine; for large files like 10 MB it ends up uploading only 2-3 MB.
            // Is there a better way to implement uploading in chunks with leasing?
            string url = blob.Uri.ToString();
            if (blob.ServiceClient.Credentials.NeedsTransformUri)
            {
                url = blob.ServiceClient.Credentials.TransformUri(url);
            }

            var req = BlobRequest.Put(new Uri(url), 90, new BlobProperties(), BlobType.BlockBlob, leaseId, 0);
            using (Stream writer = req.GetRequestStream())
            {
                writer.Write(buff, 0, buff.Length);
            }
            blob.ServiceClient.Credentials.SignRequest(req);
            req.GetResponse().Close();
        }
    }));
}

// Wait for all threads to complete uploading data.
Task.WaitAll(tasks.ToArray());
This does not work for multiple chunks. Could you please provide your input?
I don't think you can. However, take a look at the BlobRequest class in the Microsoft.WindowsAzure.StorageClient.Protocol namespace. It has PutBlock and PutBlockList functions which allow you to specify a lease id.
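For illustration, here is a minimal sketch of how one block might be uploaded that way, reusing the variables (url, leaseId, buff, blockIdAndLength) from your snippet. The exact parameter order of PutBlock is an assumption on my part, so verify it against the Protocol namespace documentation for your SDK version:

    // Sketch only: assumes BlobRequest.PutBlock(Uri uri, int timeout, string blockId, string leaseId);
    // check the signature in Microsoft.WindowsAzure.StorageClient.Protocol before using it.
    string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockIdAndLength.Key));
    var blockReq = BlobRequest.PutBlock(new Uri(url), 90, blockId, leaseId);
    using (Stream body = blockReq.GetRequestStream())
    {
        body.Write(buff, 0, buff.Length);   // block payload
    }
    blob.ServiceClient.Credentials.SignRequest(blockReq);
    blockReq.GetResponse().Close();

    // Once every block has been uploaded, commit them with BlobRequest.PutBlockList (it also
    // takes a lease id) and write the list of block ids as the request body.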
Hope this helps.
I'm trying to upload a file of size 230 GB into an Azure block blob with the following code:
private void uploadFile(FileObject srcFile, FileObject destFile) throws Exception {
    try {
        BlobClient destBlobClient = blobContainerClient.getBlobClient("destFilename");

        long blockSize = 4 * 1024 * 1024; // 4 MB
        ParallelTransferOptions opts = new ParallelTransferOptions()
                .setBlockSizeLong(blockSize)
                .setMaxConcurrency(5);
        BlobRequestConditions requestConditions = new BlobRequestConditions();

        try (BlobOutputStream bos = destBlobClient.getBlockBlobClient().getBlobOutputStream(
                opts, null, null, null, requestConditions);
             InputStream is = srcFile.getContent().getInputStream()) {

            byte[] buffer = new byte[(int) blockSize];
            for (int len; (len = is.read(buffer)) != -1; ) {
                bos.write(buffer, 0, len);
            }
        }
    } finally {
        destFile.close();
        srcFile.close();
    }
}
Since I am explicitly setting the block size to 4 MB for each write operation, I assumed that each write would be treated as a single block in Azure, but that is not the case.
For the 230 GB file above, the write operation was executed 58,880 times and the file was uploaded successfully.
Can someone please explain how blocks are split internally in Azure and help me understand this better?
Thanks in advance.
I am exploring Microsoft Computer Vision's Read API (asyncBatchAnalyze) for extracting text from images. I found some sample code on the Microsoft site to extract text from images asynchronously. It works in the following way:
1) Submit the image to the asyncBatchAnalyze API.
2) The API accepts the request and returns a URI.
3) We need to poll this URI to get the extracted data.
Is there any way we can trigger a notification (like publishing a notification to AWS SQS or a similar service) when asyncBatchAnalyze is done with the image analysis?
public class MicrosoftOCRAsyncReadText {

    private static final String SUBSCRIPTION_KEY = "key";
    private static final String ENDPOINT = "https://computervision.cognitiveservices.azure.com";
    private static final String URI_BASE = ENDPOINT + "/vision/v2.1/read/core/asyncBatchAnalyze";

    public static void main(String[] args) {
        CloseableHttpClient httpTextClient = HttpClientBuilder.create().build();
        CloseableHttpClient httpResultClient = HttpClientBuilder.create().build();

        try {
            URIBuilder builder = new URIBuilder(URI_BASE);
            URI uri = builder.build();

            HttpPost request = new HttpPost(uri);
            request.setHeader("Content-Type", "application/octet-stream");
            request.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);

            String image = "/Users/xxxxx/Documents/img1.jpg";
            File file = new File(image);
            FileEntity reqEntity = new FileEntity(file);
            request.setEntity(reqEntity);

            HttpResponse response = httpTextClient.execute(request);
            if (response.getStatusLine().getStatusCode() != 202) {
                HttpEntity entity = response.getEntity();
                String jsonString = EntityUtils.toString(entity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println("Error:\n");
                System.out.println(json.toString(2));
                return;
            }

            String operationLocation = null;
            Header[] responseHeaders = response.getAllHeaders();
            for (Header header : responseHeaders) {
                if (header.getName().equals("Operation-Location")) {
                    operationLocation = header.getValue();
                    break;
                }
            }
            if (operationLocation == null) {
                System.out.println("\nError retrieving Operation-Location.\nExiting.");
                System.exit(1);
            }

            /* Wait for asyncBatchAnalyze to complete. In place of this wait, can we trigger any
               notification from Computer Vision when the extract-text operation is complete? */
            Thread.sleep(5000);

            // Call the second REST API method and get the response.
            HttpGet resultRequest = new HttpGet(operationLocation);
            resultRequest.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);

            HttpResponse resultResponse = httpResultClient.execute(resultRequest);
            HttpEntity responseEntity = resultResponse.getEntity();
            if (responseEntity != null) {
                String jsonString = EntityUtils.toString(responseEntity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println(json.toString(2));
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
There is no notification / webhook mechanism for those asynchronous operations.
The only thing I can see right now is to change the implementation you mentioned to use a while loop that regularly checks whether the result is there, together with a mechanism to stop waiting (based on a maximum waiting time or a number of retries).
See the sample in the Microsoft docs here, especially this part:
// If the first REST API method completes successfully, the second
// REST API method retrieves the text written in the image.
//
// Note: The response may not be immediately available. Text
// recognition is an asynchronous operation that can take a variable
// amount of time depending on the length of the text.
// You may need to wait or retry this operation.
//
// This example checks once per second for ten seconds.
string contentString;
int i = 0;
do
{
    System.Threading.Thread.Sleep(1000);
    response = await client.GetAsync(operationLocation);
    contentString = await response.Content.ReadAsStringAsync();
    ++i;
}
while (i < 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1);

if (i == 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1)
{
    Console.WriteLine("\nTimeout error.\n");
    return;
}

// Display the JSON response.
Console.WriteLine("\nResponse:\n\n{0}\n",
    JToken.Parse(contentString).ToString());
I am using the Portable Device API to automatically get photos from a connected smartphone. I have it all transferring correctly. The code I use is the standard DownloadFile() routine:
public PortableDownloadInfo DownloadFile(PortableDeviceFile file, string saveToPath)
{
    IPortableDeviceContent content;
    _device.Content(out content);

    IPortableDeviceResources resources;
    content.Transfer(out resources);

    PortableDeviceApiLib.IStream wpdStream;
    uint optimalTransferSize = 0;

    var property = new _tagpropertykey
    {
        fmtid = new Guid(0xE81E79BE, 0x34F0, 0x41BF, 0xB5, 0x3F, 0xF1, 0xA0, 0x6A, 0xE8, 0x78, 0x42),
        pid = 0
    };

    resources.GetStream(file.Id, ref property, 0, ref optimalTransferSize, out wpdStream);

    System.Runtime.InteropServices.ComTypes.IStream sourceStream =
        // ReSharper disable once SuspiciousTypeConversion.Global
        (System.Runtime.InteropServices.ComTypes.IStream)wpdStream;

    var filename = Path.GetFileName(file.Name);
    if (string.IsNullOrEmpty(filename))
        return null;

    FileStream targetStream = new FileStream(Path.Combine(saveToPath, filename),
        FileMode.Create, FileAccess.Write);

    try
    {
        unsafe
        {
            var buffer = new byte[1024];
            int bytesRead;
            do
            {
                sourceStream.Read(buffer, 1024, new IntPtr(&bytesRead));
                targetStream.Write(buffer, 0, 1024);
            } while (bytesRead > 0);
            targetStream.Close();
        }
    }
    finally
    {
        Marshal.ReleaseComObject(sourceStream);
        Marshal.ReleaseComObject(wpdStream);
    }
    return pdi;
}
There are two problems with this standard code:
1) When the images are saved to the Windows machine, there is no EXIF information. That information is what I need. How do I preserve it?
2) The saved files are very bloated. For example, the source JPEG is 1,045,807 bytes, whilst the downloaded file is 3,942,840 bytes! It is similar for all of the other files. I would have thought that the code inside the unsafe{} section would output it byte for byte? Is there a better way to transfer the data? (A safe way? See the sketch below.)
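For what it's worth, here is a minimal sketch of the same copy loop without unsafe code, writing only the bytes actually read on each iteration. Variable names are reused from the snippet above, and whether this also accounts for the size difference is just a guess on my part:

    // Sketch: replaces the unsafe block above. IStream.Read reports the byte count through an
    // IntPtr, so we marshal a small unmanaged buffer for it instead of taking the address of a local.
    IntPtr bytesReadPtr = Marshal.AllocHGlobal(sizeof(int));
    try
    {
        var buffer = new byte[1024];
        int bytesRead;
        do
        {
            sourceStream.Read(buffer, buffer.Length, bytesReadPtr);
            bytesRead = Marshal.ReadInt32(bytesReadPtr);
            if (bytesRead > 0)
                targetStream.Write(buffer, 0, bytesRead);   // write only what was read
        } while (bytesRead > 0);
        targetStream.Close();
    }
    finally
    {
        Marshal.FreeHGlobal(bytesReadPtr);
    }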
Sorry about this, it works fine. It is something else that is causing these issues.
I tried this:
<awe:WebControl x:Name="webBrowser" Cursor="None" Source="http://example.com/"/>
but the cursor still shows.
I figured that I could alter the CSS of the page by adding the following rule:
*{
cursor: none;
}
But is there a solution for when I don't have access to the actual page that I'm showing?
You can use a ResourceInterceptor and manipulate the page on the fly to insert custom CSS.
EDIT:
The following implementation should do the job. (It assumes there is a test.css file.)
class ManipulatingResourceInterceptor : IResourceInterceptor
{
    public ResourceResponse OnRequest(ResourceRequest request)
    {
        // Only manipulate the stylesheet we are interested in.
        if (request.Url.ToString() == "http://your.web.url/test.css")
        {
            // Re-download the original CSS and append our custom rule.
            WebRequest myRequest = WebRequest.Create(request.Url);
            Stream webStream = myRequest.GetResponse().GetResponseStream();
            StreamReader webStreamReader = new StreamReader(webStream);
            string webStreamContent = webStreamReader.ReadToEnd();

            string extraContent = "*{cursor: none;}";
            webStreamContent += extraContent;

            byte[] responseBuffer = Encoding.UTF8.GetBytes(webStreamContent);

            // Initialize unmanaged memory to hold the array.
            int responseSize = Marshal.SizeOf(responseBuffer[0]) * responseBuffer.Length;
            IntPtr pointer = Marshal.AllocHGlobal(responseSize);
            try
            {
                // Copy the array to unmanaged memory.
                Marshal.Copy(responseBuffer, 0, pointer, responseBuffer.Length);
                return ResourceResponse.Create((uint)responseBuffer.Length, pointer, "text/css");
            }
            finally
            {
                // Data is not owned by the ResourceResponse. A copy is made
                // of the supplied buffer. We can safely free the unmanaged memory.
                Marshal.FreeHGlobal(pointer);
                webStream.Close();
            }
        }

        // Let every other request through untouched.
        return null;
    }

    public bool OnFilterNavigation(NavigationRequest request)
    {
        return false;
    }
}
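To wire it up, the interceptor has to be registered globally before the page starts loading. If I recall the Awesomium .NET API correctly it is a static property on WebCore, but please double-check the property name for your version:

    // Assumption: Awesomium exposes the hook as WebCore.ResourceInterceptor; register it once,
    // e.g. at application startup, before the WebControl begins loading pages.
    WebCore.ResourceInterceptor = new ManipulatingResourceInterceptor();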
When my images are being loaded from my database on my web server, I see the following error:
A generic error occurred in GDI+.
at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
at System.Drawing.Image.Save(Stream stream, ImageFormat format)
at MyWeb.Helpers.ImageHandler.ProcessRequest(HttpContext context)
All my code is attempting to do is load the image. Can anybody take a look and let me know what I'm doing wrong?
Note - This works if I test it on my local machine, but not when I deploy it to my web server.
public void ProcessRequest(HttpContext context)
{
    context.Response.Clear();

    if (!String.IsNullOrEmpty(context.Request.QueryString["imageid"]))
    {
        int imageID = Convert.ToInt32(context.Request.QueryString["imageid"]);
        int isThumbnail = Convert.ToInt32(context.Request.QueryString["thumbnail"]);

        // Retrieve this image from the database
        Image image = GetImage(imageID);

        // Make it a thumbnail if requested
        if (isThumbnail == 1)
        {
            Image.GetThumbnailImageAbort myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            image = image.GetThumbnailImage(200, 200, myCallback, IntPtr.Zero);
        }

        context.Response.ContentType = "image/png";

        // Save the image to the OutputStream
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    }
    else
    {
        context.Response.ContentType = "text/html";
        context.Response.Write("<p>Error: Image ID is not valid - image may have been deleted from the database.</p>");
    }
}
The error occurs on the line:
image.Save(context.Response.OutputStream, ImageFormat.Png);
UPDATE
I've changed my code to this, but the issue still happens:
var db = new MyWebEntities();

var screenshotData = (from screenshots in db.screenshots
                      where screenshots.id == imageID
                      select new ImageModel
                      {
                          ID = screenshots.id,
                          Language = screenshots.language,
                          ScreenshotByte = screenshots.screen_shot,
                          ProjectID = screenshots.projects_ID
                      });

foreach (ImageModel info in screenshotData)
{
    using (MemoryStream ms = new MemoryStream(info.ScreenshotByte))
    {
        Image image = Image.FromStream(ms);

        // Make it a thumbnail if requested
        if (isThumbnail == 1)
        {
            Image.GetThumbnailImageAbort myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            image = image.GetThumbnailImage(200, 200, myCallback, IntPtr.Zero);
        }

        context.Response.ContentType = "image/png";

        // Save the image to the OutputStream
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    }
}
Thanks.
Probably for the same reason that this guy was having problems: for the lifetime of an Image constructed from a Stream, the stream must not be destroyed.
So if your GetImage function constructs the returned image from a stream (e.g. a MemoryStream) and then closes the stream before returning the image, then the above will fail. My guess is that your GetImage looks a tad like this:
Image GetImage(int id)
{
    byte[] data = // Get data from database
    using (MemoryStream stream = new MemoryStream(data))
    {
        return Image.FromStream(stream);
    }
}
If this is the case then try having GetImage return the MemoryStream (or possibly the byte array) directly, so that you can create the Image instance in your ProcessRequest method and dispose of the stream only when the processing of that image has completed.
This is mentioned in the documentation, but it's kind of in the small print.
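For example, a minimal sketch of that approach, assuming a hypothetical GetImageBytes helper that returns the raw image bytes instead of an Image:

    // GetImageBytes is a hypothetical variant of GetImage that returns the blob as a byte array.
    byte[] data = GetImageBytes(imageID);

    using (MemoryStream stream = new MemoryStream(data))
    using (Image image = Image.FromStream(stream))
    {
        // The stream stays alive for as long as the Image is in use.
        context.Response.ContentType = "image/png";
        image.Save(context.Response.OutputStream, ImageFormat.Png);
    }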