Azure Custom Vision SDK ImageRegionCreateEntry doesn't seem to set region

I use the Azure Custom Vision SDK's ImageRegionCreateEntry class in an Object Detection project to set each image's region to the full image, and the call reports success. Here is the code (in full):
string dir = "G:\\folderLocation";
string[] files = Directory.GetFiles(dir);
if (files[i].ToLower().Contains(tagList[y].Name.ToLower()))
{
    using (var stream = new MemoryStream(File.ReadAllBytes(files[i])))
    {
        var createImage = trainingApi.CreateImagesFromData(CV_Project_Guid, stream, new List<Guid>() { thisTag.Id });
        // set the region for an image that is 300x400
        ImageRegionCreateEntry thisImgEntry = new ImageRegionCreateEntry(createImage.Images[0].Image.Id, createImage.Images[0].Image.Tags[0].TagId, 0, 150, 150, 150);
        trainingApi.CreateImageRegions(CV_Project_Guid, new ImageRegionCreateBatch(new List<ImageRegionCreateEntry>() { thisImgEntry }));
    }
}
When I check each image in the portal, it does not show the bounding box of the region, whereas if I set the region manually in the portal and revisit the image, the boxed region is shown.
I also find it misleading that, when the region is set with the SDK, the tree appears gray as if it had been identified when the "Region Shown" toggle is on.
Also, if I set the region with the SDK for each image using the full boundaries of the image in pixels, the training fails. If I set the region manually, the training succeeds.
Therefore I think the region is not being set correctly by the SDK. Can someone please confirm that, when using the SDK, if ImageRegionCreateEntry succeeds, revisiting each image in the portal should show the bounding box?

With [0, 0, 1, 1] you cannot see the box because it coincides exactly with the image boundaries, and [0, 1, 1, 1] draws outside the image. Try [0.25, 0.25, 0.5, 0.5]; it should draw a box at the center. Please follow the doc here to understand how the coordinates work: https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/object-detection?tabs=visual-studio&pivots=programming-language-csharp#upload-and-tag-images
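To make this concrete, the pixel box from the question has to be divided by the image dimensions before being passed to ImageRegionCreateEntry, since the SDK expects normalized [0, 1] coordinates. A minimal sketch (the helper name is my own, not part of the SDK):

```csharp
using System;

static class RegionHelper
{
    // Convert a pixel-space box into the normalized [0, 1] coordinates
    // that ImageRegionCreateEntry expects (left, top, width, height).
    public static (double Left, double Top, double Width, double Height) ToNormalized(
        double pxLeft, double pxTop, double pxWidth, double pxHeight,
        double imageWidth, double imageHeight)
    {
        return (pxLeft / imageWidth, pxTop / imageHeight,
                pxWidth / imageWidth, pxHeight / imageHeight);
    }
}
```

For a 300x400 image, the full-image region becomes (0, 0, 1, 1), and a 150x150 pixel box at pixel position (0, 150) becomes (0, 0.375, 0.5, 0.375).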

Azure Application Insights: custom attribute length restriction

I'm using Azure App Insight as a logging tool and store log data by the following code:
private void SendTrace(LoggingEvent loggingEvent)
{
    loggingEvent.GetProperties();
    string message = "TestMessage";
    var trace = new TraceTelemetry(message)
    {
        SeverityLevel = SeverityLevel.Information
    };
    trace.Properties.Add("TestKey", "TestValue");
    var telemetryClient = new TelemetryClient();
    telemetryClient.Context.InstrumentationKey = this.InstrumentationKey;
    telemetryClient.Track(trace);
}
Everything works well: I can see the logged record in App Insights as well as in App Insights Analytics (in the traces table). My custom attributes are written to the special App Insights section customDimensions. For example, the code above adds a new attribute with key "TestKey" and value "TestValue" to the customDimensions section.
But when I try to write a large text (for example, a JSON document with more than 15k characters), it succeeds without any exception, yet the text is cut off after a certain length. As a result, the custom attribute value in the customDimensions section is cropped too and contains only the first part of the document.
As I understand it, there is a restriction on the maximum text length that can be written to an App Insights custom attribute.
Does anyone know how I can work around this?
The message field has the highest allowed limit, 32,768 characters. For items in the Properties collection, each value has a maximum length of 8,192 characters.
So you can try one of the following options:
Use the message field to the fullest by putting the big text there.
Split the data into multiple parts and add them to the Properties collection separately.
eg:
trace.Properties.Add("key_part1", "Bigtext1_upto8192");
trace.Properties.Add("key_part2", "Bigtext2_upto8192");
Reference: https://github.com/MicrosoftDocs/azure-docs/blob/master/includes/application-insights-limits.md
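The second option can be sketched as a small helper that chunks the text into property-sized pieces. The 8,192 limit is from the linked doc; the helper and the key-naming scheme are my own:

```csharp
using System;
using System.Collections.Generic;

static class TelemetrySplitter
{
    // 8,192 is the documented maximum length for a property value.
    const int MaxPropertyLength = 8192;

    // Split bigText into numbered chunks: key_part1, key_part2, ...
    public static IEnumerable<KeyValuePair<string, string>> Split(string key, string bigText)
    {
        for (int i = 0, part = 1; i < bigText.Length; i += MaxPropertyLength, part++)
        {
            int len = Math.Min(MaxPropertyLength, bigText.Length - i);
            yield return new KeyValuePair<string, string>(
                $"{key}_part{part}", bigText.Substring(i, len));
        }
    }
}
```

Each pair can then be added in a loop with trace.Properties.Add(kv.Key, kv.Value).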

Azure Blob Storage: buffer cannot be null

I'm following the source code from the Xamarin for Azure sample (FileUploader for image upload). When I try to run the app, I get the error "buffer cannot be null".
I suggest creating a MemoryStream, filling it with stream.CopyTo(), and returning a byte[] via MemoryStream.ToArray().
If you don't want to change the code as above:
1. Check the value of stream.Length.
2. Add some padding (extra bytes) to it, e.g. byte[] buffer = new byte[stream.Length + 10].
3. Also, check whether the stream is readable via its CanRead property.
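The suggestion above can be sketched as an extension method (a sketch of the idea, not the sample's actual code):

```csharp
using System.IO;

static class StreamExtensions
{
    // Copy any readable stream into a MemoryStream and return its bytes,
    // avoiding manual buffer sizing (and the null-buffer error) entirely.
    public static byte[] ToByteArray(this Stream stream)
    {
        using (var memory = new MemoryStream())
        {
            stream.CopyTo(memory);
            return memory.ToArray();
        }
    }
}
```

Usage: byte[] buffer = stream.ToByteArray();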

Clear images from RenderWindowControl C# Activiz.NET

I am using RenderWindowControl in order to display Dicom Series.
This way:
string folder = path; // e.g. @"C:\VTKdata"
vtkDICOMImageReader reader = vtkDICOMImageReader.New();
reader.SetDirectoryName(folder);
reader.Update();
// Visualize
_ImageViewer1 = vtkImageViewer2.New();
_ImageViewer1.SetInputConnection(reader.GetOutputPort());
_ImageViewer1.SetRenderWindow(renderWindow);
_ImageViewer1.SetSlice(_MinSlice1);
_ImageViewer1.Render();
I need to be able to delete all images displayed by the control, before the user reloads a new series.
Any help?
Thanks.
Clear the render window with:
_ImageViewer1.SetRenderWindow(null);
renderWindow.Render();
and simply connect it again when new data is available:
_ImageViewer1.SetRenderWindow(renderWindow);
_ImageViewer1.Render();

Directx 11.2 mipmaps with SharpDX?

I'm using the 11.2-compatible build of SharpDX and have rendering working well so far. However, I'm starting to test things out with large textures and need mipmapping to avoid the ugly artifacts of textures at higher than screen resolution.
From what I understand, if I want the full set of mipmap levels, I need to set MipLevels to 0 in my texture creation. However, changing the MipLevels parameter from 1 (what it was, and what works) to 0 (my goal) causes an invalid-parameter exception on the texture instantiation line.
The error has to be at that point or before (it crashes before reaching any rendering, at the declaration step).
Here's how I'm declaring my texture description:
new SharpDX.Direct3D11.Texture2DDescription()
{
    Width = bitmapSource.Size.Width,
    Height = bitmapSource.Size.Height,
    ArraySize = 1,
    BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource,
    Usage = SharpDX.Direct3D11.ResourceUsage.Immutable,
    CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    MipLevels = 1, // this works, but changing it to 0 throws an invalid-argument exception
    OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
}
Since the texture is immutable and is being created with a full mip chain, you need to provide initial data for every mip in the chain. I assume you are only providing data for mip 0?
EDIT:
A similar question is asked here: http://www.gamedev.net/topic/605521-mipmap-dx11/
You have a few different options:
1) Generate the mips offline (perhaps store your textures in a DDS, which supports mips) and provide an array of DataRectangles, one for each mip.
2) Remove the Immutable usage flag and use Default instead. Don't provide any initial data; instead use something like Map or UpdateSubresource to fill in mip 0 after the texture has been created. Once mip 0 is populated, you can call GenerateMips on the DeviceContext, as long as the texture was created with the D3D11_RESOURCE_MISC_GENERATE_MIPS MiscFlag; this will populate all the other mips with correctly downsampled data.
3) A third approach is similar to option 2, but instead you provide a dummy set of data for all but the first mip and thus avoid the need to call Map or UpdateSubresource. You will still have to call GenerateMips on the DeviceContext.
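For option 1, when MipLevels is 0 the runtime expects one DataRectangle per level of the full chain, whose length is floor(log2(max(width, height))) + 1. A small sketch of that calculation (the helper name is my own):

```csharp
using System;

static class MipHelper
{
    // Full mip chain length for a 2D texture:
    // each level halves the larger dimension until it reaches 1.
    public static int MipCount(int width, int height)
    {
        int levels = 1;
        int size = Math.Max(width, height);
        while (size > 1)
        {
            size >>= 1;
            levels++;
        }
        return levels;
    }
}
```

For example, a 400x300 texture has 9 mip levels, so the initial-data array for an immutable texture with a full chain must have 9 entries.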

save d3.map() to be able to restore it later

I am creating a picture using d3; the picture is composed of nodes and links (arrows):
var graph = { nodes: d3.map(), edges: d3.map() };
Is there a way to save nodes and edges using node.js so that I can restore them later?
