Questions about the second parameter of gl.bufferData in WebGL - graphics

I've been reading the well-known WebGL tutorial https://webgl2fundamentals.org/webgl and learning how to use bufferData to put data into a buffer. The tutorial extensively uses bufferData in this form:
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
The second parameter here is the actual array of data we want to send to the buffer on the GPU. However, I came across this other usage of the API today:
gl.bufferData(gl.ARRAY_BUFFER, 8*maxNumVertices, gl.STATIC_DRAW);
Here the second parameter indicates the size of the buffer.
So I was confused by this. I looked this API up on MDN https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/bufferData and it says
// WebGL1:
void gl.bufferData(target, size, usage);
void gl.bufferData(target, ArrayBuffer? srcData, usage);
void gl.bufferData(target, ArrayBufferView srcData, usage);
// WebGL2:
void gl.bufferData(target, ArrayBufferView srcData, usage, srcOffset, length);
Does this mean that in WebGL 1.0 we can pass either the actual array of data or the size of the buffer as the second parameter, whereas in WebGL 2.0 we can only pass the actual array of data?
I am still not clear on this. Please help.

WebGL2 adds to the WebGL1 API, so WebGL2 has four versions of gl.bufferData: the three from WebGL1 and one new one.
They are:
set by size
void gl.bufferData(target, size, usage);
set with untyped ArrayBuffer
void gl.bufferData(target, ArrayBuffer? srcData, usage);
set with an ArrayBufferView like Uint8Array, Float32Array and the other array buffer view types.
void gl.bufferData(target, ArrayBufferView srcData, usage);
set with an ArrayBufferView with an offset and length
// WebGL2:
void gl.bufferData(target, ArrayBufferView srcData, usage, srcOffset, length);
The last one was added, arguably, for WebAssembly. The problem was that if you had a large ArrayBufferView and only wanted to upload a portion of it, you had to create a new ArrayBufferView onto the same ArrayBuffer covering just the portion you wanted to upload. Even though an ArrayBufferView on the same ArrayBuffer is relatively cheap, there is still an allocation for the view, which will eventually have to be garbage collected. The new version of gl.bufferData removes that issue: you don't have to create a temporary ArrayBufferView just to upload a portion of an ArrayBuffer.
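For example, here is a minimal sketch (assuming gl is a WebGL2 context; the sizes and offsets are made up for illustration). It contrasts the size-only form, the WebGL1 way of uploading a slice, and the new WebGL2 overload; note that srcOffset and length are measured in elements of the view, not in bytes:
// Allocate a buffer by size only (contents start zeroed), to be filled later:
gl.bufferData(gl.ARRAY_BUFFER, 8 * maxNumVertices, gl.STATIC_DRAW);
// Suppose we only want to upload floats [100, 100 + 256) of a large array.
const data = new Float32Array(100000);
// WebGL1: subarray() copies no data, but it does allocate a new view object
// that will eventually need to be garbage collected.
gl.bufferData(gl.ARRAY_BUFFER, data.subarray(100, 100 + 256), gl.STATIC_DRAW);
// WebGL2: upload the same slice with no temporary view.
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW, 100, 256);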

Related

Rendering an ID3D11Texture2D into a SkImage (Skia)

I'm currently trying to create an interop layer to render my render target texture into a Skia SkImage. This is being done to facilitate rendering from my graphics API into Avalonia.
I've managed to piece together enough code to get everything running without any errors (at least, none that I can see), but when I draw the SkImage I see nothing but a black image.
Of course, these things are easier to describe with code:
private EglPlatformOpenGlInterface _platform;
private AngleWin32EglDisplay _angleDisplay;
private readonly int[] _glTexHandle = new int[1];
IDrawingContextImpl context // <-- From Avalonia
_platform = (EglPlatformOpenGlInterface)platform;
_angleDisplay = (AngleWin32EglDisplay)_platform.Display;
IntPtr d3dDevicePtr = _angleDisplay.GetDirect3DDevice();
// Device5 is from SharpDX.
_d3dDevice = new Device5(d3dDevicePtr);
// Texture.GetSharedHandle() is the shared handle of my render target.
_eglTarget = _d3dDevice.OpenSharedResource<Texture2D>(_target.Texture.GetSharedHandle());
// WrapDirect3D11Texture calls eglCreatePbufferFromClientBuffer.
_glSurface = _angleDisplay.WrapDirect3D11Texture(_platform, _eglTarget.NativePointer);
using (_platform.PrimaryEglContext.MakeCurrent())
{
_platform.PrimaryEglContext.GlInterface.GenTextures(1, _glTexHandle);
}
var fbInfo = new GRGlTextureInfo(GlConsts.GL_TEXTURE_2D, (uint)_glTexHandle[0], GlConsts.GL_RGBA8);
_backendTarget = new GRBackendTexture(_target.Width, _target.Height, false, fbInfo);
using (_platform.PrimaryEglContext.MakeCurrent())
{
// Here's where we bind the gl surface to our texture object, apparently.
_platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, _glTexHandle[0]);
EglBindTexImage(_angleDisplay.Handle, _glSurface.DangerousGetHandle(), EglConsts.EGL_BACK_BUFFER);
_platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, 0);
}
// context is a GRContext
_skiaSurface = SKImage.FromTexture(context, _backendTarget, GRSurfaceOrigin.BottomLeft, SKColorType.Rgba8888, SKAlphaType.Premul);
// This clears my render target (obviously). I should be seeing this when I draw the image right?
_target.Clear(GorgonColor.CornFlowerBlue);
canvas.DrawImage(_skiaSurface, new SKPoint(320, 240));
So, as far as I can tell, this should be working. But as I said before, it's only showing me a black image. It's supposed to be cornflower blue. I've tried calling Flush on the ID3D11DeviceContext, but I'm still getting the black image.
Anyone have any idea what I could be doing wrong?

HoloLens spatial mapping: SpatialSurfaceMesh update problem (C++/WinRT)

I'm working on the spatial mapping processing for my HoloLens project.
Somehow, calling "SpatialSurfaceMesh::TryComputeLatestMeshAsync" keeps returning the same mesh data over time.
Is there another process involved in updating the observer?
void SpatialMapping::AddOrUpdateSurface(winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& coordinateSystem)
{
using namespace winrt::Windows::Perception::Spatial::Surfaces;
SpatialBoundingBox axisAlignedBoundingBox =
{
{ 0.f, 0.f, 0.f },
{ 50.f, 50.f, 50.f },
};
SpatialBoundingVolume bounds = SpatialBoundingVolume::FromBox(coordinateSystem, axisAlignedBoundingBox);
m_surfaceObserver.SetBoundingVolume(bounds);
m_surfaceObserver.ObservedSurfacesChanged(
winrt::Windows::Foundation::TypedEventHandler
<SpatialSurfaceObserver, winrt::Windows::Foundation::IInspectable>
({ this, &SpatialMapping::Observer_ObservedSurfacesChanged })
);
}
void SpatialMapping::Observer_ObservedSurfacesChanged(winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceObserver const& sender
, winrt::Windows::Foundation::IInspectable const& object)
{
{
using namespace winrt::Windows::Perception::Spatial::Surfaces;
const auto mapContainingSurfaceCollection = sender.GetObservedSurfaces();
// Process surface adds and updates.
for (const auto& pair : mapContainingSurfaceCollection)
{
auto id = pair.Key();
auto info = pair.Value();
InsertAsync(id, info);
}
}
}
Concurrency::task<void> SpatialMapping::InsertAsync(winrt::guid /*const&*/ id, winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceInfo /*const&*/ newSurfaceInfo)
{
using namespace winrt::Windows::Perception::Spatial::Surfaces;
return concurrency::create_task([this, id, newSurfaceInfo]
{
const auto surfaceMesh = newSurfaceInfo.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions).get();
std::lock_guard<std::mutex> guard(m_meshCollectionLock);
m_updatedSurfaces.emplace(id, surfaceMesh);
});
}
Generation works; updating does not.
A manual attempt has the same problem:
winrt::Windows::Foundation::IAsyncAction SpatialMapping::CollectSurfacesManuel()
{
const auto mapContainingSurfaceCollection = m_surfaceObserver.GetObservedSurfaces();
for (const auto& pair : mapContainingSurfaceCollection)
{
auto id = pair.Key();
auto info = pair.Value();
auto mesh{ co_await info.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions) };
{
std::lock_guard<std::mutex> guard(m_meshCollectionLock);
m_updatedSurfaces.emplace(id, mesh);
}
}
}
MVCE:
Create a New Project with the template
"Holographic DirectX 11 App (UWP) C++/WinRT)"
Add the files:
https://github.com/lpnxDX/HL_MVCE_SpatialSurfaceMeshUpdateProblem.git
Replace m_main in AppView.h
We did some research and have some thoughts about your question; let me explain the findings.
Is your Observer_ObservedSurfacesChanged method actually being triggered? Adding output statements or breakpoints can help you check. Since the SurfaceObserver should always be available, we usually need to check the availability of the surface observer in each frame and recreate it when necessary; for a sample code snippet, please see here.
Have you set m_surfaceMeshOptions? It is not visible in the code you posted. If it is missing, you can configure it with a statement like the following (C++/WinRT property syntax):
m_surfaceMeshOptions.IncludeVertexNormals(true);
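More fully, here is a minimal sketch of creating and configuring the options before calling TryComputeLatestMeshAsync (the specific formats are illustrative assumptions, not taken from your code):
using namespace winrt::Windows::Perception::Spatial::Surfaces;
using namespace winrt::Windows::Graphics::DirectX;
SpatialSurfaceMeshOptions options;
options.IncludeVertexNormals(true);
// Request explicit, known formats so the processing code can rely on them.
options.VertexPositionFormat(DirectXPixelFormat::R32G32B32A32Float);
options.TriangleIndexFormat(DirectXPixelFormat::R16UInt);
m_surfaceMeshOptions = options;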
Microsoft provides the Holographic spatial mapping sample, which shows how to acquire spatial mapping data from Windows Perception in real time. It is similar to your needs; to narrow down whether this is an issue with your code, please try to run this sample on your device.
If you still can't solve the problem after the above steps, could you provide an MVCE so that we can locate the issue or find a solution? Be careful to remove any privacy-related or other business code.
Your code has the following problems:
TryComputeLatestMeshAsync should be called from Observer_ObservedSurfacesChanged, not from concurrency::create_task.
TryComputeLatestMeshAsync returns a mesh with a matrix, vertices, and indices. The indices should be stored in a safe location on the first run; they don't change later. The vertices and the matrix should be copied each time they are returned. You shouldn't keep the mesh object itself, because its data is updated from various threads.
ObservedSurfacesChanged shouldn't be called every frame; it is a long-running function.
There may be more issues. I would recommend starting from the sample mentioned earlier.
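As an illustration of the copying advice above, here is a minimal C++/WinRT sketch (SurfaceData and CopySurfaceData are hypothetical names, not from your project; error handling is omitted):
#include <cstdint>
#include <vector>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Perception.Spatial.Surfaces.h>
#include <winrt/Windows.Storage.Streams.h>
// Plain data copied out of the mesh; safe to keep and read from any thread.
struct SurfaceData
{
    winrt::Windows::Foundation::Numerics::float4x4 meshToWorld{
        winrt::Windows::Foundation::Numerics::float4x4::identity() };
    std::vector<uint8_t> vertices; // raw bytes, layout given by VertexPositions().Format()
    std::vector<uint8_t> indices;  // copied once; indices don't change later
};
SurfaceData CopySurfaceData(
    winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceMesh const& mesh,
    winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& world)
{
    SurfaceData data;
    // Copy the mesh-to-world transform (left as identity if unavailable).
    if (auto transform = mesh.CoordinateSystem().TryGetTransformTo(world))
    {
        data.meshToWorld = transform.Value();
    }
    // Copy the raw bytes of the vertex and index buffers.
    auto positions = mesh.VertexPositions().Data();
    data.vertices.assign(positions.data(), positions.data() + positions.Length());
    auto triangles = mesh.TriangleIndices().Data();
    data.indices.assign(triangles.data(), triangles.data() + triangles.Length());
    return data;
}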

How to hide code from editor using a projection

I am working on a custom language service. Code files of the language contain both metadata and source code, whereby the metadata is stored as a header (at the beginning of the file; this requirement can't be changed). The language service provides two logical views: one for the code and another one that allows editing the metadata.
Now, the problem is that I want to hide the metadata from the code editor view - and I thought implementing it using projection is the way to go, but the Visual Studio SDK contains neither examples nor a good explanation of how to implement such functionality. Just a short explanation of the concept is given at: Inside the Editor, Projection.
I also found a language service project at CodePlex: Python Tools for Visual Studio, which has Django support (in this project, projection buffers are used to mix HTML markup and Python code), but this seems to be a different scenario...
This is what I tried...
Within my editor factory, I create an IVsTextLines instance that is used by the code editor view using the following code...
private IVsTextLines GetTextBuffer(IntPtr docDataExisting)
{
if (docDataExisting == IntPtr.Zero)
{
Type t = typeof(IVsTextLines);
Guid clsId = typeof(VsTextBufferClass).GUID;
Guid interfaceId = t.GUID;
IVsTextLines bufferInstance;
if ((bufferInstance = this.Package.CreateInstance(ref clsId, ref interfaceId, t) as IVsTextLines) != null)
{
object oleServiceProvider = this.GetService(typeof(IServiceProvider));
var withSite = (IObjectWithSite)bufferInstance;
withSite.SetSite(oleServiceProvider);
return bufferInstance;
}
}
return null;
}
What I've seen so far is that the IVsTextLines interface is not compatible with IProjectionBuffer, ITextBuffer, and so on. I learned from the Python Tools project that I can use an IVsEditorAdaptersFactoryService to get an ITextBuffer instance from the IVsTextLines object; the adapter factory service (as well as some other required service instances) gets injected into my editor factory via MEF...
IVsTextBuffer textBuffer = this.GetTextBuffer(...);
ITextBuffer documentBuffer = this.editorAdaptersFactoryService.GetDocumentBuffer(textBuffer);
The ITextBuffer can be used as a source buffer for projections... My idea is to create an elision buffer with two spans, one for the metadata and one for the code, whereby the metadata span will be elided, which makes the metadata disappear, and to use this elision buffer as the projection. This is what I tried...
private ITextBuffer CreateBuffer(
IProjectionEditResolver editResolver,
ITextBuffer documentBuffer)
{
IContentType contentType = this.ContentTypeRegistryService.GetContentType("...");
ITextSnapshot currentSnapshot = documentBuffer.CurrentSnapshot;
string text = currentSnapshot.GetText(0, currentSnapshot.Length);
TextRange textRange = Parser.GetMetadataTextRange(text); // parse the code; returns a text range representing the metadata to elide
ITrackingSpan elideSpan = currentSnapshot.CreateTrackingSpan(textRange.Start, textRange.Length, SpanTrackingMode.EdgeExclusive);
int offset = textRange.Start + textRange.Length;
ITrackingSpan expandSpan = currentSnapshot.CreateTrackingSpan(offset, currentSnapshot.Length - offset, SpanTrackingMode.EdgeInclusive);
SnapshotSpan elide = elideSpan.GetSpan(currentSnapshot);
SnapshotSpan expand = expandSpan.GetSpan(currentSnapshot);
var snapshotSpans = new List<SnapshotSpan>
{
elide,
expand
};
var spanCollection = new NormalizedSnapshotSpanCollection(snapshotSpans);
const ElisionBufferOptions BufferOptions = ElisionBufferOptions.FillInMappingMode;
IElisionBuffer buffer = this.ProjectionBufferFactoryService.CreateElisionBuffer(editResolver, spanCollection, BufferOptions, contentType);
buffer.ModifySpans(new NormalizedSpanCollection(elide), new NormalizedSpanCollection(expand));
return buffer;
}
Once the elision buffer is created, I use the SetDataBuffer method to connect it with the editor...
ITextBuffer buffer = this.CreateBuffer(..., documentBuffer);
this.editorAdaptersFactory.SetDataBuffer(textBuffer, buffer);
This works somehow; at least the metadata disappears from the code editor, but as soon as I try to edit the code, I get an InvalidOperationException, which is not the case if I create the elision buffer with just a single span, or if I don't hide the elideSpan.
The following error is reported to the ActivityLog:
System.InvalidOperationException: Shim buffer length mismatch. Document=11594 Surface=11577 Difference at 0
   at Microsoft.VisualStudio.Editor.Implementation.VsTextBufferAdapter.OnTextBufferChanged(Object sender, TextContentChangedEventArgs e)
   at Microsoft.VisualStudio.Text.Utilities.GuardedOperations.RaiseEvent[TArgs](Object sender, EventHandler`1 eventHandlers, TArgs args)
I think I am quite close to the final solution, so what am I doing wrong?

Parallel foreach ConcurrentDictionary<string, string> add

I have entries like in a phone book: name + address.
The source is on a web site, the count is over 1K records.
Question is:
How do I use/implement a ConcurrentDictionary with Parallel.ForEach?
I might as well ask which will perform better:
ConcurrentDictionary & Parallel.ForEach
vs.
Dictionary & foreach
The name, being the key, is not allowed to have duplicates, and I think I understood correctly that ConcurrentDictionary has its own built-in method (TryAdd) that adds only if the key does not already exist.
So the issue of disallowing duplicate keys is already taken care of, and from that point I could see the balance turning towards ConcurrentDictionary rather than a standard sequential Dictionary.
So how do I take the name & address entries from any given data source and load them via Parallel.ForEach into a ConcurrentDictionary?
the count is over 1K records.
How much over 1K? Because 1K records would be added in the blink of an eye, without any need for parallelization.
Additionally, if you're fetching the data over the network, that cost will vastly dwarf the cost of adding to a dictionary. So unless you can parallelize fetching the data, there's going to be no point in making the code more complicated to add the data to the dictionary in parallel.
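To illustrate that point, here is a minimal sketch (the URL list and the ParseEntries helper are hypothetical, not from the question): parallelize the network fetches, then add the ~1K entries sequentially, where a plain Dictionary is more than fast enough.
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class PhoneBookLoader
{
    static readonly HttpClient http = new HttpClient();

    public static async Task<Dictionary<string, string>> LoadAsync(IEnumerable<string> urls)
    {
        // The network I/O dominates, so this is the part worth parallelizing.
        string[] pages = await Task.WhenAll(urls.Select(u => http.GetStringAsync(u)));

        // Adding ~1K entries takes microseconds; no parallelism needed here.
        var book = new Dictionary<string, string>();
        foreach (string page in pages)
        {
            foreach (var (name, address) in ParseEntries(page))
            {
                if (!book.ContainsKey(name)) // skip duplicate names
                {
                    book.Add(name, address);
                }
            }
        }
        return book;
    }

    // Hypothetical parser: one "name;address" entry per line.
    static IEnumerable<(string, string)> ParseEntries(string page)
    {
        foreach (string line in page.Split('\n'))
        {
            string[] parts = line.Split(';');
            if (parts.Length == 2)
                yield return (parts[0].Trim(), parts[1].Trim());
        }
    }
}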
This is quite an old question, but this might help someone:
If you are trying to chunk through the ConcurrentDictionary and do some processing:
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Collections.Concurrent;
namespace ConcurrenyTests
{
public class ConcurrentExample
{
ConcurrentExample()
{
ConcurrentDictionary<string, string> ConcurrentPairs = new ConcurrentDictionary<string, string>();
Parallel.ForEach(ConcurrentPairs, (KeyValuePair<string, string> pair) =>
{
// Do Stuff with
string key = pair.Key;
string value = pair.Value;
});
}
}
}
I don't think you would be able to use Parallel.ForEach to insert into a new dictionary unless you already had a collection of the same length that you were iterating over, e.g. a list with the URLs of text documents you wanted to download and insert into the dictionary. If that were the case, then you could use something along the lines of:
using System.Threading.Tasks;
using System.Collections.Concurrent;
namespace ConcurrenyTests
{
public class ConcurrentExample
{
ConcurrentExample()
{
ConcurrentDictionary<string, string> ConcurrentPairs = new ConcurrentDictionary<string, string>();
ConcurrentBag<string> WebAddresses = new ConcurrentBag<string>();
Parallel.ForEach(WebAddresses, new ParallelOptions { MaxDegreeOfParallelism = 4 }, (string webAddress) =>
{
// Fetch the content from webAddress (placeholder; e.g. download via HttpClient)
string webText = string.Empty;
// Try Add
ConcurrentPairs.TryAdd(webAddress, webText);
// AddOrUpdate
ConcurrentPairs.AddOrUpdate(webAddress, webText, (string key, string oldValue) => webText);
});
}
}
}
If fetching from a web server, you may want to increase or decrease MaxDegreeOfParallelism so that your bandwidth is not choked.
Parallel.ForEach: https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.parallel.foreach?view=netcore-2.2
ParallelOptions: https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.paralleloptions?view=netcore-2.2

How to get a DirectShow filter?

I am recording video from a webcam using DirectshowLib2005.dll in C#.NET. I have this code to start video recording, as below:
try
{
IBaseFilter capFilter = null;
IBaseFilter asfWriter = null;
IFileSinkFilter pTmpSink = null;
ICaptureGraphBuilder2 captureGraph = null;
GetVideoDevice();
if (availableVideoInputDevices.Count > 0)
{
//
//init capture graph
//
graphBuilder = (IFilterGraph2)new FilterGraph();
captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
//
//set the filter graph on the capture graph builder
//
captureGraph.SetFiltergraph(graphBuilder);
//
//select which device the graph will use
//
graphBuilder.AddSourceFilterForMoniker(AvailableVideoInputDevices.First().Mon, null, AvailableVideoInputDevices.First().Name, out capFilter);
captureDeviceName = AvailableVideoInputDevices.First().Name;
//
//check whether the saving path exists; if not, create it
//
if (!Directory.Exists(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\"))
{
Directory.CreateDirectory(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\");
}
#region WMV
//
//set the output file name and file type
//
captureGraph.SetOutputFileName(MediaSubType.Asf, ConstantHelper.RootDirectoryName + "\\Assets\\Video\\" + videoFilename + ".wmv", out asfWriter, out pTmpSink);
//
//configure which video setting is used by graph
//
IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
#endregion
//
//render the stream to the output file using the graph settings
//
captureGraph.RenderStream(null, null, capFilter, null, asfWriter);
m_mediaCtrl = graphBuilder as IMediaControl;
m_mediaCtrl.Run();
isVideoRecordingStarted = true;
VideoStarted(m_mediaCtrl, null);
}
else
{
isVideoRecordingStarted = false;
}
}
catch (Exception Ex)
{
ErrorLogging.WriteErrorLog(Ex);
}
If you observe these lines of code:
//
//configure which video setting is used by graph
//
IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
it will apply the video settings described by that GUID. I got this GUID from the file located at "C:\windows\WMSysPr9.prx".
So my question is: how do I create my own video settings, with format, resolution, and so on?
And how do I record video from the webcam in black and white mode or in grayscale?
How do I create my own video settings, with format, resolution, and so on?
GUID-based profiles are deprecated, though you can still use them. You can build a custom profile in code using WMCreateProfileManager and friends (you start with an empty profile and add video and/or audio streams at your discretion). This is a C++ API, and I suppose that WindowsMedia.NET, a sister project to the DirectShowLib you are already using, provides an interface to it from .NET code.
The Windows SDK WMGenProfile sample both shows how to build a profile manually and provides a tool to build one interactively and save it into a .PRX file you can use in your application.
$(WindowsSDK)\Samples\multimedia\windowsmediaformat\wmgenprofile
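For a sense of the C++ API, here is a minimal sketch of building a profile in code (the bitrate and stream details are illustrative assumptions; a real profile also needs the stream's media type, resolution, and codec configured before AddStream):
#include <windows.h>
#include <wmsdk.h>
#pragma comment(lib, "wmvcore.lib")

HRESULT BuildCustomProfile(IWMProfile** ppProfile)
{
    IWMProfileManager* pManager = nullptr;
    HRESULT hr = WMCreateProfileManager(&pManager);
    if (FAILED(hr))
        return hr;
    // Start with an empty profile...
    hr = pManager->CreateEmptyProfile(WMT_VER_9_0, ppProfile);
    if (SUCCEEDED(hr))
    {
        // ...then add a video stream at your discretion.
        IWMStreamConfig* pStream = nullptr;
        hr = (*ppProfile)->CreateNewStream(WMMEDIATYPE_Video, &pStream);
        if (SUCCEEDED(hr))
        {
            pStream->SetBitrate(500000); // e.g. 500 kbps; adjust to taste
            // Real code: also set the stream's WM_MEDIA_TYPE (frame size,
            // codec, etc.) through IWMMediaProps before adding the stream.
            hr = (*ppProfile)->AddStream(pStream);
            pStream->Release();
        }
    }
    pManager->Release();
    return hr;
}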
How do I record video from the webcam in black and white mode or in grayscale?
The camera gives you a picture, which then goes through the pipeline, with certain processing, up to the recording. The ability to make it greyscale is not something inherent.
There are two things you might want to think of. First of all, if the camera is capable of stripping the color information on capture, you can leverage this. Check it out: if its settings have a Saturation slider, then you just put it in the minimal value position and the camera gives you greyscale.
In code, you use the IAMVideoProcAmp interface for this.
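A minimal sketch with DirectShowLib (capFilter is the capture source filter from your graph code; whether the device actually supports saturation control is something to check at run time):
using DirectShowLib;

static void SetMinimumSaturation(IBaseFilter capFilter)
{
    // Many capture filters expose IAMVideoProcAmp directly.
    var procAmp = capFilter as IAMVideoProcAmp;
    if (procAmp == null)
        return; // this device has no video proc-amp support

    int min, max, step, defaultValue;
    VideoProcAmpFlags flags;
    int hr = procAmp.GetRange(VideoProcAmpProperty.Saturation,
        out min, out max, out step, out defaultValue, out flags);
    if (hr == 0)
    {
        // Minimum saturation strips the color at the source: greyscale video.
        procAmp.Set(VideoProcAmpProperty.Saturation, min, VideoProcAmpFlags.Manual);
    }
}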
Another option, in particular if the camera is missing the mentioned capability, is to apply a post-processing filter or effect that converts to greyscale. There is no stock solution for this, but there are several ways to achieve the effect:
use a third-party filter that strips color
export frames from the DirectShow pipeline and convert the data in code using the Color Control Transform DSP (available starting with Windows Vista) or GDI functions
use a Sample Grabber in the streaming pipeline and update the image bits directly
