I'm working on a task debugging the display plane configuration. In the code, I came across display planes, sprite planes, and overlays. As I understand it, the overlay holds the video data (for example), and the sprite is the hardware plane that displays that data (the video) at run time. But sometimes, instead of sprite/sprite planes, the terms overlay/overlay planes are simply used. This is where my confusion lies: which term is used when? I would appreciate clarification.
1. The problem I've encountered
Hi, I'm currently building a desktop application with Electron.js. I need a feature for taking a screenshot (including the mouse cursor), but this is a problem for me because I do not know how to do it.
I think the reason I cannot solve this problem is that I have no knowledge of operating systems. I believe "taking a screenshot" means "getting the image data displayed on the computer monitor", but how can I access that?
2. What I've tried or considered
At first I tried Electron.BrowserWindow.capturePage(), but its result didn't meet my needs, for two reasons: 1) my application has a transparent background, and any transparent area becomes black when I take a screenshot; 2) the mouse cursor is not captured.
Meanwhile, I am aware of the existence of some APIs such as the Screen Capture API and the Media Capture and Streams API (in web browsers), and perhaps I can give them a try, because I'm using Electron.js, Electron.js uses the Chromium web browser, and web browsers implement those APIs.
However, there is still the problem that what those APIs handle is media streams (i.e. video), which is not suitable for my case. Of course, I think it is possible to take just one frame out of a media stream somehow, but that seems like overkill, given that all I want is a single screenshot.
Meanwhile, because Electron.js also uses Node.js, I think it is also possible to call the Windows API (maybe via a Foreign Function Interface?) or to invoke child_process.exec() in order to take a screenshot.
3. The question I would like to ask
How can I access the monitor image data, so that I can implement the screenshot feature that meets my requirements (transparency and the mouse cursor), with as few third-party libraries as possible?
What computes the final image data that is going to be displayed on my monitor? It seems to be the job of my graphics card, because my monitor and graphics card are connected to each other with a cable.
4. Miscellaneous curiosities (not much related to the question)
...Yet another curiosity is how, why, and where the transparent area ends up being processed as the color #000000.
Meanwhile, it is also interesting that there are some programs that do not allow me to take a screenshot of the content they display: the area where those programs are located appears black. How could the developers of such programs implement this?
Thank you for reading my question.
After some internet searching, I found it difficult to access and get display data (specifically, video RAM data from my graphics card). So I decided to use a workaround; as the well-known aphorism goes, 'all roads lead to Rome'.
Which means:
- See-through screenshots can be achieved either by using the native screenshot feature (the PrintScreen key) or by using a script that takes a picture of the entire screen.
- Screenshots with the mouse cursor can be achieved by adding (overlaying) a mouse cursor image at the coordinates where the mouse cursor is located.
However, in my case I do not actually need to save screenshots as files, so I think it is enough to just draw a custom mouse cursor image, hide the original mouse cursor, make the custom image follow the mouse, and take a screenshot with a manual key press. (I think it is also feasible to take a screenshot with the PrintScreen key, get the screenshot data from the clipboard, and do some image processing, such as adding an effect for the mouse cursor.)
※ I saw code that simulates a key press (SendKey()) in order to take a screenshot, and I think this is a good approach because no manual key press is needed.
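For those who go the native route, here is a minimal C++ sketch of the GDI approach referenced in the links below (GetDC()/BitBlt() with the CAPTUREBLT flag), with the cursor drawn on top via GetCursorInfo()/DrawIconEx(). It is only a sketch: the function name is mine, it captures the primary monitor only, and saving the bitmap to a file or the clipboard is left out.

```cpp
// Minimal sketch: capture the primary screen with GDI and draw the cursor on top.
// Assumes Windows; link with gdi32.lib and user32.lib.
#include <windows.h>

HBITMAP CaptureScreenWithCursor()
{
    const int width  = GetSystemMetrics(SM_CXSCREEN);
    const int height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(nullptr);                    // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);      // in-memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // CAPTUREBLT also grabs layered (transparent / alpha-blended) windows.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

    // Overlay the current cursor at its hotspot-corrected position.
    CURSORINFO ci = { sizeof(ci) };
    if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING)) {
        ICONINFO ii = {};
        if (GetIconInfo(ci.hCursor, &ii)) {
            DrawIconEx(memDC,
                       ci.ptScreenPos.x - (int)ii.xHotspot,
                       ci.ptScreenPos.y - (int)ii.yHotspot,
                       ci.hCursor, 0, 0, 0, nullptr, DI_NORMAL);
            if (ii.hbmMask)  DeleteObject(ii.hbmMask);   // GetIconInfo hands us bitmaps we must free
            if (ii.hbmColor) DeleteObject(ii.hbmColor);
        }
    }

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return bmp;   // caller owns the bitmap (DeleteObject when done)
}
```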
I think those interested in this topic may find the following links helpful (the order does not represent importance):
Keywords mentioned: GetDC(), BitBlt(), CAPTUREBLT flag, GDI
What is the best way to take screenshots of a Window with C++ in Windows?
How can I take a screenshot in a windows application?
Keywords mentioned: DirectX, buffer
Fastest method of screen capturing on Windows
How to save backbuffer to file in DirectX 10?
Keywords mentioned: mouse cursor, cursor image, hot spot
Capture screen shot with mouse cursor
C# - Capturing the Mouse cursor image
Python - Take screenshot including mouse cursor
Keywords mentioned: PowerShell, CopyFromScreen()
How can I do a screen capture in Windows PowerShell?
Capture screenshot of active window?
Q&A about accessing video memory
DRM: Access the whole video memory
raw video memory, video driver: Access the whole video memory through OpenGL programming
graphics RAM: API to get the graphics or video memory
direct data write to video memory
Direct video buffer access
How to write data directly into video memory?
Is direct video card access possible? (No API)
I have used pbrt to render my scene. I specified the viewing angle in the scene file, and on rendering it with pbrt I see the image from that specific viewing angle. I want to know whether there is a way to rotate the scene rendered by pbrt with my mouse in real time.
No.
To see whether it is even possible, render a scene and time how long it takes. To make it real-time you will need pbrt to render at least a few frames a second, preferably 60!
I don't think this is going to happen in 2016.
Alternatively, you will need something like an OpenGL representation of the scene to perform the real-time interaction, and the rendered scene can then only be displayed over the top (once the rendering has finished). The frustums need to match for this to work, otherwise what the user interacts with will not be the same as what they see rendered.
If you're editing the scene file, it sounds like you're not in coding land, so the only possibility is to write a program that can display the scene (in GL), update the scene file information to match the current camera, and render using pbrt. That is all going to take a long time (pbrt needs to parse the file each time and re-buffer all the geometry), since supplying the file means pbrt won't keep anything from the previous state and so will have to construct acceleration structures etc. as well as render the scene. Each frame!
Even in code, pbrt is not going to give you great performance. It is not designed for that; it is meant to be a physically based path tracer (as the name suggests). To get anything remotely near real time, you'll need some serious acceleration structures and better command of the light transport model you are using. If you really are interested, you will probably need to write your own renderer. Look into Metropolis Light Transport (MLT) and Vertex Connection and Merging (VCM), which are much more refined/efficient Monte Carlo methods.
Plus some pretty decent hardware with lots of cores, or a decent graphics card if you wish to employ SIMD through CUDA or equivalent.
[EDIT] Also note that the pbrt renderer is based on the book "Physically Based Rendering: From Theory to Implementation" (ISBN-13: 978-0123750792), which outlines how to implement your own version of pbrt.
I'm working on a little GIS app using OSG, but I'm quite a newbie with it.
As the view does not change much, I'm not struggling to keep a decent frame rate.
I have to draw multiple layers on the same view. Layers may overlap, but not always.
Right now, to be able to choose which layer is on top of the others, I'm using the PolygonOffset attribute, but I don't like it.
Here is what I want to try:
- put a clear node at my root to clear all the buffers
- put a clear node on top of each layer node to clear only the depth buffer
- find a way to force OSG to draw those layers in a specific order
So my questions are:
- Is it possible?
- How can I choose the rendering order of my layer nodes?
You can clear buffers with cameras. E.g. if your main camera draws everything and you want to clear only one buffer, you can add a second camera that does nothing but share the same render-target implementation, attach the depth buffer to it, give it GL_DEPTH_BUFFER_BIT as its clear mask, and let it render after your main camera.
For cameras you can choose a rendering order via setRenderOrder, and for nodes you can work with render bins (setRenderBinDetails on their StateSet).
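As a rough, untested sketch of those two mechanisms (the node names layer1/layer2 and the bin numbers are placeholders, not anything prescribed by OSG):

```cpp
// Sketch: control OSG draw order with render bins, and clear the depth buffer
// before drawing the top layer via a nested camera.
#include <osg/Camera>
#include <osg/Group>
#include <osg/StateSet>

void setupLayers(osg::Group* root, osg::Node* layer1, osg::Node* layer2)
{
    // Option A: render bins -- lower bin numbers are drawn first.
    layer1->getOrCreateStateSet()->setRenderBinDetails(1, "RenderBin");
    layer2->getOrCreateStateSet()->setRenderBinDetails(2, "RenderBin");
    root->addChild(layer1);

    // Option B: a nested camera that clears only the depth buffer before
    // drawing the second layer, so it always appears on top.
    osg::ref_ptr<osg::Camera> clearCam = new osg::Camera;
    clearCam->setClearMask(GL_DEPTH_BUFFER_BIT);
    clearCam->setRenderOrder(osg::Camera::NESTED_RENDER, 1); // after the main pass
    clearCam->addChild(layer2);
    root->addChild(clearCam);
}
```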
If you use the multiple-camera solution, though, you end up with multipass rendering, which may get costly, since in your case you would probably draw each layer with its own camera.
On a side note, what you want is to avoid z-fighting, and there are several techniques for doing so. Maybe with this keyword you can find an answer.
I was wondering if there is an existing API for tracking the tops of people's heads with the Kinect, e.g. when the Kinect is facing downwards from a ceiling.
If not, how might I implement such a thing with its depth data?
No. The Kinect expects to be facing a standing (or seated, given the appropriate flag) human. All APIs (official or third-party) that have a notion of skeleton tracking expect this.
If you wish to track someone from above, you will need to use a library such as OpenCV (or Emgu CV, for C# development). Well, you don't have to, but they offer utilities that help with computer vision and image processing. These libraries don't care whether you are using a Kinect or just a regular RGB camera.
Using the Kinect from above, you could use the depth data to help locate and track blobs. With the Kinect at a known distance from the floor, have a few people walk under it and see what z-coordinates you get; you can then assume that anything within a certain z-coordinate range is a person walking by (vs. a cat, or something else).
You will need to use standard image-processing techniques (see the OpenCV reference above) to initially find the blobs within the image. Once found, the depth data from the Kinect might be useful, but I think you'll find it isn't ultimately necessary if you're just watching people walk across the floor.
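As a small illustration of that thresholding idea (a sketch only: the sensor height, the depth band for a person, and the blob-size cutoff are assumptions you would tune for your own setup):

```cpp
// Sketch: find person-sized blobs in a top-down Kinect depth frame with OpenCV.
// Assumes the depth image is a 16-bit single-channel Mat with values in millimetres.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findPeople(const cv::Mat& depthMM, int sensorHeightMM = 3000)
{
    // Anything 1.4 m .. 2.0 m below the sensor is treated as a head/torso.
    const int nearMM = sensorHeightMM - 2000;   // tallest expected person
    const int farMM  = sensorHeightMM - 1400;   // shortest expected person

    cv::Mat mask;
    cv::inRange(depthMM, cv::Scalar(nearMM), cv::Scalar(farMM), mask); // candidate pixels
    cv::medianBlur(mask, mask, 5);                                     // knock out speckle noise

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> people;
    for (const auto& c : contours) {
        if (cv::contourArea(c) > 2000)          // ignore cat-sized blobs
            people.push_back(cv::boundingRect(c));
    }
    return people;
}
```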
We built a Kinect-driven experience where the sensors had to point downward to detect users walking along a wall. We used openTSPS to do all the work of taking the camera input and doing blob detection and handing off tracked "persons" to (in our case) a Processing app. It works really well for us.
http://opentsps.com/
Given two separate computers, how could one ensure that colours are being displayed roughly the same on each screen?
For example, one screen might have 50% more brightness than another, so colours appear duller on one screen. One artist on one computer might be seeing the picture differently from another; it's important that they are seeing the same levels.
Is there some sort of calibration technique you can do via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it's only for setting the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, but you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors of the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was originally seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (depending on which colors the destination display can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
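As a small sketch of what that conversion can look like in code, assuming the LittleCMS (lcms2) library is available and the two displays' ICC profiles are on disk (the file names and the sample colour are placeholders):

```cpp
// Sketch: convert an 8-bit RGB colour from one display's ICC profile to another's
// using LittleCMS (lcms2). Profile file names are placeholders.
#include <lcms2.h>
#include <cstdio>

int main()
{
    cmsHPROFILE src = cmsOpenProfileFromFile("display1.icc", "r");
    cmsHPROFILE dst = cmsOpenProfileFromFile("display2.icc", "r");
    if (!src || !dst) return 1;

    // Build a transform that maps display-1 RGB to the closest display-2 RGB.
    cmsHTRANSFORM xform = cmsCreateTransform(src, TYPE_RGB_8,
                                             dst, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);
    if (!xform) return 1;

    unsigned char in[3]  = { 200, 120, 40 };   // colour as seen on display 1
    unsigned char out[3] = { 0, 0, 0 };
    cmsDoTransform(xform, in, out, 1);          // 1 pixel

    std::printf("(%d,%d,%d) -> (%d,%d,%d)\n", in[0], in[1], in[2], out[0], out[1], out[2]);

    cmsDeleteTransform(xform);
    cmsCloseProfile(src);
    cmsCloseProfile(dst);
    return 0;
}
```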
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software: for 'roughly the same', visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
IF you have complete control over the monitor, video card, calibration hardware/software, and lighting used then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels with a host of different capabilities. Brightness is just one factor (albeit a big one). Another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment the monitor is in. Even assuming the same brand of monitor and the same calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent lamp placed next to the monitor itself. At one place I worked, we had to shut off all the overhead lights and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used, and that each user has a different brand and model of monitor.
You also have no control over operating-system color profiles.
An extravagant solution would be to display a test picture or pattern and ask your users to take a picture of it using their mobile phone or webcam.
Download the picture to the computer, and check whether its levels are valid or too far out of range.
This will also ensure the ambient light at the office is appropriate.
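A rough sketch of such a level check, assuming the photo shows a neutral mid-gray test pattern and using OpenCV (my choice, not something prescribed above); the thresholds are arbitrary placeholders you would tune against a known-good reference photo:

```cpp
// Sketch: sanity-check a photographed gray test pattern. If the average brightness
// or the colour cast is far off, the screen (or room lighting) probably needs attention.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s photo.jpg\n", argv[0]); return 1; }

    cv::Mat photo = cv::imread(argv[1], cv::IMREAD_COLOR);
    if (photo.empty()) return 1;

    cv::Scalar mean = cv::mean(photo);          // average B, G, R over the whole photo
    double brightness = (mean[0] + mean[1] + mean[2]) / 3.0;
    double cast = std::max({mean[0], mean[1], mean[2]}) -
                  std::min({mean[0], mean[1], mean[2]});

    // Placeholder thresholds for a mid-gray target; tune for your own pattern.
    if (brightness < 90 || brightness > 170)
        std::printf("Brightness looks off (%.0f)\n", brightness);
    if (cast > 25)
        std::printf("Strong colour cast (%.0f difference between channels)\n", cast);
    return 0;
}
```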