How to access the Camera itself in Flutter/Dart?

I am creating an application that lets you take pictures. I wonder how I can access the camera itself. To take the photos I use the image_picker package, but with that I can only take the photos; I have no influence over the camera itself. For example, I would like to adjust the size of the camera preview, turn the flash on automatically, and show some sort of guideline indicating where the object to be photographed should be placed.
Can someone tell me how I could make those adjustments in the camera itself, instead of just taking a picture with it?

Use the camera plugin; it provides even more features, such as taking snapshots, recording video, and accessing the image data within the app itself.


How can I access the monitor image data?

1. The problem I've encountered
Hi, I'm currently making a desktop application with Electron.js. I need a feature that takes a screenshot (including the mouse cursor), but this is a problem for me because I do not know how to do it.
I think the reason I'm unable to solve this problem is that I have no knowledge of operating systems. I take "taking a screenshot" to mean "getting the image data displayed on the computer monitor", but how can I access that?
2. What I've tried or considered
At first I tried Electron.BrowserWindow.capturePage(), but its result didn't meet my needs, for two reasons: 1) my application has a transparent background, and any transparent area becomes black in the screenshot; 2) the mouse cursor is not captured.
Meanwhile, I am aware of APIs such as the Screen Capture API and the Media Capture and Streams API (in web browsers), and perhaps I could give them a try, since Electron.js uses the Chromium web browser and web browsers implement those APIs.
However, those APIs handle media streams (i.e. video), which is not suitable for my case. It is probably possible to extract a single frame from a media stream somehow, but that seems like overkill given that all I want is a single screenshot.
Meanwhile, because Electron.js also uses Node.js, it should also be possible to call the Windows API (perhaps via a Foreign Function Interface) or to invoke child_process.exec() in order to take a screenshot.
3. The question I would like to ask
How can I access the monitor image data, so that I can implement a screenshot feature that meets my requirements (see-through background and mouse cursor), with as few third-party libraries as possible?
What computes the final image data that is displayed on my computer monitor? It seems to be the work of my graphics card, since my monitor and graphics card are connected by a cable.
4. Miscellaneous curiosities (not much related to the question)
...It is another curiosity how, why, and where the transparent area ends up being rendered as the color #000000.
Meanwhile, it is also interesting that some programs do not allow their contents to be captured in a screenshot; the area where those programs are located appears black. How do the developers of such programs implement this?
Thank you for reading my question.
After some internet searching, I found it difficult to access display data directly (specifically, video RAM data from my graphics card), so I decided to use a workaround; as the well-known aphorism goes, 'all roads lead to Rome'.
Which means,
See-through screenshots can be achieved either by using the native screenshot feature (the PrintScreen key) or by using a script that captures the entire screen.
Screenshots with the mouse cursor can be achieved by overlaying a mouse cursor image at the coordinates where the cursor is located.
However, in my case I do not actually need to save screenshots as files, so I think it is enough to draw a custom cursor image, hide the original cursor, make the custom image follow the mouse, and take a screenshot with a manual key press. (It also seems feasible to take a screenshot with the PrintScreen key, get the screenshot data from the clipboard, and do some image processing, such as adding a cursor overlay.)
※ I saw code that simulates a key press (SendKey()) in order to take a screenshot; I think this is a good approach because no manual key press is needed.
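For reference, here is a minimal Win32/GDI sketch of that native capture path (GetDC() plus BitBlt() with the CAPTUREBLT flag, then a cursor overlay via DrawIconEx); it is purely illustrative, with most error handling omitted:

    // Sketch: capture the primary screen, including layered windows, then
    // overlay the current mouse cursor. Error handling is mostly omitted.
    #include <windows.h>

    HBITMAP CaptureScreenWithCursor()
    {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC screenDC = GetDC(nullptr);                 // DC for the whole screen
        HDC memDC    = CreateCompatibleDC(screenDC);
        HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
        HGDIOBJ old  = SelectObject(memDC, bmp);

        // CAPTUREBLT also copies layered (transparent) windows that a plain
        // SRCCOPY would skip.
        BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

        // Overlay the cursor at its current position, honoring its hot spot.
        CURSORINFO ci = { sizeof(ci) };
        if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING)) {
            ICONINFO ii;
            if (GetIconInfo(ci.hCursor, &ii)) {
                DrawIconEx(memDC,
                           ci.ptScreenPos.x - (int)ii.xHotspot,
                           ci.ptScreenPos.y - (int)ii.yHotspot,
                           ci.hCursor, 0, 0, 0, nullptr, DI_NORMAL);
                DeleteObject(ii.hbmMask);              // GetIconInfo allocates these
                if (ii.hbmColor) DeleteObject(ii.hbmColor);
            }
        }

        SelectObject(memDC, old);
        DeleteDC(memDC);
        ReleaseDC(nullptr, screenDC);
        return bmp;                                    // caller owns the bitmap
    }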
Those interested in this topic may find the following links helpful (the order does not indicate importance):
Keywords mentioned: GetDC(), BitBlt(), CAPTUREBLT flag, GDI
What is the best way to take screenshots of a Window with C++ in Windows?
How can I take a screenshot in a windows application?
Keywords mentioned: DirectX, buffer
Fastest method of screen capturing on Windows
How to save backbuffer to file in DirectX 10?
Keywords mentioned: mouse cursor, cursor image, hot spot
Capture screen shot with mouse cursor
C# - Capturing the Mouse cursor image
Python - Take screenshot including mouse cursor
Keywords mentioned: PowerShell, CopyFromScreen()
How can I do a screen capture in Windows PowerShell?
Capture screenshot of active window?
Q/A about accessing video memory
Keywords mentioned: DRM, raw video memory, video driver, graphics RAM
Access the whole video memory
Access the whole video memory through OpenGL programming
API to get the graphics or video memory
Direct data write to video memory
Direct video buffer access
How to write data directly into video memory?
Is direct video card access possible? (No API)

Extent of support for using Vulkan Swapchain Images as Transfer Destination

In my Vulkan backend I currently check the supported usage flags for the swapchain and then either use copy commands or a fall-back render pass to draw to the back buffer from an intermediate render target. I wanted to know whether this check is required, or whether it is safe to assume that swapchain images allow usage as a transfer destination on typical desktop hardware.
Also, if anyone knows of Vulkan implementations that do not allow copying to swapchain images, I'd appreciate it if you could share. This is mostly for the sake of curiosity rather than solving a problem.
You can look at the Vulkan Hardware Database.
I couldn't find anything that summarizes the data, but if you click on a device in the list, open its Surface tab, and then the Surface properties tab, you can see supportedUsageFlags in the table and look for TRANSFER_DST_BIT.
I only looked at a few and they all had TRANSFER_DST_BIT present. I believe the database and code for the viewer are open source, so perhaps you can find a better way to mine the particular information you're after.
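In code, the check described in the question boils down to a single query. A minimal sketch, assuming physicalDevice and surface were created elsewhere; note that the spec only guarantees COLOR_ATTACHMENT usage for swapchain images, so TRANSFER_DST has to be tested explicitly:

    // Sketch: ask the surface which usage flags its swapchain images support.
    #include <vulkan/vulkan.h>

    bool SwapchainSupportsTransferDst(VkPhysicalDevice physicalDevice,
                                      VkSurfaceKHR surface)
    {
        VkSurfaceCapabilitiesKHR caps{};
        if (vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface,
                                                      &caps) != VK_SUCCESS)
            return false;

        // supportedUsageFlags lists every VkImageUsageFlagBits the swapchain
        // images may be created with.
        return (caps.supportedUsageFlags & VK_IMAGE_USAGE_TRANSFER_DST_BIT) != 0;
    }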

Controlling depth buffer and drawing order in OpenSceneGraph

I'm working on a little GIS app using OSG, but I'm quite a newbie with it.
As the view does not change much, I'm not struggling to keep a decent frame rate.
I have to draw multiple layers in the same view. Layers may overlap, but not always.
Right now, to choose which layer is on top of the others, I'm using the PolygonOffset attribute, but I don't like that approach.
Here is what I want to try :
-put a clear node on my root to clear all the buffers
-put a clear node on top of each layer node to clear only the depth buffer
-find a way to force OSG to draw those layers in a specific order
So my questions are:
- Is it possible?
- How can I choose the rendering order of my layer nodes?
You can clear buffers with cameras. E.g. if your main camera draws everything and you only want to clear one buffer, you can add a second camera that does nothing except clear: give it the same renderTargetImplementation, attach the depth buffer to it, set GL_DEPTH_BUFFER_BIT as its clear mask, and let it render after your main camera.
For cameras you can choose a rendering order via setRenderOrder and for nodes you can work with setRenderBin.
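A minimal sketch of both ideas, assuming a typical osgViewer setup (the names are illustrative):

    // Sketch: a camera that only clears the depth buffer, drawn after the
    // main camera, plus render-bin ordering for layer nodes.
    #include <osg/Camera>
    #include <osg/Group>
    #include <osg/StateSet>

    osg::ref_ptr<osg::Camera> makeDepthClearCamera()
    {
        osg::ref_ptr<osg::Camera> cam = new osg::Camera;
        cam->setClearMask(GL_DEPTH_BUFFER_BIT);        // clear depth only, keep color
        cam->setRenderOrder(osg::Camera::POST_RENDER); // run after the main camera
        // Draw into the current framebuffer rather than an FBO of its own.
        cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
        return cam;
    }

    // Alternatively, order layers within a single camera via render bins:
    // higher bin numbers are drawn later.
    void setLayerOrder(osg::Node* layerNode, int order)
    {
        layerNode->getOrCreateStateSet()->setRenderBinDetails(order, "RenderBin");
    }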
If you use the multiple-camera solution, though, you end up with multipass rendering, which may get costly, since in your case you would probably draw each layer with its own camera.
As a side note, what you want to avoid is called z-fighting, and there are several techniques for doing so; that keyword may help you find an answer.

How to capture a still image from a live video programmatically

When I try to take a screenshot of my desktop, I find that the area of the Windows Media Player window is empty, with nothing in it. I googled for a while and found that most video players use overlay surfaces for performance, and overlay surfaces cannot be captured. Some suggestions say to disable DirectDraw acceleration so that you can grab a still image from a live video, but by the time the player is launched it is already using hardware acceleration; even if I disable it, the change does not take effect until I relaunch the player. My questions are: how can I capture an image from a live video without disabling DirectDraw acceleration? Or how can I make the setting (disabling hardware acceleration) take effect without relaunching the video player?
I won't play the video with my program; I just want to take a still image while it is played by a third-party player such as Windows Media Player or RealPlayer.
I want to do this programmatically, say with C/C++ and DirectX, so I don't want to use any existing software or tools.
No matter which player is in use, my program should capture it. I know some tools can do this, like CapTrue and Tencent QQ, so I think it is possible.
A workaround can be to use VLC to play your file; it provides a snapshot option directly.
AFAIK, this is an intentional "feature" in WMP, for content protection. If you need to use WMP, then you need a decent screen grabber. Unfortunately, the ones I know of, like HyperSnap, are not free.
If you only want a screen grab of a single frame, VLC is your friend, as @zdd said.
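For a programmatic route on modern systems: since Windows 8, the DXGI Desktop Duplication API captures the composited desktop, which generally includes video content that the older GDI/DirectDraw capture paths miss. A rough sketch (COM error handling and resource cleanup are stripped down; this is an illustration, not a drop-in solution):

    // Sketch: grab one frame of the composited desktop via DXGI duplication.
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #pragma comment(lib, "d3d11.lib")

    bool CaptureOneFrame()
    {
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, nullptr, &context)))
            return false;

        // Walk from the device to the first output (monitor) of its adapter.
        IDXGIDevice* dxgiDevice = nullptr;
        device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
        IDXGIAdapter* adapter = nullptr;
        dxgiDevice->GetAdapter(&adapter);
        IDXGIOutput* output = nullptr;
        adapter->EnumOutputs(0, &output);
        IDXGIOutput1* output1 = nullptr;
        output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

        // Start duplicating the desktop and acquire a single frame.
        IDXGIOutputDuplication* dup = nullptr;
        if (FAILED(output1->DuplicateOutput(device, &dup)))
            return false;

        DXGI_OUTDUPL_FRAME_INFO info;
        IDXGIResource* resource = nullptr;
        if (FAILED(dup->AcquireNextFrame(500 /* ms timeout */, &info, &resource)))
            return false;

        ID3D11Texture2D* frame = nullptr;
        resource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&frame);
        // At this point `frame` holds the desktop image; copy it to a staging
        // texture (D3D11_USAGE_STAGING + CPU_ACCESS_READ) and Map() it to read
        // the pixels, then release the frame.
        dup->ReleaseFrame();
        return true;
    }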

When playing tours authored in KML, is it possible to dynamically control the camera?

Using the Google Earth plugin API, I want to play a tour authored in KML with the touring capability, but let the user modify the camera controls during playback.
Is it possible?
It depends on how much modification you want to allow.
Tour playback is designed to work with the user changing the orientation of the view (via dragging or the camera controls), but not the position. If the user stops changing the view for long enough, the camera will smoothly snap back to the default orientation for that point in the tour. The zoom and panning controls disappear during the tour, but if the user tries to change the camera position via other methods (like the keyboard), the tour will typically be paused.
The Earth API, however, allows you to absorb or change any of those event behaviors, since you can add a listener for mouse and keyboard events and prevent them from processing as usual or act on them in a completely different way.
If you haven't tried it, there's a tour example in the Google Code Playground where you can see what happens with different interactions based on the default event responses.
Finally, if you want really custom tour behavior -- like allowing certain kinds of movement of the camera away from the tour path even as the tour continues -- you will most likely need to write your own camera movement code. Getting the basics of this working isn't too difficult, but getting the right intuitive feel for that kind of interaction is difficult, and probably dataset-dependent. To get started, you can parse the KML directly, find the tour and the tour primitives it contains, and then use the regular camera controls you cited to move between those primitives, adding offsets for any user-supplied movements.
edit: the Earth API tour page cited in the question has an example of getting started with parsing the KML file by getting the plugin to do it for you. You can use this to implement the above suggestion by using the KML DOM walking code to find all the tour primitives (instead of halting as soon as a Tour element is found).
This isn't always the most efficient approach (plugin function calls have overhead, and meanwhile browsers have built-in XML parsing capabilities), but it may be the most straightforward way to start. For many tours, this approach would be perfectly sufficient.
It is possible, but pretty hard to implement and even harder to control well. I have been playing around with this for quite a while now. I have not had much success myself, but here are two examples by others who have made some progress.
First, the underlying principle they use is based on the TICK; a simple example of it is here:
http://earth-api-samples.googlecode.com/svn/trunk/examples/event-frameend.html
The two examples are:
http://maps.myosotissp.com/
and
http://racemyrace.com/race.php
Also, here is an example that worked up until recently. I am not sure why it has stopped working, but it appears you can still read the JS being used. It was made by the same person who created the racemyrace website:
http://www.thekmz.co.uk/GEPlugin/pathtour/v3/path_tour_v3.htm
If you happen to work something out, I would appreciate it if you created a simple example page and shared the link. It will probably take a while, so if you could look up my email via my profile and notify me, that would be even better.
Good Luck!
