I have a web-based AR app using A-Frame. Is there a way I can get camera data (intrinsics such as focal length and principal point, plus pose, frames, etc.) from the web?
This question was asked over 7 years ago here, so I was wondering if there were any updates:
Generic web camera calibration
I've explored getUserMedia(), but that only provides video streams (tracks) and their related properties (facingMode, frameRate, height, width). This is not what I need.
DeviceOrientationEvent (https://developers.google.com/web/fundamentals/native-hardware/device-orientation) uses the phone's accelerometer, compass, and gyroscope, but not the camera.
No, it’s not possible. There are no Web APIs that give you that info.
Related
I'm trying to preview the video stream from a 4K camera (Brio) in my application. The application uses DirectShow to open the camera and receive frames. The filter configuration is shown in the image below.
The problem is high resolutions (e.g. 4096x2160). At 4096x2160, both GraphEdit and my application show a delay when previewing the video stream.
I'm testing this on Windows 10. Note that the preinstalled Windows 10 Camera application works perfectly at this resolution. I've also tried the same thing with a UWP sample using the MediaCapture API, but the problem is the same.
What am I missing?
The preinstalled Windows 10 Camera application does not use DirectShow; it uses a completely different code path based on the Media Foundation API and is overall more efficient, at JPEG decompression in particular. That is, you cannot directly compare your DirectShow-based graph to what the Windows Store Camera app is doing.
In your situation, the MJPEG Decompressor filter is an outdated piece of software that cannot keep up with this resolution and is the bottleneck. Also, for live video a DirectShow graph needs a Smart Tee filter.
Performance-wise, I would recommend building the media pipeline on Media Foundation, even though it is more difficult and comes with less documentation and fewer samples.
An article on Hackaday piqued my curiosity, and I see Kinect + Linux questions being asked here (mostly about configuration), so I'll venture this question:
It is clear to me that the Kinect can be used together with Linux on a "regular PC" -- but I can't help wondering why; that is, what might you actually use this for?
I don't suppose people really like the human/computer interface presented in movies such as "Minority Report" -- surely, nobody is actually doing text editing, coding, or business data processing by "hand-waving". So besides just games & exercises, what are examples of actual, real-world, useful (i.e. 'professional') applications of such a setup?
For instance, can it be used for 3D scanning of real-world objects to obtain digital models? What sort of accuracy would such a scan yield?
The Kinect can be used for a wide variety of useful applications. I'm not sure if you are asking specifically about Linux or if Windows ("regular PC") is acceptable, but I'll provide you with some examples that come to mind.
For Linux specifically, it is likely that applications are using the sensor's raw data only, rather than the skeletal tracking feature. Many Kinect applications are on Windows because Microsoft's Kinect SDK is available only on Windows, and it provides the best skeletal tracking accuracy to date.
You are right that the Kinect is rarely used where a keyboard & mouse would be faster and more accurate, but note that it is potentially relevant for accessibility.
And yes, it can be used for 3D scanning of real-world objects. I'm not sure about the exact accuracy, but I think it is acceptable for many applications. The main benefits are its low cost and speed.
For examples of 3D scanning, check out:
KinectFusion, a Microsoft Research project
Occipital Structure sensor for 3D scanning. (This is not the Kinect sensor, but provides an example application for 3D scanning. The company has a Kinect-related history as well.)
Styku - 3D body scanning for clothes fitting
Aside from 3D scanning, here are some other examples of applications:
Atlas5D - at-home patient monitoring
GestSure - 'Minority Report' interface for surgical rooms
Jintronix - games, exercises, assessments for physical therapy
There are many depth sensors like the Kinect on the market. The latest notable application would be the iPhone X's depth sensor and Face ID. Many companies in the space are now working actively on face recognition, which would also be useful on Linux. Check out Microsoft's Windows Hello biometric facial ID system - see Microsoft's official website:
"Manufacturing of the Kinect sensor and adapter has been discontinued, but the Kinect technology continues to live on in products like the HoloLens, Cortana voice assistant, the Windows Hello biometric facial ID system, and a context-aware user interface."
The Kinect has applications in the robotics community as well, though I don't know the specifics. I assume many in the robotics community use Linux when working with the Kinect. The depth and color cameras can be used to provide vision, and the microphone array for audio input.
Generally, the Kinect had a big impact when it was released not just because of its technology but also because of its low price point, even if it's not the most accurate for every application. As this technology improves, I hope many other applications will emerge and become mainstream.
EDIT: also, check out this Hacker News discussion: "Microsoft Has Stopped Manufacturing The Kinect"
According to this picture:
the iPhone X has an infrared camera. It is primarily used for face detection but there are other uses for infrared. Can it be accessed directly?
Still not sure about the infrared camera, but to read depth information from the iPhone X TrueDepth camera, you can use the AVDepthData class and related APIs. Here's a tutorial.
Not via the approved iOS API calls. It may be possible through undocumented APIs if you can figure out how to do it.
The company reallusion.com makes a 3D animation product called iClone 7 that interfaces with the iPhone's infrared camera. Try contacting them about how they do it; if it's not proprietary, they might at least give you a clue.
I recently downloaded a barcode-reading application for my phone, an LG KU990i (AKA the Viewty). However, there's a problem that renders the application nearly useless: the Viewty has two cameras -- the main one, and a secondary camera located on the face of the unit -- and it is the secondary camera that is unfortunately set as the phone's default video capture device. As you can't point the secondary camera at something and see what it's pointing at at the same time, it makes it a bit difficult to snap a barcode!
According to the JSR-135 spec, it is possible to specify a video capture device other than the default... if you know the device name. This does not appear to be documented anywhere on LG's web site, nor does the JSR-135 spec describe any way of enumerating the devices on a phone... or is there one? Failing that, are there any naming conventions for video devices commonly in use that LG might be following?
I've logged a ticket with LG, but as it's an old device, I don't imagine them breaking their backs in getting back to me... I should also point out that this is purely for my own curiosity so no-one here should feel obliged to break their backs either!
As far as I know there is no way to get a list of all available capture:// URLs.
These are all the URLs I know of (the sketch below probes each one in turn):
capture://image
capture://video
capture://devcam0
capture://devcam1
Source:
http://www.forum.nokia.com/info/sw.nokia.com/id/bc00e4ce-7df3-4527-962c-d39843a808d0/MIDP_Mobile_Media_API_Support_In_Nokia_Devices_v1_0_en.pdf.html
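JSR-135 itself has no enumeration call, but you can probe the locators above at runtime and see which ones a given handset accepts. A minimal sketch, assuming MMAPI is present (the class name is mine, and vendor-specific locators such as devcam0/devcam1 may still map to the "wrong" camera, as on the Viewty):

```java
import java.io.IOException;

import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

// Try each known capture locator and return a Player for the first one
// that this handset actually supports.
public class CaptureLocatorProbe {

    private static final String[] LOCATORS = {
        "capture://image",
        "capture://video",
        "capture://devcam0",
        "capture://devcam1"
    };

    public static Player openFirstSupported() {
        for (int i = 0; i < LOCATORS.length; i++) {
            try {
                Player p = Manager.createPlayer(LOCATORS[i]);
                p.realize();      // fails here if the locator is not usable
                return p;
            } catch (MediaException e) {
                // locator not supported on this handset, try the next one
            } catch (IOException e) {
                // device busy or unavailable, try the next one
            }
        }
        return null;              // no capture locator worked
    }
}
```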
LG responded to my support ticket. Apparently, it's not possible to access the primary camera on the Viewty from Java, making it pretty much useless for barcode scanning. The answer is reproduced here for the benefit of search engines.
Your support ticket has been answered. Please visit the LG Mobile Developer Network and log in to check the answer at [My Page > My Tickets].
KU990i default video capture device is the secondary camera
Answer:
Hi,
The KU990i has two camera modules, which are handled differently. The main camera uses a Joran chipset and the sub (front) camera uses a Qualcomm chipset. The Joran chip does not support JSR-135, so we could not support JSR-135 for the main camera (it is a H/W limitation). The operator was informed of this already, and we remember the operator confirmed it. Therefore, we only support the sub camera for JSR-135.
BR,
I am working on a project where I need to catch the image capture event.
It's for the Nokia N73, running S60 3rd Edition.
Is there any possible way to do this using J2ME only (without using Symbian)?
Description:
A J2ME application is running in the background; when an image is captured with the camera, the J2ME application starts and comes to the front, takes the captured image, transfers it to the J2ME app, and displays it on screen.
If it's not possible using J2ME, is there any way using Symbian? Can anyone provide a tutorial or code snippet?
Thank you.
Regards,
Rajiv
It's not possible to hook into the native camera application from J2ME. You'd need to get the user to start your app first, then access the camera from your app (using JSR 135 -- spec here, introduction and examples here). Then you can use the captured image however you wish; see the sketch below.
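A rough sketch of that flow, assuming the generic "capture://video" locator works on the handset (the MIDlet and form layout here are only illustrative; in a real app you would trigger getSnapshot() from a Command or key handler rather than right after start()):

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Item;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;
import javax.microedition.midlet.MIDlet;

// JSR-135 capture inside your own MIDlet: show a viewfinder, then grab a snapshot.
public class SnapshotMIDlet extends MIDlet {

    private Player player;

    protected void startApp() {
        try {
            player = Manager.createPlayer("capture://video");
            player.realize();

            VideoControl vc = (VideoControl) player.getControl("VideoControl");
            Item viewfinder =
                (Item) vc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);

            Form form = new Form("Camera");
            form.append(viewfinder);
            Display.getDisplay(this).setCurrent(form);

            player.start();

            // Encoding support varies per device; check
            // System.getProperty("video.snapshot.encodings") first.
            byte[] jpeg = vc.getSnapshot("encoding=jpeg");
            // ... use the captured image however you wish ...
        } catch (Exception e) {
            notifyDestroyed();
        }
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) {
        if (player != null) {
            player.close();
        }
    }
}
```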
HTH
The N73 in particular has a fairly large hardware limitation when you want to use the camera.
You need to have the user manually open the camera cover before you can use the camera.
This launches the native camera application included in S60.
The user then needs to close that application.
From that point on, J2ME can use the camera, via the mobile media API defined in JSR-135.
If the user reboots the phone, the camera cover needs to be re-opened before J2ME can use the camera again.
You may have better luck using J2ME and JSR-135 to capture images using the front camera on the N73.
I seriously doubt that J2ME would see the user pressing the camera key in javax.microedition.lcdui.Canvas.keyPressed(), and JSR-135 doesn't provide a system-wide camera capture event for J2ME either.
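If you want to verify that on an actual handset, a throwaway probe like this (the class name is mine) will show which key codes your MIDlet actually receives; as noted above, the camera key is unlikely to appear at all:

```java
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;

// Paints the last key code delivered to the MIDlet's Canvas.
public class KeyProbeCanvas extends Canvas {

    private int lastKeyCode;

    protected void keyPressed(int keyCode) {
        lastKeyCode = keyCode;
        repaint();
    }

    protected void paint(Graphics g) {
        g.setColor(0xFFFFFF);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(0x000000);
        g.drawString("last keyCode: " + lastKeyCode, 2, 2,
                     Graphics.TOP | Graphics.LEFT);
    }
}
```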