How can I access the frames captured by Camera from the video buffer in Android 4.1 - android-ndk

I am wondering if I can do that within Android. The version I am using is Android 4.1.

In general, I think you can use JNI, and if you know which classes to use, you can get the buffer you are asking for.
Try chewing on these two links: here and here.
Note that before ICS, the stack was very different from what is presented in these two links, so what you come up with for ICS won't work on 2.3.3 or on 3.x.x.
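As a rough illustration of the JNI route (not taken from the linked articles): one common approach on Android 4.1 is to register a Camera.PreviewCallback in Java and hand each preview byte[] (NV21 by default) to a native function. The package, class, and method names below are hypothetical; this is only a sketch.

    #include <jni.h>

    // Hypothetical native handler; the Java side would call it from
    // Camera.PreviewCallback.onPreviewFrame(byte[] data, Camera camera).
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_camera_NativeBridge_onPreviewFrame(JNIEnv* env, jclass,
                                                        jbyteArray frame,
                                                        jint width, jint height)
    {
        jsize len = env->GetArrayLength(frame);
        jbyte* data = env->GetByteArrayElements(frame, nullptr);

        // Process the NV21 frame here: width * height luma bytes followed by
        // interleaved VU chroma (len is roughly width * height * 3 / 2).

        env->ReleaseByteArrayElements(frame, data, JNI_ABORT); // no copy-back
    }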

Related

What is the purpose of /drawable-v14 or /drawable-v11?

I've seen that some of Google's projects and other open source projects have resource directories like /drawable-v14 or /drawable-hdpi-v11.
Now, I understand what this means: all devices with an SDK level greater than or equal to 11/14 will use these images.
But what is the purpose of this? Why and when should I use them? Why should devices with hdpi resolution and SDK 11 ever use different images than hdpi devices with SDK 10?
I just cannot see when I would ever use one image for SDK 10 and another for SDK 17, for example. It makes no sense to me.
As a side note, the usage of resources like /values-v{11/14/17} is logical and has a practical benefit.
This can be used to style your icons to match the UI guidelines of the given Android version.
Android's GUI style has evolved a lot since its beginning. In Cupcake, icons had to show a 3D effect with a shadow. With ICS, icons became flatter. And it will keep on changing with Android 5 and beyond... (Let's watch Google I/O 2014 to learn more about it... by the way, it's today!)
So basically you can stick to the GUI guidelines across different Android versions. It's probably not the only use case, but it is one of them.
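As a concrete (hypothetical) example of how the qualifiers resolve, the same icon could be shipped in several versioned drawable folders, and the system picks the best match for the device's SDK level:

    res/drawable-hdpi/ic_action_share.png        used on SDK 10 and below
    res/drawable-hdpi-v11/ic_action_share.png    Holo-styled icon, SDK 11-13
    res/drawable-hdpi-v14/ic_action_share.png    ICS-styled icon, SDK 14 and up

Here ic_action_share.png is just a placeholder name; all three are hdpi images, differing only in the visual style expected on each platform version.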

Replace Android with real Linux in smartphones?

I have always wondered: if Android is a Linux-derived OS, why can't I just compile Gentoo (for example) for the correct architecture, with small modifications to the boot process, to fully replace Android?
I know that I may be missing some functionality, but given that the kernel sources are already available, along with existing AOSP ROMs and even UbuntuOS, I think it should be possible.
The only problem is that I don't know anything about the boot process, so even if I could manage to build an image, I wouldn't know how to boot it.
Any hints, information, or advice about this area?
P.S. In this case it is a Samsung Galaxy S3 (I-9300), but it could be applicable to other brands as well.

Direct3D11 GL_RGBA4 equivalent?

Is there an equivalent of the GL_RGBA4 texture format for D3D11? I can't seem to find it.
There are the DXGI_FORMAT_B5G6R5_UNORM and DXGI_FORMAT_B5G5R5A1_UNORM 16-bit formats, but not the 4444 one.
Even D3D9 has all of them, so I don't understand why D3D11 would not...
Never mind, I was using the old D3D11 SDK; it is called DXGI_FORMAT_B4G4R4A4_UNORM in the Windows 8 SDK.
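As a quick sketch (not part of the original answer) of using that format with the Windows 8 SDK headers: the device and pixel pointer are assumed to exist, and real code should first check driver support with ID3D11Device::CheckFormatSupport, since DXGI_FORMAT_B4G4R4A4_UNORM support is optional. Note that the component order differs from GL's packed RGBA4, so the pixel data may need repacking.

    #include <d3d11.h>
    #include <cstdint>

    // Create an immutable 16-bit 4444 texture from pre-packed pixel data.
    ID3D11Texture2D* CreateRgba4Texture(ID3D11Device* device,
                                        const uint16_t* pixels,
                                        UINT width, UINT height)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_B4G4R4A4_UNORM;   // the GL_RGBA4 counterpart
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_IMMUTABLE;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = pixels;
        init.SysMemPitch = width * sizeof(uint16_t); // 16 bits per texel

        ID3D11Texture2D* tex = nullptr;
        if (FAILED(device->CreateTexture2D(&desc, &init, &tex)))
            return nullptr;
        return tex;
    }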

Does Android support SVG or Tiny SVG?

Does Android support SVG or Tiny SVG? I have a doubt: if I want to publish my application to the Android Market, which covers Android devices of different sizes, then I need to create the same images in different densities and sizes and put them in the different folders specified in the Android developer guides. I want to avoid this because it unnecessarily increases the APK file size. Rather than using this approach, can we create a vector graphics file that stores all of the image-related information and add it to the APK?
But I am not able to find out whether the vector graphics approach will work on Android, and if it does, how to use it.
Please provide me with some information about it.
Regards,
Piks
I found TinyLine. I have not tried it, but it seems to be sophisticated.

OpenGL without X.org in Linux

I'd like to open an OpenGL context without X in Linux. Is there any way at all to do it?
I know it's possible for integrated Intel graphics card hardware, though most people have Nvidia cards in their system. I'd like to get a solution that works with Nvidia cards.
If there's no other way than through integrated Intel hardware, I guess it'd be okay to know how it's done with those.
The X11 protocol itself is too large and complex, and the mouse/keyboard/tablet input multiplexing it provides is too watered-down for modern programs. I think it's the worst roadblock preventing the Linux desktop from improving, which is why I am looking for alternatives.
Update (Sep. 17, 2017):
NVIDIA recently published an article detailing how to use OpenGL on headless systems, which is a very similar use case to the one the question describes.
In summary:
Link against libOpenGL.so and libEGL.so instead of libGL.so (your linker options should therefore be -lOpenGL -lEGL).
Call eglGetDisplay, then eglInitialize to initialize EGL.
Call eglChooseConfig with the config attribute EGL_SURFACE_TYPE set to EGL_PBUFFER_BIT.
Call eglCreatePbufferSurface, then eglBindAPI(EGL_OPENGL_API), then eglCreateContext and eglMakeCurrent.
From that point on, do your OpenGL rendering as usual, and you can blit your pixel buffer surface wherever you like. This supplementary article from NVIDIA includes a basic example and an example for multiple GPUs. The PBuffer surface can also be replaced with a window surface or pixmap surface, according to the application needs.
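A minimal sketch of those steps (assuming a driver that ships the GLVND libOpenGL.so/libEGL.so libraries; error handling is mostly omitted, and the 640x480 pbuffer size is an arbitrary choice):

    #include <EGL/egl.h>
    #include <GL/gl.h>
    #include <cstdio>

    int main() {
        // Initialize EGL on the default display (no X server required).
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major, minor;
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
            std::fprintf(stderr, "failed to initialize EGL\n");
            return 1;
        }

        // Ask for a config that supports pbuffer surfaces and desktop OpenGL.
        const EGLint configAttribs[] = {
            EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig config;
        EGLint numConfigs = 0;
        eglChooseConfig(dpy, configAttribs, &config, 1, &numConfigs);

        // Create an off-screen pbuffer surface to render into.
        const EGLint pbufferAttribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
        EGLSurface surface = eglCreatePbufferSurface(dpy, config, pbufferAttribs);

        // Bind the OpenGL API, then create and activate a context.
        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, surface, surface, ctx);

        // From here on, plain OpenGL calls work without any display server.
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglTerminate(dpy);
        return 0;
    }

Build it with the linker options from the first step, e.g. g++ headless.cpp -lOpenGL -lEGL (the file name here is just an example).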
I regret not doing more research on this on my previous edit, but oh well. Better answers are better answers.
Since my answer in 2010, there have been a number of major shakeups in the Linux graphics space. So, an updated answer:
Today, nouveau and the other DRI drivers have matured to the point where OpenGL software is stable and performs reasonably well in general. With the introduction of the EGL API in Mesa, it's now possible to write OpenGL and OpenGL ES applications on even Linux desktops.
You can write your application to target EGL, and it can be run without the presence of a window manager or even a compositor. To do so, you would call eglGetDisplay, eglInitialize, and ultimately eglCreateContext and eglMakeCurrent, instead of the usual glx calls to do the same.
I do not know the specific code path for working without a display server, but EGL accepts both X11 displays and Wayland displays, and I do know it is possible for EGL to operate without one. You can create GL ES 1.1, ES 2.0, ES 3.0 (if you have Mesa 9.1 or later), and OpenGL 3.1 (Mesa 9.0 or later) contexts. Mesa has not (as of Sep. 2013) yet implemented OpenGL 3.2 Core.
Notably, on the Raspberry Pi and on Android, EGL and GL ES 2.0 (1.1 on Android < 3.0) are supported by default. On the Raspberry Pi, I don't think Wayland yet works (as of Sep. 2013), but you do get EGL without a display server using the included binary drivers. Your EGL code should also be portable (with minimal modification) to iOS, if that interests you.
Below is the outdated, previously accepted post:
I'd like to open an OpenGL context without X in linux. Is there any way at all to do it?
I believe Mesa provides a framebuffer target. If it provides any hardware acceleration at all, it will only be with hardware for which there are open source drivers that have been adapted to support such a use.
Gallium3D is also immature, and support for this isn't even on the roadmap, as far as I know.
I'd like to get a solution that works with nvidia cards.
There isn't one. Period.
NVIDIA only provides an X driver, and the Nouveau project is still immature and doesn't support the kind of use you're looking for, as it is currently focused only on the X11 driver.
You might be interested in a project called Wayland
http://en.wikipedia.org/wiki/Wayland_%28display_server%29
Have you looked at this page?
http://virtuousgeek.org/blog/index.php/jbarnes/2011/10/31/writing_stanalone_programs_with_egl_and_
It is likely a bit outdated. I haven't tried yet, but I would appreciate more documentation of this type.
Probably a good idea, as of today, is to follow the compositor-drm.c implementation in Weston (the Wayland reference compositor):
http://cgit.freedesktop.org/wayland/weston/tree/src/compositor-drm.c
https://gitlab.freedesktop.org/mesa/kmscube/ is a good reference implementation of OpenGL (or OpenGL ES) hardware-accelerated rendering without an X11 or Wayland dependency.
You can look at how Android has solved these issues. See the Android-x86 project.
Android uses Mesa with EGL and OpenGL ES. Android has its own simple gralloc component for mode setting and graphics allocations. On top of that there is the SurfaceFlinger component, a composition engine that uses OpenGL ES for acceleration.
I cannot see why you couldn't use these components in a similar way and even reuse the Android glue code.
