Emscripten and empty square - Linux

I have a small problem that disturbs me after compiling with Emscripten an OpenGL / GLUT code.
I can compile with gcc and emcc.
I only get a warning about -nostdinc++ (this has never been a problem for me).
Code compiled with gcc works fine.
But the generated html page displays only a black square.
The code is generated, but nothing appears.
Do you have any idea why?

According to https://github.com/kripken/emscripten/wiki/OpenGL-support the support is stable for features that are directly available in WebGL and OpenGL-ES-2. Legacy, fixed-function pipeline OpenGL code is not yet fully supported.
I suggest you rewrite your program to follow modern OpenGL principles, i.e. don't use the built-in matrix stack; use generic vertex attributes, vertex buffer objects, and shaders (vertex and fragment), within the feature set provided by OpenGL-ES-2.
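To illustrate the "don't use built-in matrices" part: in OpenGL-ES-2 / WebGL you compute matrices on the CPU and upload them as uniforms. A sketch of a hand-rolled replacement for what glOrtho used to set up (the function name and the use of std::array are my own choices; the result is column-major, ready to pass to glUniformMatrix4fv):

```cpp
#include <array>

// Build a 4x4 column-major orthographic projection matrix, equivalent
// to the one the fixed-function glOrtho() pushed onto the matrix stack.
// Upload the result with glUniformMatrix4fv() and apply it in the
// vertex shader yourself.
std::array<float, 16> orthoMatrix(float l, float r, float b, float t,
                                  float n, float f)
{
    std::array<float, 16> m{}; // zero-initialized
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
    return m;
}
```

The same approach covers glTranslatef/glRotatef/gluPerspective: build the matrix on the CPU (or use a small math library) and hand it to the shader as a uniform.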

Related

Force existing OpenGL application to render offscreen on a headless machine

I want to create a framework for automated rendering tests for video games.
I want to test an application that normally renders to a window with OpenGL. Instead, I want it to render into image files for further evaluation. I want to do this on a Linux server with no GPU.
How can I do this with minimal impact on the evaluated application?
Some remarks for clarity:
The OpenGL version is 2.1, so software rendering with Mesa should be possible.
Preferably, I don't want to change any of the application code. If there is a solution that allows me to emulate an X server or something like that, I would prefer it.
I don't want to change any of the rendering code. If it is really necessary, I can change the way I initialize OpenGL, but after that, I want to execute arbitrary OpenGL code.
Ideally, your answer would explain how to set up an environment on a headless Linux server that allows me to start arbitrary OpenGL binaries and render their output into images. If that's not possible, I am open to any suggestions.
Use Xvfb for your X server. The Mesa installation deployed on any modern Linux distribution should automatically fall back to software rasterization if no supported GPU is found. You can take screenshots with any X11 screen-grabber program; even ffmpeg with -f x11grab will work.
fbdev/miniglx might be something that you are looking for. http://www.mesa3d.org/fbdev-dri.html I haven't used it so I have no idea if it works for your purpose or not.
An alternative is to just start an X server without any desktop environment using xinit. That setup uses well-tested code paths, making it better suited for running your tests. miniglx might have bugs which no one has noticed because it isn't used every day.
Capturing the rendering output to images can be done with the LD_PRELOAD trick to wrap glXSwapBuffers. The basic idea is to interpose your own swap-buffers function between your application and the GL library, where you can use glReadPixels to download the rendered frame and then use your favorite image library to write that data to image/video files. After glReadPixels has completed, you can call the real glXSwapBuffers to make the swap happen as it would on a real desktop.
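A sketch of such a shim, with the usual caveats: the real GLX and GL entry points are resolved at runtime with dlsym so this file needs no GL headers, the window size is taken from environment variables instead of being queried from GLX, frames are dumped as raw PPM (bottom-up, as glReadPixels returns them), and error handling is omitted:

```cpp
// swapshim.cpp -- LD_PRELOAD shim that dumps each frame before swapping.
// Build: g++ -shared -fPIC swapshim.cpp -o swapshim.so -ldl
// Run:   LD_PRELOAD=./swapshim.so WIDTH=640 HEIGHT=480 ./your_gl_app
#ifndef _GNU_SOURCE
#define _GNU_SOURCE // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Minimal local declarations so we don't need <GL/glx.h> here; in a
// real project you would include the GLX headers instead. The ABI
// matches: GLXDrawable is an unsigned long, Display* is a pointer.
typedef void (*ReadPixelsFn)(int, int, int, int,
                             unsigned, unsigned, void*);
typedef void (*SwapFn)(void*, unsigned long);

static const unsigned GL_RGB_ENUM           = 0x1907; // GL_RGB
static const unsigned GL_UNSIGNED_BYTE_ENUM = 0x1401; // GL_UNSIGNED_BYTE

extern "C" void glXSwapBuffers(void* dpy, unsigned long drawable)
{
    static SwapFn realSwap =
        (SwapFn)dlsym(RTLD_NEXT, "glXSwapBuffers");
    static ReadPixelsFn readPixels =
        (ReadPixelsFn)dlsym(RTLD_DEFAULT, "glReadPixels");
    static int frame = 0;

    // Sketch only: real code would query the drawable size from GLX.
    int w = atoi(getenv("WIDTH")  ? getenv("WIDTH")  : "640");
    int h = atoi(getenv("HEIGHT") ? getenv("HEIGHT") : "480");

    std::vector<unsigned char> pixels(3u * w * h);
    readPixels(0, 0, w, h, GL_RGB_ENUM, GL_UNSIGNED_BYTE_ENUM,
               pixels.data());

    char name[64];
    snprintf(name, sizeof(name), "frame%05d.ppm", frame++);
    if (FILE* f = fopen(name, "wb")) {
        fprintf(f, "P6\n%d %d\n255\n", w, h); // rows are bottom-up
        fwrite(pixels.data(), 1, pixels.size(), f);
        fclose(f);
    }

    realSwap(dpy, drawable); // let the real swap happen
}
```

Because the real glXSwapBuffers is fetched with RTLD_NEXT, the shim still forwards to libGL after dumping, so the application renders normally.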
The prog subdirectory has been removed from the main git repository; you can find it in git://anongit.freedesktop.org/git/mesa/demos instead.

What text rendering library Chrome uses for Linux?

Are they using freetype or xfonts or cairo or something else? Or maybe a library of their own? I am thinking of using the same library in my program. I assume that whatever Google uses will be well maintained for a long time.
The accepted answer is not correct. Under the hood, Skia uses Uniscribe on Windows, HarfBuzz on Linux + ChromeOS, and CoreText on the Mac. (Actually, it's possible that at this point much of the Uniscribe and CoreText code has also been replaced with HarfBuzz for consistency.) Skia is used only to draw glyphs after the shaping/layout code has done its job.
https://skia.org/docs/user/tips/ : "Skia does not shape text. Skia provides interfaces to draw glyphs, but does not implement a text shaper. Skia’s client’s often use HarfBuzz to generate the glyphs and their positions, including kerning."
Chrome uses Skia for nearly all graphics operations, including text rendering. GDI is for the most part only used for native theme rendering; new code should use Skia.
http://www.chromium.org/developers/design-documents/graphics-and-skia
Skia is a complete 2D graphic library for drawing Text, Geometries, and Images.
Skia Project Page:
http://code.google.com/p/skia/

What is the minimal set of essential libraries for Face Detection in OpenCV

While trying to use OpenCV for face detection on Windows, I need to pull in almost all the libraries (2d, 3d, ml, gui etc.). Otherwise my program wouldn't run. I am not really sure why I need any GUI for something as algorithmic as object detection. What is the minimal set of libraries required and is there a special way to build OpenCV such that there aren't that many dependencies?
You need opencv_core to get base objects like cv::Mat, opencv_imgproc to use thresholds, histograms and other image pre-processing, and opencv_highgui for reading, writing and displaying images and for using video streams from cameras and video files. That's all I can tell you without knowing how you run OpenCV on Windows and which version of OpenCV you are using. As far as I know, there is no way of building only some parts of OpenCV.
Generally, in my experience, you only need to add the libraries associated with the headers you are actually using. So, if you have problems tracking them, avoid the catch-all #include "opencv2/opencv.hpp" and take the slightly harder route of #include "opencv2/core/core.hpp" etc.
Yes, you can build OpenCV without certain library features. OpenCV uses CMake, which requires a little learning if you don't know it already, but you can uncheck OpenCV components you don't need in the CMake build configuration.
You can get away without using highgui in your app if you can read images with some other library (though I am not sure whether you can build OpenCV without it).
Also, you will need to #include "opencv2/objdetect/objdetect.hpp" for Haar cascade classifier support (as of OpenCV 2.3.1).
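If you do rebuild OpenCV with CMake, the per-module switches look roughly like this (the BUILD_opencv_<module> flags follow the convention used by OpenCV 2.4's CMake scripts; exact module names vary between versions, so treat this as a sketch, not a verified configuration):

```shell
# Configure OpenCV to build only the modules face detection needs;
# check cmake-gui for the exact switch list in your version.
cmake -D BUILD_opencv_core=ON \
      -D BUILD_opencv_imgproc=ON \
      -D BUILD_opencv_objdetect=ON \
      -D BUILD_opencv_highgui=OFF \
      -D BUILD_opencv_ml=OFF \
      path/to/opencv-source
```

Your program then links only against the libraries for the modules you kept.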

Exporting mtl files from 3ds max problem

I'm having a small problem with exporting mtl files in 3ds Max. I would like to use an obj plus its material library in an OpenGL program. The model gets exported just fine, but I lose all the reflective/refractive parameters of my materials when I export them (colors and such seem to be fine, so it finds the materials, just not completely; all materials get changed to standard). I tried exporting materials from the scene and from the material library as well, with the same results. Could anyone tell me how to keep the reflective parameters of materials after exporting?
As you stated in the email, you use GLUT as the API; there are several tutorials for this.
I initially thought you used an engine and just needed some values, but you need more than that.
To use reflection/environment mapping in OpenGL you need a shader that supports it.
So what you need to do is implement a reflection shader and pass the bitmap into your shader.
In the sample file it worked fine for the rendered image (because 3ds Max supports its own raytrace materials), but this does not get exported in any way.
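For reference, the core of such a reflection shader is only a few lines of GLSL. A minimal cube-map reflection fragment shader might look like this (the uniform and varying names are my own, and the matching vertex shader must supply the eye-space normal and view direction):

```glsl
// Fragment shader: reflect the view vector about the surface normal
// and look the result up in an environment cube map.
uniform samplerCube envMap;
varying vec3 vNormal;   // eye-space normal from the vertex shader
varying vec3 vEyeDir;   // eye-space direction from camera to fragment

void main()
{
    vec3 n = normalize(vNormal);
    vec3 r = reflect(normalize(vEyeDir), n);
    gl_FragColor = textureCube(envMap, r);
}
```

The environment map itself (the "bitmap" mentioned above) is a cube-map texture you load and bind yourself; it is not something the obj/mtl export carries over from 3ds Max.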

If I build and link an OpenGL application using only OpenGL ES 1.x calls, will it still work?

I am writing an OpenGL game which will hopefully be for both Linux and iPhone OS. I basically want to be able to build using the OpenGL ES 1.5 headers and run it on my Linux desktop. Can I do this? I.e., I want to only use the subset of API calls common to OpenGL and OpenGL-ES.
Doing the above and linking with normal libGL.a from my system gets me my screen but I seem to be able to do nothing but change the scene background colour.
I've done exactly that, and it worked well for me.
There are a bunch of OpenGL|ES extensions that aren't available in standard OpenGL but are very nice to have on a low-spec platform. glDrawTexImage is such an extension. Emulating these extensions with a handful of desktop OpenGL calls is not a big deal though.
Also, OpenGL|ES supports the fixed-point data format for most entry points; take glClearColorx for example. These aren't available in desktop OpenGL, so you have to write a wrapper if you want to use them. It's a bit more work if you also store your vertex data in this format.
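Since GLfixed is a 16.16 signed fixed-point value (65536 represents 1.0), such a wrapper is mostly a conversion. A sketch (in a real build you would include <GL/gl.h> and have the commented-out wrapper forward to the real glClearColor):

```cpp
#include <cstdint>

// OpenGL|ES 16.16 fixed-point type, as taken by glClearColorx etc.
typedef int32_t GLfixed;

// Convert a 16.16 fixed-point value to float (65536 == 1.0).
inline float fixedToFloat(GLfixed x)
{
    return static_cast<float>(x) / 65536.0f;
}

// Desktop GL has no glClearColorx, so an ES app can be given one
// like this (uncomment in a real build with GL headers linked in):
//
// void glClearColorx(GLfixed r, GLfixed g, GLfixed b, GLfixed a)
// {
//     glClearColor(fixedToFloat(r), fixedToFloat(g),
//                  fixedToFloat(b), fixedToFloat(a));
// }
```

The same conversion applies to the other -x entry points (glTranslatex, glFrustumx, ...); vertex data stored as GLfixed needs either a conversion pass or GL_FIXED attribute support, which desktop GL lacks.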
Oh, and note that OpenGL|ES does not come with the GLU library. You can use it on the desktop, but if you do you'll have to reimplement those functions later (see the 100 questions about gluLookAt and gluUnProject).
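Reimplementing gluLookAt, for instance, is just two cross products and a translation. A sketch that builds the same column-major matrix gluLookAt would have multiplied onto the modelview stack (the function name is my own):

```cpp
#include <array>
#include <cmath>

// Build the column-major view matrix that gluLookAt() would have
// multiplied onto the modelview stack.
std::array<float, 16> lookAt(float ex, float ey, float ez,   // eye
                             float cx, float cy, float cz,   // center
                             float ux, float uy, float uz)   // up
{
    // forward = normalize(center - eye)
    float fx = cx - ex, fy = cy - ey, fz = cz - ez;
    float fl = std::sqrt(fx*fx + fy*fy + fz*fz);
    fx /= fl; fy /= fl; fz /= fl;

    // side = normalize(forward x up)
    float sx = fy*uz - fz*uy, sy = fz*ux - fx*uz, sz = fx*uy - fy*ux;
    float sl = std::sqrt(sx*sx + sy*sy + sz*sz);
    sx /= sl; sy /= sl; sz /= sl;

    // recomputed up = side x forward
    float tx = sy*fz - sz*fy, ty = sz*fx - sx*fz, tz = sx*fy - sy*fx;

    std::array<float, 16> m{};
    m[0] = sx;  m[4] = sy;  m[8]  = sz;
    m[1] = tx;  m[5] = ty;  m[9]  = tz;
    m[2] = -fx; m[6] = -fy; m[10] = -fz;
    m[15] = 1.0f;
    // fold in the translation by -eye
    m[12] = -(sx*ex + sy*ey + sz*ez);
    m[13] = -(tx*ex + ty*ey + tz*ez);
    m[14] =  (fx*ex + fy*ey + fz*ez);
    return m;
}
```

With the eye at the origin looking down -Z with +Y up, this yields the identity matrix, which is a handy sanity check for any reimplementation.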
There is no such thing as OpenGL ES 1.5. Did you mean 1.1?
Also, how do you get a window? This is platform-specific.
In any case, you should still compile against the header that corresponds to the lib you will link against. You don't know for sure what the header sets up (e.g. on Windows, which you don't care about, but still: calling conventions are specified in there).
There are also some calls that don't map well between the two, e.g. APIs that use only doubles in GL are float in GLES (from the ES spec):
The double-precision-only commands DepthRange, Frustum, and Ortho are replaced with single-precision or fixed-point variants.
So in short, there is a bit more work than just using the same code, although the work in question is still minimal if you stick to the GL ES subset.
