Linux framebuffers with ARGB32. Alpha? How does a framebuffer support alpha?

After looking at the source for Qt, it seems that Qt, and Linux framebuffers in general, support alpha transparency:
static QImage::Format determineFormat(const fb_var_screeninfo &info, int depth)
{
    const fb_bitfield rgba[4] = { info.red, info.green,
                                  info.blue, info.transp };

    QImage::Format format = QImage::Format_Invalid;

    switch (depth) {
    case 32: {
        const fb_bitfield argb8888[4] = {{16, 8, 0}, {8, 8, 0},
                                         {0, 8, 0}, {24, 8, 0}};
        const fb_bitfield abgr8888[4] = {{0, 8, 0}, {8, 8, 0},
                                         {16, 8, 0}, {24, 8, 0}};
        if (memcmp(rgba, argb8888, 4 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_ARGB32;
        } else if (memcmp(rgba, argb8888, 3 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_RGB32;
        } else if (memcmp(rgba, abgr8888, 3 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_RGB32;
            // pixeltype = BGRPixel;
        }
        break;
    }
    // code omitted
}
What does it mean if a framebuffer supports alpha? Don't framebuffers typically represent monitors?
I am investigating the possibility of sending the alpha channel out over HDMI for video overlay on an FPGA chip, similar to this user's question.
I am wondering: if I have an external monitor that somehow registers itself within Linux with a depth of 32 bits including an alpha channel, will the alpha get sent out over HDMI?

The alpha component is not transmitted to the monitor. But:
Alpha might be used by the compositor, allowing a window on screen to be transparent. For example, you can use the alpha channel in a WebGL framebuffer to show the document underneath the WebGL canvas.
You might use the alpha component in your application, even if the compositor doesn't use it.
It is more convenient to waste a byte of memory per pixel than it is to have an odd-sized pixel. Hardware framebuffers support a variety of 1, 2, and 4-channel formats, but only a few 3-channel formats.
The HDMI cable itself can carry a small variety of different video formats, such as RGB and YCbCr, with variations in subsampling and bit depth. The advantage to even-sized pixel formats does not apply to streamed data.
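For what it's worth, you can dump what your framebuffer actually advertises (the same fb_var_screeninfo Qt inspects above) with a small C program. A minimal sketch, assuming the device node is /dev/fb0:

/* Sketch: inspect the framebuffer's pixel format, including the transp
   (alpha) bitfield, via the FBIOGET_VSCREENINFO ioctl. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo info;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &info) < 0) {
        perror("FBIOGET_VSCREENINFO");
        close(fd);
        return 1;
    }

    printf("%ux%u, %u bpp\n", info.xres, info.yres, info.bits_per_pixel);
    printf("red:    offset %u, length %u\n", info.red.offset, info.red.length);
    printf("green:  offset %u, length %u\n", info.green.offset, info.green.length);
    printf("blue:   offset %u, length %u\n", info.blue.offset, info.blue.length);
    printf("transp: offset %u, length %u\n", info.transp.offset, info.transp.length);
    /* transp.length == 8 at offset 24 is what Qt matches as Format_ARGB32 */

    close(fd);
    return 0;
}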

Related

Access framebuffers from compute shaders

I have a compute shader which renders an image. The image is basically a finished frame.
I would like to render said image on the screen. The most obvious way to do this would be to render straight to the framebuffer instead of rendering to this image. I have been told, however, that this requires the storage bit to be set on the framebuffer, which is not the case on my machine.
The next best thing is then to copy over the image to the framebuffer. This requires the target bit to be set on the framebuffer image, which luckily happens to be the case on my machine.
However, when I try to copy into the framebuffer, Vulkan gives an access error, saying the framebuffer is not initialised.
AccessError {
    error: ImageNotInitialized { requested: PresentSrc },
    command_name: "vkCmdCopyImage",
    command_param: "destination",
    command_offset: 4
}
If it matters, I am using the Rust bindings to Vulkan. The code is quite bulky; the entire thing is available on GitLab. The command buffer is created as follows:
let command_buffer = AutoCommandBufferBuilder::new(queue.device().clone(), queue.family())?
    .clear_color_image(image.clone(), ClearValue::Float([1.0, 0.0, 0.0, 1.0]))?
    .dispatch(
        [(WIDTH / 16) as u32, (HEIGHT / 16) as u32, 1],
        compute_pipeline.clone(),
        set.clone(),
        (),
    )?
    .copy_image(
        image.clone(),
        [0, 0, 0],
        0,
        0,
        render_manager.images[next_image_index].clone(),
        [0, 0, 0],
        0,
        0,
        [WIDTH as u32, HEIGHT as u32, 1],
        1,
    )?
    .build()?;
I know I could just render the image onto a quad, which is what I am doing now, but it's a lot of bulky code that doesn't do much.
The doc says about ImageNotInitialized:
Trying to use an image without transitioning it from the "undefined" or "preinitialized" layouts first.
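For reference, this is roughly what that missing step looks like in the raw Vulkan C API. It is only a sketch of the concept (the code above uses vulkano, so this is not a drop-in fix): the destination image has to be transitioned out of the undefined layout before vkCmdCopyImage can write to it.

/* Sketch: transition an image from UNDEFINED to TRANSFER_DST_OPTIMAL so a
   subsequent vkCmdCopyImage is allowed to write into it. */
#include <vulkan/vulkan.h>

void transition_to_transfer_dst(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {0};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;             /* contents discarded */
    barrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;  /* valid copy target */
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = image;
    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    barrier.subresourceRange.levelCount = 1;
    barrier.subresourceRange.layerCount = 1;
    barrier.srcAccessMask = 0;
    barrier.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);
}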

How do I set volume in SuperCollider in Decibels?

I have a simple SinOsc which plays a 432 Hz tone. I want to be able to set that tone to -97 dB. Here's what I have so far:
{
    SinOsc.ar(432, 0, 0.01 /* edit this for volume */, 0)
}.play;
Even though I can see how to edit volume, I don't see a way to set the precise dB level.
In case you are wondering why I am doing this, I need a tone to test 24-bit vs. 16-bit audio.
How can I set the precise dB level or get monitoring to show me what level I am at?
Ah, cool to see a SuperCollider question in Top Questions.
I believe the method you're looking for is .dbamp. See the docs.
Example: (from The SuperCollider Book, Chapter 2)
/* Figure 2.6 */
(
SynthDef(\UGen_ex6, {arg gate = 1, roomsize = 200, revtime = 450;
    var src, env, gverb;
    env = EnvGen.kr(Env([0, 1, 0], [1, 4], [4, -4], 1), gate, doneAction: 2);
    src = Resonz.ar(
        Array.fill(4, {Dust.ar(6)}),
        1760 * [1, 2.2, 3.95, 8.76] +
            Array.fill(4, {LFNoise2.kr(1, 20)}),
        0.01).sum * 30.dbamp;
    gverb = GVerb.ar(
        src,
        roomsize,
        revtime,
        // feedback loop damping
        0.99,
        // input bw of signal
        LFNoise2.kr(0.1).range(0.9, 0.7),
        // spread
        LFNoise1.kr(0.2).range(0.2, 0.6),
        // almost no direct source
        -60.dbamp,
        // some early reflection
        -18.dbamp,
        // lots of the tail
        3.dbamp,
        roomsize);
    Out.ar(0, gverb * env)
}).add;
)
a = Synth(\UGen_ex6);
If that 0.01 value is the gain, then simply replace it with the result of
10^(-97/20) = 0.00001412537
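If you want to double-check the conversion outside SuperCollider, amplitude = 10^(dB/20) and dB = 20*log10(amplitude). A tiny C sketch (function names are just illustrative):

/* Quick check of the dB <-> amplitude conversion used above. */
#include <math.h>
#include <stdio.h>

static double db_to_amp(double db)  { return pow(10.0, db / 20.0); }
static double amp_to_db(double amp) { return 20.0 * log10(amp); }

int main(void)
{
    printf("%.11f\n", db_to_amp(-97.0));   /* ~0.00001412537 */
    printf("%.2f dB\n", amp_to_db(0.01));  /* the question's 0.01 mul is -40 dB */
    return 0;
}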

Direct3D Window->Bounds.Width/Height differs from real resolution

I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the Window object differ from the resolution configured in Windows: there I set 1920x1080, but the width and height from the Window object are 1371x771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe this causes the problem, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPS). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
    return int(dips * m_DPI / 96.f + 0.5f);
}

inline float ConvertPixelsToDips(int pixels) const
{
    return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.
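As a quick sanity check of the numbers in the question: 1920 / 1371 ≈ 1.4, so the panel is most likely running at 140% display scaling (LogicalDpi ≈ 134.4). A small C sketch of the same arithmetic; the 140% figure is inferred from the question's numbers, not stated in it:

/* Verify that 1371x771 DIPs corresponds to a 1920x1080 panel at 140% scaling. */
#include <stdio.h>

static int dips_to_pixels(float dips, float dpi) { return (int)(dips * dpi / 96.f + 0.5f); }

int main(void)
{
    float dpi = 96.f * 1.4f;  /* 140% display scaling -> LogicalDpi of 134.4 */
    printf("%d x %d\n", dips_to_pixels(1371.f, dpi), dips_to_pixels(771.f, dpi));
    /* prints 1919 x 1079, i.e. 1371x771 DIPs is the 1920x1080 panel (modulo rounding) */
    return 0;
}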

How to interpret the field 'data' of an XImage

I am trying to understand how the data obtained from XGetImage is laid out in memory:
XImage *img = XGetImage(display, root, 0, 0, width, height, AllPlanes, ZPixmap);
Now suppose I want to decompose each pixel value in red, blue, green channels. How can I do this in a portable way? The following is an example, but it depends on a particular configuration of the XServer and does not work in every case:
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        unsigned long pixel = XGetPixel(img, x, y);
        unsigned char blue  = pixel & blue_mask;
        unsigned char green = (pixel & green_mask) >> 8;
        unsigned char red   = (pixel & red_mask) >> 16;
        //...
    }
In the above example I am assuming a particular order of the RGB channels in pixel and also that pixels are 24 bits deep: in fact, I have img->depth == 24 and img->bits_per_pixel == 32 (the screen is also 24-bit deep). But this is not the generic case.
As a second step I want to get rid of XGetPixel and use or describe img->data directly. The first thing I need to know is whether there is anything in Xlib which gives me exactly the information I need to interpret how the image is built, starting from the img->data field, namely:
the order of R,G,B channels in each pixel;
the number of bits for each pixels;
the number of bits for each channel;
if possible, a corresponding FOURCC
The shift is a simple function of the mask:
int get_shift(int mask) {
    int shift = 0;
    while (mask) {
        if (mask & 1) break;
        shift++;
        mask >>= 1;
    }
    return shift;
}
Number of bits in each channel is just the number of 1 bits in its mask (count them). The channel order is determined by the shifts (if the red shift is 0, then the first channel is R, etc.).
I think the valid values for bits_per_pixel are 1, 2, 4, 8, 15, 16, 24 and 32 (15 and 16 bits are the same 2 bytes per pixel format, but the former has 1 bit unused). I don't think it's worth anyone's time to support anything but 24 and 32 bpp.
X11 is not concerned with media files, so no 4CC code.
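Putting the two ideas together, a portable decomposition can be driven entirely by the masks in the XImage. A sketch in plain C; channel_info and decompose are illustrative helpers, not Xlib API, and the masks are assumed to be the usual non-zero, contiguous ones:

#include <X11/Xlib.h>
#include <X11/Xutil.h>

/* derive shift (trailing zeros) and bit count from a channel mask */
static void channel_info(unsigned long mask, int *shift, int *bits) {
    *shift = 0;
    *bits = 0;
    while (mask && !(mask & 1)) { mask >>= 1; (*shift)++; }
    while (mask & 1) { mask >>= 1; (*bits)++; }
}

/* extract one pixel's channels, scaled to 8 bits each */
static void decompose(XImage *img, int x, int y,
                      unsigned int *r, unsigned int *g, unsigned int *b) {
    unsigned long pixel = XGetPixel(img, x, y);
    int rs, rb, gs, gb, bs, bb;
    channel_info(img->red_mask,   &rs, &rb);
    channel_info(img->green_mask, &gs, &gb);
    channel_info(img->blue_mask,  &bs, &bb);
    *r = (unsigned int)(((pixel & img->red_mask)   >> rs) * 255 / ((1u << rb) - 1));
    *g = (unsigned int)(((pixel & img->green_mask) >> gs) * 255 / ((1u << gb) - 1));
    *b = (unsigned int)(((pixel & img->blue_mask)  >> bs) * 255 / ((1u << bb) - 1));
}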
This can be read from the XImage structure itself.
the order of R,G,B channels in each pixel;
This is contained in this field of the XImage structure:
int byte_order; /* data byte order, LSBFirst, MSBFirst */
which tells you whether it's RGB or BGR (because it only depends on the endianness of the machine).
the number of bits for each pixels;
can be obtained from this field:
int bits_per_pixel; /* bits per pixel (ZPixmap) */
which is basically the number of bits set in each of the channel masks:
unsigned long red_mask; /* bits in z arrangement */
unsigned long green_mask;
unsigned long blue_mask;
the number of bits for each channel;
See above, or you can use the code from #n.m.'s answer to count the bits yourself.
Yeah, it would be great if they had put the bit-shift constants in that structure too, but apparently they decided not to, since the pixels are aligned to bytes anyway, in "standard order" (RGB). Xlib makes sure to convert the data to that order for you when it retrieves it from the X server, even if it is stored in a different format server-side. So it's always in RGB format, byte-aligned, but depending on the endianness of the machine the bytes inside an unsigned long can appear in reverse order, hence the byte_order field to tell you about that.
So in order to extract the channels, just use the 0, 8 and 16 shifts after masking with red_mask, green_mask and blue_mask; just make sure you shift the right bytes depending on byte_order and it should work fine.

Am I doing something wrong, or do Intel graphics cards suck so bad?

I have
VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.
I'm rendering a single static VBO per frame. The VBO has 30,000 triangles, with 3 lights and one texture, and I'm getting 15 FPS.
Are Intel cards really that bad, or am I doing something wrong?
The drivers are the standard open-source drivers from Intel.
My code:
void init() {
    glGenBuffersARB(4, vbos);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}
void draw() {
    glPushMatrix();
    const Vector3f O = ps.getPosition();
    glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
    glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z()
        - originXYZ[2]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
    glColorPointer(4, GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
    glNormalPointer(GL_FLOAT, 0, 0);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);

    texture->bindTexture();
    glDrawArrays(GL_TRIANGLES, 0, verticesNum);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // disabling VBO

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glPopMatrix();
}
EDIT: maybe it's not clear: the initialization is in a different function and is only called once.
A few hints:
With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than 1000 entries. Interleaving the data of course implies that all of it is held by a single VBO.
Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Instead, draw using glDrawElements. You can also use the index array to implement some cheap LOD.
Experiment with the number of indices given to glDrawElements. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure. (See the sketch right after these hints.)
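A minimal sketch of the first two hints: one interleaved VBO plus an index buffer drawn with glDrawElements. The Vertex layout and the verts/indices parameters are placeholders rather than the poster's variables, and GL headers/extension loading are assumed to be set up as in the original code:

#include <stddef.h>  /* offsetof */

/* one interleaved vertex: position, color, normal, texcoord packed together */
typedef struct {
    GLfloat pos[3];
    GLfloat color[4];
    GLfloat normal[3];
    GLfloat tex[2];
} Vertex;

static GLuint vbo, ibo;

void initInterleaved(const Vertex *verts, GLsizei vertCount,
                     const GLuint *indices, GLsizei indexCount) {
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount * sizeof(Vertex), verts, GL_STATIC_DRAW_ARB);

    glGenBuffersARB(1, &ibo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexCount * sizeof(GLuint), indices, GL_STATIC_DRAW_ARB);
}

void drawInterleaved(GLsizei indexCount) {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* single VBO: the stride is the full vertex size, offsets select the attribute */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, pos));
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, color));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, normal));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, tex));

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}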
Oh and in your code the last two lines
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
I think you meant those to be glDisableClientState.
Make sure your system has OpenGL acceleration enabled:
$ glxinfo | grep rendering
direct rendering: Yes
If you get 'no', then you don't have OpenGL acceleration.
Thanks for the answers.
Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or Glest work fast enough, so the problem is probably in my code.
I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics too; that's why I posted this question.
