Direct3D 11 texture in ABGR format when RGBA requested

I am trying to learn how to do texture mapping in Direct3D 11.
I have successfully mapped a texture onto a quad. The problem is that Direct3D is interpreting my pixel data as ABGR, when I specifically requested DXGI_FORMAT_R8G8B8A8_UNORM, and I don't understand why.
Here is the code I use to create my ID3D11Texture2D object:
UINT pixels[] = {
0xff00ff00, 0xff0000ff,
0xffff0000, 0xffffffff,
};
D3D11_SUBRESOURCE_DATA subresourceData;
subresourceData.pSysMem = pixels;
subresourceData.SysMemPitch = 8;
subresourceData.SysMemSlicePitch = 16;
D3D11_TEXTURE2D_DESC texture2dDesc;
texture2dDesc.Width = 2;
texture2dDesc.Height = 2;
texture2dDesc.MipLevels = 1;
texture2dDesc.ArraySize = 1;
texture2dDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texture2dDesc.SampleDesc.Count = 1;
texture2dDesc.SampleDesc.Quality = 0;
texture2dDesc.Usage = D3D11_USAGE_DEFAULT;
texture2dDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texture2dDesc.CPUAccessFlags = 0;
texture2dDesc.MiscFlags = 0;
ID3D11Texture2D *texture;
device->CreateTexture2D(&texture2dDesc, &subresourceData, &texture);
The code for creating the ID3D11ShaderResourceView object:
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
memset(&shaderResourceViewDesc, 0, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
shaderResourceViewDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
ID3D11ShaderResourceView *shaderResourceView;
device->CreateShaderResourceView(texture, &shaderResourceViewDesc, &shaderResourceView);
The code for creating the ID3D11SamplerState object:
D3D11_SAMPLER_DESC samplerDesc;
memset(&samplerDesc, 0, sizeof(D3D11_SAMPLER_DESC));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
ID3D11SamplerState *samplerState;
device->CreateSamplerState(&samplerDesc, &samplerState);
And finally the HLSL pixel shader:
Texture2D image;
SamplerState samplerState;
float4 PixelMain(float4 position: SV_POSITION, float2 texel: TEXCOORD): SV_TARGET {
return image.Sample(samplerState, texel);
}
The colors of the rendered texture are
green red
blue white
But they should be
black red
black white
And, of course, I remembered to call deviceContext->PSSetShaderResources(0, 1, &shaderResourceView); and deviceContext->PSSetSamplers(0, 1, &samplerState);
Can anyone help me understand what's going on here?

This is an endianness issue. Most processors you're likely to deal with store integers in "little-endian" order. This means that an integral value of 0xAABBCCDD ends up in memory as the four bytes 0xDD, 0xCC, 0xBB, 0xAA, in that order. I know it's counter-intuitive, so I encourage you to use the Visual Studio memory window to view how your pixels array ends up represented in memory.
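As a quick illustration (this standalone snippet is not part of your code), you can dump the bytes of one of those pixel values yourself:
// Minimal sketch: print the in-memory byte order of one 32-bit pixel value.
#include <cstdio>
int main() {
    unsigned int pixel = 0xff00ff00; // first entry of the question's pixels[]
    const unsigned char *bytes = (const unsigned char *)&pixel;
    // On a little-endian CPU this prints "00 ff 00 ff", which an R8G8B8A8 texture
    // reads as R=0x00, G=0xff, B=0x00, A=0xff, i.e. opaque green.
    std::printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}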
Direct3D/your GPU does not treat a pixel as a 32-bit integer but as four consecutive 8-bit values, so it reads the bytes exactly as they sit in memory and sees an effective pixel value of 0xDDCCBBAA. The solution is to specify your image as a byte array with four entries per pixel, one per component. In your case:
BYTE pixels[] = {
0xff, 0x00, 0xff, 0x00, 0xff, 0x00, 0x00, 0xff,
0xff, 0xff, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff,
};
Also, the SysMemSlicePitch parameter is irrelevant for 2D textures. No need to set it to 16. And an alpha value of zero does not make your pixels black. You need to turn on and play with the blending states to achieve that.
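If you do want alpha to have a visible effect, a minimal blend-state sketch could look like the following (illustrative only, not taken from your code; it assumes your existing device and deviceContext pointers and conventional source-over blending):
// Sketch: enable standard alpha blending on render target 0.
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
ID3D11BlendState *blendState = nullptr;
device->CreateBlendState(&blendDesc, &blendState);
deviceContext->OMSetBlendState(blendState, nullptr, 0xffffffff);
With blending disabled (the default), the alpha channel is simply ignored when the quad is drawn, which is why your zero-alpha pixels still show up fully opaque.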

Related

How do I reduce the color palette of an image in Python 3?

I've seen answers for Python 2, but support for it has ended and the Image module for Python2 is incompatible with aarch64.
I'm just looking to convert some images to full CGA color, without the hassle of manually editing the image.
Thanks in advance!
Edit: I think I should mention that I'm still pretty new to Python. I think I should also mention I'm trying to use colors from a typical CGA adapter.
Here's one way to do it with PIL - you may want to check that I have transcribed the list of colours accurately from here:
#!/usr/bin/env python3
from PIL import Image
# Create new 1x1 palette image
CGA = Image.new('P', (1,1))
# Make a CGA palette and push it into image
CGAcolours = [
0x00, 0x00, 0x00,
0x00, 0x00, 0xaa,
0x00, 0xaa, 0x00,
0x00, 0xaa, 0xaa,
0xaa, 0x00, 0x00,
0xaa, 0x00, 0xaa,
0xaa, 0x55, 0x00,
0xaa, 0xaa, 0xaa,
0x55, 0x55, 0x55,
0x55, 0x55, 0xff,
0x55, 0xff, 0x55,
0x55, 0xff, 0xff,
0xff, 0x55, 0x55,
0xff, 0x55, 0xff,
0xff, 0xff, 0x55,
0xff, 0xff, 0xff]
CGA.putpalette(CGAcolours)
# Open our image of interest
im = Image.open('swirl.jpg')
# Quantize to our lovely CGA palette, without dithering
res = im.quantize(colors=len(CGAcolours)//3, palette=CGA, dither=Image.Dither.NONE)
res.save('result.png')
That transforms this:
into this:
Or, if you don't really want to write any Python, you can do it on the command-line with ImageMagick in a highly analogous way:
# Create 16-colour CGA swatch in file "cga16.gif"
magick xc:'#000000' xc:'#0000AA' xc:'#00AA00' xc:'#00AAAA' \
xc:'#AA0000' xc:'#AA00AA' xc:'#AA5500' xc:'#AAAAAA' \
xc:'#555555' xc:'#5555FF' xc:'#55FF55' xc:'#55FFFF' \
xc:'#FF5555' xc:'#FF55FF' xc:'#FFFF55' xc:'#FFFFFF' \
+append cga16.gif
That creates this 16x1 swatch (enlarged so you can see it):
# Remap colours in "swirl.jpg" to the CGA palette
magick swirl.jpg +dither -remap cga16.gif resultIM.png
Doubtless the code could be converted for EGA, VGA and the other early PC hardware palettes... or even to the Rubik's Cube palette:
Resulting in:
Or the Bootstrap palette:
Or the Google palette:

Display "Hello World" on framebuffer in linux

I am using Linux 3.14 on my ARM target and I want to show some lines of characters on the display using the framebuffer. I can change the colors of the display using the code below.
#include <stdio.h>
unsigned char colours[8][4] = {
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
{ 0x00, 0xFF, 0x00, 0xFF }, // green
};
int frames[] = {0,5,10,15,20,25,30};
int columns = 800;
int lines = 480;
#define ARRAY_SIZE(a) (sizeof(a)/sizeof(a[0]))
int frame(int c, int l){
    int i;
    for(i=0; i < ARRAY_SIZE(frames); i++){
        if((c==frames[i])&&((l>=frames[i])&&l<=(lines-frames[i]))){
            return 1;
        }
        if((c==columns-frames[i])&&((l>=frames[i])&&l<=(lines-frames[i]))){
            return 1;
        }
        if((l==frames[i])&&((c>=frames[i])&&c<=(columns-frames[i]))){
            return 1;
        }
        if((l==lines-frames[i])&&((c>=frames[i])&&c<=(columns-frames[i]))){
            return 1;
        }
    }
    return 0;
}
int main(int argc, char **argv)
{
    unsigned char pixel[3];
    int l, c;
    char *filename = argv[1];
    printf("Device : %s\n", filename);
    FILE *f = fopen(filename, "wb");
    if(f){
        printf("Device open success \n");
        for(l=0; l<lines; l++){
            for(c=0; c < columns; c++){
                if(frame(c,l)){
                    fwrite(colours[3], 1, sizeof(colours[3]), f);
                }else{
                    int colour = c/(columns/ARRAY_SIZE(colours));
                    fwrite(colours[colour], 1, sizeof(colours[colour]), f);
                }
            }
        }
        fclose(f);
    }
    else
        printf("Device open failed \n");
    return 0;
}
In the same way, I want to show some lines of characters on the display. For example, I want to show the characters "Hello world !!!" on the display using the framebuffer.
Could anyone help me work it out?
You can find an elegant piece of code to do this in tslib. tslib is a C library for filtering touchscreen events. Actually, you don't need tslib itself for your purpose (you don't even have to build it); in its tests directory you can find a utility for accessing the framebuffer.
It provides fbutils.h, whose implementation you can find in fbutils-linux.c. This code is very simple: it directly manipulates the Linux framebuffer and has no dependencies. It is currently not even 500 lines, and if you only want to display text, you can remove the irrelevant functionality. It supports two fonts, font_8x8 and font_8x16, whose definitions you can find in the respective .c files.
I won't go into the code details as it is easy to understand. I will just list the current API and provide simpler code for the open and close functionality.
int open_framebuffer(void);
void close_framebuffer(void);
void setcolor(unsigned colidx, unsigned value);
void put_cross(int x, int y, unsigned colidx);
void put_string(int x, int y, char *s, unsigned colidx);
void put_string_center(int x, int y, char *s, unsigned colidx);
void pixel(int x, int y, unsigned colidx);
void line(int x1, int y1, int x2, int y2, unsigned colidx);
void rect(int x1, int y1, int x2, int y2, unsigned colidx);
void fillrect(int x1, int y1, int x2, int y2, unsigned colidx);
To manipulate the Linux framebuffer, you first memory-map it into your process's address space. After memory mapping you can access it just like an array. Using a few ioctl calls you can get information about the framebuffer such as resolution, bytes per pixel, etc. See here for details.
In the code below, you pass the name of the fb device, such as /dev/fb0, to open it. You can use the rest of the functions from the original code for drawing.
int open_framebuffer(const char *fbdevice)
{
    uint32_t y, addr;

    fb_fd = open(fbdevice, O_RDWR);
    if (fb_fd == -1) {
        perror("open fbdevice");
        return -1;
    }

    if (ioctl(fb_fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl FBIOGET_FSCREENINFO");
        close(fb_fd);
        return -1;
    }

    if (ioctl(fb_fd, FBIOGET_VSCREENINFO, &var) < 0) {
        perror("ioctl FBIOGET_VSCREENINFO");
        close(fb_fd);
        return -1;
    }

    xres_orig = var.xres;
    yres_orig = var.yres;

    if (rotation & 1) {
        /* 1 or 3 */
        y = var.yres;
        yres = var.xres;
        xres = y;
    } else {
        /* 0 or 2 */
        xres = var.xres;
        yres = var.yres;
    }

    fbuffer = mmap(NULL,
                   fix.smem_len,
                   PROT_READ | PROT_WRITE, MAP_FILE | MAP_SHARED,
                   fb_fd,
                   0);
    if (fbuffer == (unsigned char *)-1) {
        perror("mmap framebuffer");
        close(fb_fd);
        return -1;
    }
    memset(fbuffer, 0, fix.smem_len);

    bytes_per_pixel = (var.bits_per_pixel + 7) / 8;
    transp_mask = ((1 << var.transp.length) - 1) <<
                  var.transp.offset; /* transp.length unlikely > 32 */
    line_addr = malloc(sizeof(*line_addr) * var.yres_virtual);
    addr = 0;
    for (y = 0; y < var.yres_virtual; y++, addr += fix.line_length)
        line_addr[y] = fbuffer + addr;

    return 0;
}
void close_framebuffer(void)
{
    memset(fbuffer, 0, fix.smem_len);
    munmap(fbuffer, fix.smem_len);
    close(fb_fd);
    free(line_addr);

    xres = 0;
    yres = 0;
    rotation = 0;
}
You can find examples of its usage in test programs in the folder, such as ts_test.c.
You can extend this code to support other fonts, display images etc.
Good luck!
First, I strongly suggest avoiding the fopen/fwrite functions when accessing devices. These functions maintain internal buffers that can be troublesome. Prefer the open and write functions.
Next, you can't go on with a series of if .. then .. else .. to render real graphics. You need to allocate a buffer that represents your framebuffer. Its size will be columns * lines * 4 (you need 1 byte per color component). To write a pixel, you have to use something like:
buf[(l * columns + c) * 4 + 0] = red_value;
buf[(l * columns + c) * 4 + 1] = green_value;
buf[(l * columns + c) * 4 + 2] = blue_value;
buf[(l * columns + c) * 4 + 3] = alpha_value;
Once your buffer is fully filled, write it with:
write(fd, buf, columns * lines * 4);
(where fd is the file descriptor returned by fd = open("/dev/fb0", O_WRONLY);)
Check that you are now able to set arbitrary pixels on your framebuffer.
Finally, you need a database of rendered characters. You could create it yourself, but I suggest using https://github.com/dhepper/font8x8.
The fonts are monochrome, so each bit represents one pixel. In your framebuffer you need 4 bytes per pixel, so you will have to do some conversion.
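A rough sketch of that conversion is below (hypothetical helpers, assuming the font8x8_basic array from that repository, in which the least significant bit of each row byte is the leftmost pixel - check the repository's README and header name for your copy):
#include "font8x8_basic.h" /* from https://github.com/dhepper/font8x8 (file name assumed) */

/* Draw one 8x8 glyph into a columns*lines*4 RGBA buffer at position (x0, y0). */
void draw_glyph(unsigned char *buf, int columns, int x0, int y0, char ch)
{
    for (int y = 0; y < 8; y++) {
        unsigned char row = (unsigned char)font8x8_basic[(unsigned char)ch][y];
        for (int x = 0; x < 8; x++) {
            if (row & (1 << x)) {                  /* bit set -> pixel on */
                unsigned char *p = buf + ((y0 + y) * columns + (x0 + x)) * 4;
                p[0] = 0xFF;  /* red   */
                p[1] = 0xFF;  /* green */
                p[2] = 0xFF;  /* blue  */
                p[3] = 0xFF;  /* alpha */
            }
        }
    }
}

/* Draw a whole string, advancing 8 pixels per glyph. */
void draw_string(unsigned char *buf, int columns, int x0, int y0, const char *s)
{
    for (int i = 0; s[i] != '\0'; i++)
        draw_glyph(buf, columns, x0 + 8 * i, y0, s[i]);
}
You would call something like draw_string(buf, columns, 10, 10, "Hello world !!!") before writing the buffer to the device.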
This is a really basic way to access the framebuffer; there are plenty of improvements to make:
columns, lines and the pixel representation should be negotiated/retrieved using the FBIO*ET_*SCREENINFO ioctls.
using write to access the framebuffer is not the preferred method. It is slow and does not allow updating the framebuffer easily. The preferred method uses mmap.
if you want to animate the framebuffer, you need to use double buffering: allocate a buffer twice as large as necessary, write alternately to the first or second half, and switch the displayed half with FBIOPAN_DISPLAY (see the sketch after this list).
font8x8 is not ideal. You may want to use another font available on the web. You then need a library to decode the font format (libfreetype) and a library to render a glyph (= a letter) at a particular size into a buffer (the rasterize step) that you can copy to your screen (libpango).
you may want to accelerate the buffer copy between your glyph database and your screen framebuffer (the compose step), but that is a far longer story that involves true GPU drivers.
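For the double-buffering point, here is a hedged sketch of the panning step (the flip name is illustrative; it assumes a struct fb_var_screeninfo obtained with FBIOGET_VSCREENINFO and a driver that accepts panning):
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <stdio.h>

/* Show the other half of a double-height virtual framebuffer.
   Draw the next frame into the currently hidden half before calling this. */
int flip(int fb_fd, struct fb_var_screeninfo *var)
{
    var->yoffset = (var->yoffset == 0) ? var->yres : 0;
    if (ioctl(fb_fd, FBIOPAN_DISPLAY, var) < 0) {
        perror("ioctl FBIOPAN_DISPLAY");
        return -1;
    }
    return 0;
}
Beforehand, request the second screenful of memory by setting var.yres_virtual = 2 * var.yres and issuing FBIOPUT_VSCREENINFO; not every driver accepts this.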

Reading code from RFID card

I have a problem with reading the code from an RFID card.
Does any conversion algorithm exist?
Examples of codes:
04006d0ba0 -> 00008596950352
0d001c59b3 -> 00047253268956
0d001c5134 -> 00047253268674
0d001c9317 -> 00047253265550
0d001c93ed -> 00047253265531
0d001c1b12 -> 00047253261700
0d001c1b1d -> 00047253261707
e800ef0aac -> 00485339628883
The same RFID card gives different outputs on different readers...
I know a similar topic already exists, but I think this is not the same problem...
The conversion looks quite simple:
Let's assume that you want to convert "04006d0ba0" to "00008596950352".
Take each nibble from the hexadecimal number "04006d0ba0" (i.e. "0", then "4", then "0", then "0", then "6", ...)
Reverse the bits of each nibble (the least significant bit becomes the most significant bit, the second bit becomes the second-to-last bit, and so on), e.g. "0" (= 0000) remains "0" (= 0000), "4" (= 0100) becomes "2" (= 0010), "6" (= 0110) remains "6" (= 0110), etc.
Convert the result into decimal number format. For "04006d0ba0" the nibble-reversed value is "02006b0d50", and 0x02006B0D50 = 8596950352, which matches the expected output.
In Java, this could look something like this:
private static final byte[] REVERSE_NIBBLE = {
    0x00, 0x08, 0x04, 0x0C, 0x02, 0x0A, 0x06, 0x0E,
    0x01, 0x09, 0x05, 0x0D, 0x03, 0x0B, 0x07, 0x0F
};

private long convert(byte[] input) {
    byte[] output = new byte[input.length];
    for (int i = 0; i < input.length; ++i) {
        output[i] = (byte)((REVERSE_NIBBLE[(input[i] >>> 4) & 0x0F] << 4) |
                            REVERSE_NIBBLE[ input[i]        & 0x0F]);
    }
    return new BigInteger(1, output).longValue();
}

Setting values of bytearray

I have a BYTE data[3]. The first element, data[0] has to be a BYTE with very specific values which are as follows:
typedef enum
{
SET_ACCURACY=0x01,
SET_RETRACT_LIMIT=0x02,
SET_EXTEND_LIMT=0x03,
SET_MOVEMENT_THRESHOLD=0x04,
SET_STALL_TIME= 0x05,
SET_PWM_THRESHOLD= 0x06,
SET_DERIVATIVE_THRESHOLD= 0x07,
SET_DERIVATIVE_MAXIMUM = 0x08,
SET_DERIVATIVE_MINIMUM= 0x09,
SET_PWM_MAXIMUM= 0x0A,
SET_PWM_MINIMUM = 0x0B,
SET_PROPORTIONAL_GAIN = 0x0C,
SET_DERIVATIVE_GAIN= 0x0D,
SET_AVERAGE_RC = 0x0E,
SET_AVERAGE_ADC = 0x0F,
GET_FEEDBACK=0x10,
SET_POSITION=0x20,
SET_SPEED= 0x21,
DISABLE_MANUAL = 0x30,
RESET= 0xFF,
}TYPE_CMD;
As is, if I set data[0]=SET_ACCURACY it doesn't set it to 0x01, it sets it to 1, which is not what I want. data[0] must take the value 0x01 when I set it equal to SET_ACCURACY. How do I make it so that it does this for not only SET_ACCURACY, but all the other values as well?
EDIT: Actually this works... I had a different error in my code that I attributed to this. Sorry!
Thanks!
"data[0]=SET_ACCURACY doesn't set it to 0x01, it sets it to 1"
It assigns the value of SET_ACCURACY to data[0], which means that the bits 00000001 are stored in memory at address &data[0]. 0x01 is the hexadecimal representation of that byte and 1 is its decimal representation; they are exactly the same value.
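To see that these are just two spellings of the same stored value, here is a tiny standalone illustration (not from the question's code):
#include <stdio.h>

int main(void)
{
    unsigned char b = 0x01;          /* exactly the same as writing b = 1 */
    printf("%d 0x%02X\n", b, b);     /* prints: 1 0x01 */
    printf("%d\n", 0x01 == 1);       /* prints: 1 (the comparison is true) */
    return 0;
}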

Changing pixel color in SDL

I'm making a game that only uses PNGs with black and white pixels. But there are times when I want to change the color of the white pixels to something different, like green (#00FF00).
How would I go about doing this exactly?
EDIT:
Okay, I figured out a solution
Here is a simple function to do so:
void setColor(SDL_Surface *surface, SDL_Color color) {
    Uint16 *pixels = (Uint16 *) surface->pixels; // Get the pixels from the Surface

    // Iterate through the pixels and change the color
    for (int i = 0; i < (surface->w * surface->h); i++) {
        if (pixels[i] == SDL_MapRGB(surface->format, 0xFF, 0xFF, 0xFF)) // Search for white pixels
            pixels[i] = SDL_MapRGB(surface->format, color.r, color.g, color.b);
    }
}
Something to keep in mind: change the "Uint16" to "Uint32" if you are using a 32-bit surface, or to "Uint8" for an 8-bit surface.
I'm not sure how fast this code is, but it gets the job done.
That depends on exactly what you're trying to set the color of.
Without more information, two APIs that come immediately to mind are "SDL_SetColors()" and "SDL_SetPalette()".
But the real answer is "it depends".
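As a hedged sketch of the palette route (SDL 1.2-style API; the whiteToGreen name, the surface, and the palette index of its white entry are assumptions about your particular image):
#include <SDL/SDL.h>

/* Remap one palette entry of an 8-bit paletted surface to green. */
void whiteToGreen(SDL_Surface *surface, int whiteIndex)
{
    SDL_Color green = { 0x00, 0xFF, 0x00, 0 };
    /* update both the logical and the physical palette */
    SDL_SetPalette(surface, SDL_LOGPAL | SDL_PHYSPAL, &green, whiteIndex, 1);
}
This only applies if your PNGs are loaded as paletted surfaces; for true-color surfaces the pixel-rewriting approach above is the way to go.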
