layout-xxxhdpi renders differently on a few Android Studio devices? - android-studio

I am new to Android Studio. I have set a layout for xxxhdpi devices. The Pixel XL, Nexus 6, and Nexus 6P have a 1440*2560 screen at 560 dpi and their layouts are fine, but on the Pixel 2 XL, which is 1440*2880 at 560 dpi, the layout is pushed downward. How can I reduce the difference for that device?

To keep all these screen resolution and density combinations simple, Android wants us to think in DP units (density-independent pixels, also called DIP), which are corrected for density. From a design and development perspective, Android treats any device by its DP size instead of its actual pixel size. Android calculates the DP size using the following formula:
DP size of any device = actual resolution / density conversion factor.
Density conversion factor for density buckets are as follows:
ldpi: 0.75
mdpi: 1.0 (base density)
hdpi: 1.5
xhdpi: 2.0
xxhdpi: 3.0
xxxhdpi: 4.0
Examples of resolution/density conversion to DP:
ldpi device of 240 X 320 px will be of 320 X 426.66 DP.
240 / 0.75 = 320 dp
320 / 0.75 = 426.66 dp
xxhdpi device of 1080 x 1920 px (Samsung S4, S5) will be of 360 x 640 dp.
1080 / 3 = 360 dp
1920 / 3 = 640 dp
For more details, follow this link: http://vinsol.com/blog/2014/11/20/tips-for-designers-from-a-developer/
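The px-to-dp conversion above can be sketched in a few lines; this is a hypothetical Python helper for illustration, not part of the Android SDK. It also shows why the Pixel 2 XL layout shifts: at the same density bucket, its taller screen yields 720 dp of height instead of 640 dp.

```python
# Density conversion factors for the standard Android density buckets
DENSITY_FACTOR = {"ldpi": 0.75, "mdpi": 1.0, "hdpi": 1.5,
                  "xhdpi": 2.0, "xxhdpi": 3.0, "xxxhdpi": 4.0}

def px_to_dp(width_px, height_px, bucket):
    """Convert a pixel resolution to dp: dp = px / conversion factor."""
    factor = DENSITY_FACTOR[bucket]
    return width_px / factor, height_px / factor

print(px_to_dp(1080, 1920, "xxhdpi"))   # Samsung S4/S5: (360.0, 640.0)
print(px_to_dp(1440, 2560, "xxxhdpi"))  # Pixel XL:      (360.0, 640.0)
print(px_to_dp(1440, 2880, "xxxhdpi"))  # Pixel 2 XL:    (360.0, 720.0)
```

The extra 80 dp of height on the Pixel 2 XL is what pushes a fixed-height layout downward; a responsive layout (ConstraintLayout, weights) absorbs the difference.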

Related

Transformed colors when painting semi-transparent in p5.js

A transformation seems to be applied when painting colors in p5.js with an alpha value lower than 255:
for (const color of [[1,2,3,255],[1,2,3,4],[10,11,12,13],[10,20,30,40],[50,100,200,40],[50,100,200,0],[50,100,200,1]]) {
  clear();
  background(color);
  loadPixels();
  print(pixels.slice(0, 4).join(','));
}
Input/Expected Output   Actual Output (Firefox)
1,2,3,255               1,2,3,255 ✅
1,2,3,4                 0,0,0,4
10,11,12,13             0,0,0,13
10,20,30,40             6,19,25,40
50,100,200,40           51,102,204,40
50,100,200,0            0,0,0,0
50,100,200,1            0,0,255,1
The alpha value is preserved, but the RGB information is lost, especially on low alpha values.
This makes visualizations impossible where, for example, 2D shapes are first drawn and then the visibility in certain areas is animated by changing the alpha values.
Can these transformations be turned off or are they predictable in any way?
Update: The behavior is not specific to p5.js:
const ctx = new OffscreenCanvas(1, 1).getContext('2d');
for (const [r,g,b,a] of [[1,2,3,255],[1,2,3,4],[10,11,12,13],[10,20,30,40],[50,100,200,40],[50,100,200,0],[50,100,200,1]]) {
  ctx.clearRect(0, 0, 1, 1);
  ctx.fillStyle = `rgba(${r},${g},${b},${a/255})`;
  ctx.fillRect(0, 0, 1, 1);
  console.log(ctx.getImageData(0, 0, 1, 1).data.join(','));
}
I could be way off here... but it looks like, internally, the background method calls blendMode when _isErasing is true. By default this applies a linear interpolation of colours.
See https://github.com/processing/p5.js/blob/9cd186349cdb55c5faf28befff9c0d4a390e02ed/src/core/p5.Renderer2D.js#L45
See https://p5js.org/reference/#/p5/blendMode
BLEND - linear interpolation of colours: C = A*factor + B. This is the
default blending mode.
So, if you set the blend mode to REPLACE I think it should work.
REPLACE - the pixels entirely replace the others and don't utilize
alpha (transparency) values.
i.e.
blendMode(REPLACE);
for (const color of [[1,2,3,255],[1,2,3,4],[10,11,12,13],[10,20,30,40],[50,100,200,40],[50,100,200,0],[50,100,200,1]]) {
  clear();
  background(color);
  loadPixels();
  print(pixels.slice(0, 4).join(','));
}
Internally, the HTML Canvas stores colors in a different way that cannot preserve RGB values when fully transparent. When writing and reading pixel data, conversions take place that are lossy due to the representation by 8-bit numbers.
Take for example this row from the test above:

Input/Expected Output   Actual Output
10,20,30,40             6,19,25,40

IN (conventional alpha):

                R       G       B       A
values          10      20      30      40 (= 15.6%)

Interpretation: When painting, add 15.6% of (10,20,30) to the 15.6% darkened (r,g,b) background.

Canvas-internal (premultiplied alpha):

                R           G           B           A
calculation     10 * 0.156  20 * 0.156  30 * 0.156  40 (= 15.6%)
values          1.56        3.12        4.7         40
values (8-bit)  1           3           4           40

Interpretation: When painting, add (1,3,4) to the 15.6% darkened (r,g,b) background.

Premultiplied alpha allows faster painting and supports additive colors, that is, adding color values without darkening the background.

OUT (conventional alpha):

                R          G          B          A
calculation     1 / 0.156  3 / 0.156  4 / 0.156  40
values          6.41       19.23      25.64      40
values (8-bit)  6          19         25         40
So the results are predictable, but due to the different internal representation, the transformation cannot be turned off.
The HTML specification explicitly mentions this in section 4.12.5.1.15 Pixel manipulation:
Due to the lossy nature of converting between color spaces and converting to and from premultiplied alpha color values, pixels that have just been set using putImageData(), and are not completely opaque, might be returned to an equivalent getImageData() as different values.
See also 4.12.5.7 Premultiplied alpha and the 2D rendering context.
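The store/read cycle described above can be modeled in a short sketch. This is hypothetical Python, not browser code; it assumes truncation to 8 bits at each step, which reproduces the 10,20,30,40 row exactly, though real browsers may round differently on other rows.

```python
def canvas_round_trip(r, g, b, a):
    """Model the canvas premultiplied-alpha store/read cycle for one pixel.

    Assumption: 8-bit quantization by truncation, which matches the
    10,20,30,40 -> 6,19,25,40 example; actual browser rounding may differ
    by a unit or two on other inputs.
    """
    if a == 0:
        # At alpha 0 the premultiplied channels are all 0: RGB is unrecoverable.
        return (0, 0, 0, 0)
    # Store: premultiply each channel by alpha/255, quantize to 8 bits.
    stored = [int(c * a / 255) for c in (r, g, b)]
    # Read: un-premultiply and quantize again -- this is where precision is lost.
    return tuple(int(c * 255 / a) for c in stored) + (a,)

print(canvas_round_trip(10, 20, 30, 40))   # -> (6, 19, 25, 40)
print(canvas_round_trip(50, 100, 200, 0))  # -> (0, 0, 0, 0)
```

The lower the alpha, the coarser the premultiplied quantization grid, which is why the RGB loss is worst at small alpha values.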

JPEG-XL: Handling of palette in libjxl command line tools

I am trying to make sense of the following presentation (see page 27):
Could someone please describe the command line tools available in libjxl that can help me work with existing palettes?
I tried a naive:
% convert -size 512x512 -depth 8 xc:white PNG8:white8.png
% convert -size 512x512 -depth 8 xc:white PNG24:white24.png
which gives me the expected:
% file white8.png white24.png
white8.png: PNG image data, 512 x 512, 8-bit colormap, non-interlaced
white24.png: PNG image data, 512 x 512, 8-bit/color RGB, non-interlaced
But then:
% cjxl -d 0 white8.png white8.jxl
% cjxl -d 0 white24.png white24.jxl
Gives:
% md5sum white8.jxl white24.jxl
68c88befec21604eab33f5e691a2a667 white8.jxl
68c88befec21604eab33f5e691a2a667 white24.jxl
where
% jxlinfo white8.jxl
dimensions: 512x512
have_container: 0
uses_original_profile: 1
bits_per_sample: 8
have_preview: 0
have_animation: 0
intrinsic xsize: 512
intrinsic ysize: 512
orientation: 1 (Normal)
num_color_channels: 3
num_extra_channels: 0
color profile:
format: JPEG XL encoded color profile
color_space: 0 (RGB color)
white_point: 1 (D65)
primaries: 1 (sRGB)
transfer_function: gamma: 0.454550
rendering_intent: 0 (Perceptual)
frame:
still frame, unnamed
I also tried:
% cjxl -d 0 --palette=1024 white24.png palette.jxl
which also gives:
% md5sum palette.jxl
68c88befec21604eab33f5e691a2a667 palette.jxl
The libjxl encoder either takes a JPEG bitstream as input (for the special case of lossless JPEG recompression), or pixels. It does not make any difference whether those pixels are given via a PPM file, a PNG8 file, a PNG24 file, an RGB memory buffer, or any other way; if the pixels are the same, the result will be the same.
In your example, you have an image that is just solid white, so it will be encoded the same way regardless of how you pass it to cjxl.
Now if those pixels happen to use only few colors, as will be the case for PNG8 since there can be at most 256 colors in that case, the encoder (at a default effort setting) will detect this and use the jxl Palette transform to represent the image more compactly. In jxl, palettes can have arbitrary sizes, there is no limit to 256 colors. The --palette option in cjxl can be used to set the maximum number of colors for which it will still use the Palette transform — if the input image has more colors than that, it will not use Palette.
The use of Palette is considered an internal encoding tool in jxl, not part of the externally exposed image metadata. It can be used by the encoder to effectively recompress PNG8 files, but by no means will it necessarily always use that encoding tool when the input is PNG8, and it might also use Palette when the input has more than 256 colors. The Palette transform of jxl is quite versatile, it can also be applied to individual channels, to more or less than 3 channels, and palette entries can be not only specific colors but also so-called "delta palette entries" which are not a color but signed pixel values that get added to the predicted pixel value.
As explained by Jon Sneyers just above, the palette is an internal encoding tool. I was confused by this, as I could not see any difference in the output of the jxlinfo command line.
So I ran the following experiment on my side to convince myself:
$ cjxl -d 0 --palette=257 palette.png palette.257.jxl
$ cjxl -d 0 --palette=256 palette.png palette.256.jxl
$ cjxl -d 0 --palette=255 palette.png palette.255.jxl
This leads to:
% md5sum palette.*.jxl
e925521cbb976dce2646354ea3deee3b palette.255.jxl
8d241b94d67aeb2706a1aad7aed55cc7 palette.256.jxl
8d241b94d67aeb2706a1aad7aed55cc7 palette.257.jxl
Where:
% du -sb palette.*.jxl
89616 palette.255.jxl
45627 palette.256.jxl
45627 palette.257.jxl
In all cases, jxlinfo reveals:
% jxlinfo palette.255.jxl
dimensions: 256x256
have_container: 0
uses_original_profile: 1
bits_per_sample: 8
have_preview: 0
have_animation: 0
intrinsic xsize: 256
intrinsic ysize: 256
orientation: 1 (Normal)
num_color_channels: 3
num_extra_channels: 0
color profile:
format: JPEG XL encoded color profile
color_space: 0 (RGB color)
white_point: 1 (D65)
primaries: 1 (sRGB)
transfer_function: 13 (sRGB)
rendering_intent: 0 (Perceptual)
frame:
still frame, unnamed
With:
% pnginfo palette.png
palette.png...
Image Width: 256 Image Length: 256
Bitdepth (Bits/Sample): 8
Channels (Samples/Pixel): 1
Pixel depth (Pixel Depth): 8
Colour Type (Photometric Interpretation): PALETTED COLOUR (0 colours, 0 transparent)
Image filter: Single row per byte filter
Interlacing: No interlacing
Compression Scheme: Deflate method 8, 32k window
Resolution: 0, 0 (unit unknown)
FillOrder: msb-to-lsb
Byte Order: Network (Big Endian)
Number of text strings: 0

How to represent 45 degree and 26.565 degree angle in 32 bit binary form?

I am writing the verilog code for CORDIC (COordinate Rotation DIgital Computer) in xilinx vivado. For that I need 45, 26.565 degree rotation angle in 32-bit binary form. After searching in the internet I got 45 degree angle can be represented as
assign z[00] = 'b00100000000000000000000000000000;
and 26.565 degree angle can be represented as
assign z[01] = 'b00010010111001000000010100011101;
Can anybody explain to me how they are representing the 45 degree and 26.565 degree angle in binary form? Is there any formula behind it?
round((45 / 360) * 2 ** 32) is equal to 'b00100000000000000000000000000000 (exactly your number)
round((26.565 / 360) * 2 ** 32) is equal to 'b00010010111001000000001010111011 (almost your number, difference 0.00005 degree)
The formula is probably (angle / 360) * 2 ** 32.
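The formula can be checked with a small sketch; `angle_to_fixed32` is a hypothetical Python helper for illustration, not part of the original Verilog. It maps a full turn (360°) onto the full 32-bit range, so the sign bit naturally encodes angles above 180°, which is exactly the convention CORDIC angle tables typically use.

```python
def angle_to_fixed32(deg):
    """Map an angle in degrees to a 32-bit word where 360 deg = 2**32."""
    return round((deg % 360) / 360 * 2 ** 32)

for deg in (45, 26.565):
    word = angle_to_fixed32(deg)
    print(f"{deg:>7} deg -> 'b{word:032b}")
```

For 45° this prints 'b00100000000000000000000000000000, matching the constant in the question; for 26.565° it gives 'b00010010111001000000001010111011, which differs slightly from the question's constant because 26.565 is itself a rounded value of atan(1/2) in degrees.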

Indexed color memory size vs raw image

In this article https://en.m.wikipedia.org/wiki/Indexed_color
It says this:
Indexed color images with palette sizes beyond 256 entries are rare. The practical limit is around 12-bit per pixel, 4,096 different indices. To use indexed 16 bpp or more does not provide the benefits of the indexed color images' nature, due to the color palette size in bytes being greater than the raw image data itself. Also, useful direct RGB Highcolor modes can be used from 15 bpp and up.
I don't understand why indexed 16 bpp or more is inefficient in terms of memory,
Because in this article there is also this:
Indexed color saves a lot of memory, storage space, and transmission time: using truecolor, each pixel needs 24 bits, or 3 bytes. A typical 640×480 VGA resolution truecolor uncompressed image needs 640×480×3 = 921,600 bytes (900 KiB). Limiting the image colors to 256, every pixel needs only 8 bits, or 1 byte each, so the example image now needs only 640×480×1 = 307,200 bytes (300 KiB), plus 256×3 = 768 additional bytes to store the palette map in itself (assuming RGB), approximately one third of the original size. Smaller palettes (4-bit 16 colors, 2-bit 4 colors) can pack the pixels even more (to one sixth or one twelfth), obviously at cost of color accuracy.
If I have a 640x480 resolution and I want to use a 16-bit palette:
640x480x2(16 bits == 2 bytes) + 65536(2^16)*3(rgb)
614400 + 196608 = 811008 bytes
Raw image memory size:
640x480x3(rgb)
921600 bytes
So 811008 < 921600
And if I have a 1920x1080 resolution:
Raw image: 1920x1080x3 = 6 220 800
Indexed color:
1920x1080x2 + palette size(2**16 * 3)
4147200 + 196608
4343808 bytes
So again, indexed color is efficient in terms of memory. I don't get why the article says it is inefficient.
It really depends upon the size of the image. As you said, if b is the number of bytes per pixel and p is the number of pixels, then the image data size i is:
i = p * b
And the color table size t is:
t = 2^(b * 8) * 3
So the point where a raw image would take the same space as an indexed image is:
p * 3 = p * b + 2^(b * 8) * 3
Which I'll now solve for p:
p * 3 - p * b = 2^(b * 8) * 3
p * (3 - b) = 2^(b * 8) * 3
p = (2^(b * 8) * 3) / (3 - b)
So for various bytepp, the minimum image size that will make using indexed images break even:
1 bytepp (8 bit) - 384 pixels (like an image of 24 x 16)
1.5 bytepp (12 bit) - 8192 pixels (like an image of 128 x 64)
2 bytepp (16 bit) - 196,608 pixels (like an image of 512 x 384)
2.5 bytepp (20 bit) - 6,291,456 pixels (like an image of 3072 x 2048)
2.875 bytepp (23 bit) - 201,326,592 (like an image of 16,384 x 12,288)
If you are using an image smaller than 512 x 384, 16 bit per pixel indexed color would take up more space than raw 24 bit image data.
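The break-even formula above can be sketched as follows; these are hypothetical Python helpers for illustration, assuming a full palette of 2^(8b) RGB entries as in the answer's derivation.

```python
def indexed_size(pixels, bpp_bytes):
    """Bytes for an indexed image: per-pixel indices plus a full RGB palette."""
    return pixels * bpp_bytes + 2 ** (8 * bpp_bytes) * 3

def raw_size(pixels):
    """Bytes for raw 24-bit RGB (3 bytes per pixel)."""
    return pixels * 3

def break_even(bpp_bytes):
    """Pixel count where indexed == raw: p = 2^(8b) * 3 / (3 - b)."""
    return 2 ** (8 * bpp_bytes) * 3 / (3 - bpp_bytes)

print(break_even(1))   # 384.0 pixels
print(break_even(2))   # 196608.0 pixels (= 512 x 384)
print(indexed_size(640 * 480, 2), raw_size(640 * 480))  # 811008 921600
```

The 640x480 case from the question (811,008 vs 921,600 bytes) sits above the 196,608-pixel break-even point, which is why indexed 16 bpp still wins there; the Wikipedia claim is about smaller images and about the shrinking margin as b approaches 3.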

Confused between actual resolution/ interpolation bitmap format

I am using a Logitech Pro 9000 HD webcam, which has a 2 MP Zeiss lens and can capture HD video, etc.
My code is below (not exactly the same, but integrated into a single function).
Now the problem: if I use resolutions up to 1600 x 1200, everything works fine, and the received byte sizes are as follows:
for 640 x 480 VideoHeader.dwBytesUsed are 921600
for 1600 x 1200 VideoHeader.dwBytesUsed are 5760000
from 1600 x 1200 to 3264 x 2448 VideoHeader.dwBytesUsed are 5760000
But for resolutions higher than 1600 x 1200, the byte size stays the same as for 1600 x 1200, and my program can't convert that data to a bitmap. I even tried setting the bitmap size to 1600 x 1200, but nothing works: I only get fuzz at the bottom, or stretched multiple images at the bottom of the preview bitmap.
I know this is called interpolation.
My question is: where is the interpolation actually implemented, in the driver I am accessing or in the camera application provided by the company?
That is, am I getting interpolated data, or do I have to implement the algorithm in my program?
What confuses me is this: if the driver is still returning 1600 x 1200 images and the Logitech software is interpolating them to 3264 x 2448, then why am I not getting a 1600 x 1200 image from the device even when I set the video format to 3264 x 2448 in the init code?
[I have set bit to 24 and camera is using Format24bppRgb Pixel Format]
Can anyone help me?
my code is
Private Sub FrameCallBack(ByVal lwnd As IntPtr, ByVal lpVHdr As IntPtr)
    'Only one _SnapSize declaration may be active at a time
    'Dim _SnapSize As Size = New Size(640, 480)
    'Dim _SnapSize As Size = New Size(1600, 1200)
    Dim _SnapSize As Size = New Size(3264, 2448)
    Dim VideoHeader As New Avicap.VIDEOHDR
    Dim VideoData() As Byte
    VideoHeader = CType(Avicap.GetStructure(lpVHdr, VideoHeader), Avicap.VIDEOHDR)
    VideoData = New Byte(VideoHeader.dwBytesUsed - 1) {}
    Marshal.Copy(VideoHeader.lpData, VideoData, 0, VideoData.Length)
    Dim _SnapFormat As System.Drawing.Imaging.PixelFormat = PixelFormat.Format24bppRgb
    Dim outBit As Bitmap
    If Me.IsValidData Then
        outBit = New Bitmap(_SnapSize.Width, _SnapSize.Height, _SnapFormat)
        Dim bitData As BitmapData
        bitData = outBit.LockBits(New Rectangle(Point.Empty, _SnapSize), ImageLockMode.WriteOnly, _SnapFormat)
        'Copy the captured frame into the bitmap, never more bytes than received
        Marshal.Copy(VideoData, 0, bitData.Scan0, Math.Min(VideoData.Length, Math.Abs(bitData.Stride) * _SnapSize.Height))
        outBit.UnlockBits(bitData)
        GC.Collect()
        GC.WaitForPendingFinalizers()
    End If
End Sub
First, I am really sorry that I completely forgot about this question.
The answer is - these structures are for native APIs.
My camera has a 2-megapixel sensor, and I was getting a proper image at a resolution of 1600 x 1200.
The math is simple:
1600 x 1200 = 1,920,000 total pixels
The pixel format is 24 bpp, meaning 3 bytes per pixel, so the total size of the image is 5,760,000 bytes.
The sensor cannot produce more than 2 megapixels of data; that is why 1600x1200 is the hardware resolution limit for this camera, and the hardware is not responsible for interpolating to a higher-resolution image. That is why I had to do it manually after getting the original image from the camera.
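The frame-size arithmetic above can be checked with a tiny sketch (hypothetical Python, not part of the original VB code), which also explains why dwBytesUsed plateaus at the sensor's native resolution:

```python
def frame_bytes(width, height, bits_per_pixel=24):
    """Expected buffer size of one uncompressed frame."""
    return width * height * bits_per_pixel // 8

print(frame_bytes(640, 480))    # 921600, matches dwBytesUsed at 640 x 480
print(frame_bytes(1600, 1200))  # 5760000, matches dwBytesUsed at 1600 x 1200 and above
```

A true 3264 x 2448 frame would need 23,970,816 bytes, so receiving only 5,760,000 bytes confirms the driver is still delivering native 1600 x 1200 data.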
This is exactly what I did: I captured a 1600x1200 image and wrote image-processing algorithms to interpolate and improve the quality of that image.
The project was a cheap book-scanning device for document scanning. It was completed successfully and is in use by our clients.