Is it possible to increase the dpi of images displayed by the x11() device in R? By default, my plots are very grainy.
I have one desktop and one laptop (Windows 10).
The monitor attached to the desktop is 24 inches.
Both run at the same resolution (1920 by 1080), and I set the same zoom ratio in Windows 10.
I do not use the MoveWindow function or anything else to fix the size; however, the dialog size on the laptop and the desktop is different.
This causes the controls in my software to overlap each other.
Please let me know if I should check any other settings!
Thank you.
Sorry, my question was not specific; I have added two pictures.
On the laptop, the Windows setting overlaps the DICOM setting.
Of course, I can keep the controls from overlapping by calling MoveWindow on the dialog directly, but I want to know why the two dialogs have different sizes (same resolution of 1920 x 1080, same zoom ratio on Windows 10).
On the desktop, the print dialog is 791 x 632; on the laptop, the print dialog is 911 x 816.
I will check DPI-Awareness. Thank you for your comments!
I am starting to understand this phenomenon.
On the laptop, the resolution is 1920 x 1080; however, the scaling ratio that Windows 10 recommends is 125%.
When I change the ratio to 100%, it works at a higher resolution, I think.
I use this code to get the resolution:
MONITORINFO mi;
::ZeroMemory(&mi, sizeof(MONITORINFO));
mi.cbSize = sizeof(MONITORINFO);
if (::GetMonitorInfo(hMonitor, &mi))
{
    // mi.rcMonitor now holds the monitor rectangle in virtual-screen coordinates
}
On the laptop:
When I set the size of text, apps, and other items to 100% under Scale and layout,
then mi.rcMonitor's width = 2400.
If I change the ratio to 125% (which Windows recommends), then mi.rcMonitor's width = 1920.
I don't know how 2400 x 1350 can be shown on a 1920 x 1080 monitor, so I will study this!
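For reference, here is a minimal sketch of what declaring DPI awareness could look like, assuming a Win32/MFC application built with a Windows 10 (1703 or later) SDK; the helper name is mine, the API names are from the Windows SDK. Once the process is per-monitor DPI aware, GetMonitorInfo reports physical pixels instead of DPI-virtualized ones, which should make the two machines report the same values:

#include <windows.h>

// Call this once, before any window is created.
BOOL EnableDpiAwareness()
{
    // Per-monitor V2 awareness gives correct metrics on mixed-DPI setups (Windows 10 1703+).
    if (::SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2))
        return TRUE;

    // Fallback for older systems: system-wide DPI awareness (Vista and later).
    return ::SetProcessDPIAware();
}

Declaring DPI awareness in the application manifest is usually preferred over calling these functions at run time, but the effect on GetMonitorInfo is the same.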
I'm trying to use mogrify to decrease the quality of an image in order to reduce its file size, but rather than decreasing, the size is increasing. I'm using the following command:
mogrify -quality 20% 1.png
The file size goes from 2.5 MB to 4 MB. Any idea why?
PNG is a lossless format, so changing "quality" settings should do nothing at all with respect to the "image".
The mogrify documentation confirms this - "quality", when applied to a PNG, indicates which row filters to apply: a value ranging from 0 to 6.
Since the input 20 is invalid for a PNG file, it must have been silently replaced with a default value; presumably 0, which indicates no row filtering at all. (If you really want to know if this is the case, you could use a tool such as pngcheck on your before and after images.)
As to your target: it is unclear whether you want to decrease the physical image size in pixels, or the file size on disk, or (possibly) both. For the first, you can use -resize. For the second, try a PNG-recompressing tool such as pngcrush. For both, use the first method and then the second.
Another option may be to lower the number of color components, for example, from 24-bit RGB to indexed color. Finally, you can always convert the image type from PNG to JPEG, after which you can experiment with the "quality" parameter.
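For illustration (the file names are placeholders, and the flags should be double-checked against your ImageMagick and pngcrush versions), the options above might look something like:

mogrify -resize 50% 1.png
pngcrush -brute 1.png 1_crushed.png
convert 1.png -quality 85 1.jpg
pngcheck -v 1.png

The first shrinks the pixel dimensions in place, the second losslessly recompresses the PNG into a new file, the third switches to JPEG (where -quality really is lossy), and the last inspects the PNG's chunks and filters before and after.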
I need to create images for a slideshow. The problem is that the images will be displayed in different screens.
I want to know if I can use the same resolution for all of them (1920 x 1080 at 72 px/inch).
Screens:
1. 24ft x 14ft, pitch 12mm - Aspect ratio must be 16:9
2. 12ft x 9ft, pitch 15mm - Aspect ratio must be 16:9
3. 55" TV - Supports full HD (1080 or 720)
4. 42" TV - Supports full HD (1080 or 720)
5. 19" screen - Maximum resolution is 1440 x 900
I don't know much about resolutions and any help will be greatly appreciated.
Thank you.
1920x1080 will be large enough for any television. Full HD (1080p) means 1080 horizontal scanlines (i.e., the image is 1080 pixels tall).
The size in inches is irrelevant in this case.
The only reason I could see for going larger than 1920x1080 is if the screen's pixel density is more than 72 DPI (e.g. the iPhone Retina display, which is 326 ppi).
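As a rough worked example (assuming a standard 16:9 panel): a 55" TV is about 48 inches wide, so at 1920 x 1080 its density is roughly 1920 / 48 ≈ 40 ppi, well under 72 DPI, which is why a 1920 x 1080 source image is plenty for the TVs listed.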
Many of the embedded/mobile GPUs are providing access to performance registers called Pixel Write Speed and Texel Write speed. Could you explain how those terms can be interpreted and defined from the actual GPU hardware point of view?
I would assume the difference between a pixel and a texel is pretty clear to you. Anyway, just to make this answer a little bit more "universal":
A pixel is the fundamental unit of screen space.
A texel, or texture element (also texture pixel) is the fundamental unit of texture space.
Textures are represented by arrays of texels, just as pictures are
represented by arrays of pixels. When texturing a 3D surface (a
process known as texture mapping) the renderer maps texels to
appropriate pixels in the output picture.
BTW, it is more common to use the term fill rate instead of write speed, and you can easily find all the required information, since this terminology is quite old and widely used.
Answering your question
All fill-rate numbers (whatever definition is used) are expressed in
Mpixels/sec or Mtexels/sec.
Well the original idea behind fill-rate was the number of finished
pixels written to the frame buffer. This fits with the definition of
Theoretical Peak fill-rate. So in the good old days it made sense to
express that number in Mpixels.
However with the second generation of 3D accelerators a new feature
was added. This feature allows you to render to an off screen surface
and to use that as a texture in the next frame. So the values written
to the buffer are not necessarily on screen pixels anymore, they might
be texels of a texture. This process allows several cool special
effects, imagine rendering a room, now you store this picture of a
room as a texture. Now you don't show this picture of the room but you
use the picture as a texture for a mirror or even a reflection map.
Another reason to use MTexels is that games are starting to use
several layers of multi-texture effects; this means that an on-screen
pixel is constructed from various sub-pixels that end up being blended
together to form the final pixel. So it makes more sense to express
the fill-rate in terms of these sub-results, and you could refer to
them as texels.
Read the whole article - Fill Rate Explained
Additional details can be found here - Texture Fill Rate
Update
Texture Fill Rate = (number of TMUs, i.e. texture mapping units) x (core clock)
The number of textured pixels the card can render to the
screen every second.
It is obvious that the card with more TMUs will be faster at processing texture information.
The performance registers/counters Pixel Write Speed and Texel Write Speed maintain statistics about the pixels and texels processed/written. I will explain the peak (maximum possible) fill rates.
Pixel Rate
A picture element is a physical point in a raster image, the smallest
element of a display device's screen.
Pixel rate is the maximum amount of pixels the GPU could possibly write to the local memory in one second, measured in millions of pixels per second. The actual pixel output rate also depends on quite a few other factors, most notably the memory bandwidth - the lower the memory bandwidth is, the lower the ability to get to the maximum fill rate.
The pixel rate is calculated by multiplying the number of ROPs (Raster Operations Pipelines, aka Render Output Units) by the core clock speed.
Render Output Units : The pixel pipelines take pixel and texel information and process it, via specific matrix and vector operations, into a final pixel or depth value. The ROPs perform the transactions between the relevant buffers in the local memory.
Importance: the higher the pixel rate, the higher the screen resolution the GPU can handle.
Texel Rate
A texture element is the fundamental unit of texture space (a tile of
3D object surface).
Texel rate is the maximum number of texture map elements (texels) that can be processed per second. It is measured in millions of texels per second.
This is calculated by multiplying the total number of texture units by the core speed of the chip.
Texture Mapping Units : Textures need to be addressed and filtered. This job is done by TMUs that work in conjunction with pixel and vertex shader units. It is the TMU's job to apply texture operations to pixels.
Importance: the higher the texel rate, the faster the card processes texture information, and the more fluently it renders demanding games.
Example: I am not an nVidia fan, but here are the specs for the GTX 680 (I could not find much for embedded GPUs):
Model                   GeForce GTX 680
Memory                  2048 MB
Core Speed              1006 MHz
Shader Speed            1006 MHz
Memory Speed            1502 MHz (6008 MHz effective)
Unified Shaders         1536
Texture Mapping Units   128
Render Output Units     32
Bandwidth               192,256 MB/sec
Texel Rate              128,768 Mtexels/sec
Pixel Rate              32,192 Mpixels/sec
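To make the two formulas above concrete, here is a small sketch (plain C++, with the figures taken from the GTX 680 specs above) that reproduces the listed peak rates:

#include <cstdio>

// Peak fill rates as defined above:
//   pixel rate = ROPs x core clock
//   texel rate = TMUs x core clock
int main()
{
    const double coreClockMHz = 1006.0; // GTX 680 core clock
    const int    rops         = 32;     // Render Output Units
    const int    tmus         = 128;    // Texture Mapping Units

    const double pixelRateMpixels = rops * coreClockMHz; // 32 * 1006 = 32192
    const double texelRateMtexels = tmus * coreClockMHz; // 128 * 1006 = 128768

    std::printf("Peak pixel rate: %.0f Mpixels/sec\n", pixelRateMpixels);
    std::printf("Peak texel rate: %.0f Mtexels/sec\n", texelRateMtexels);
    return 0;
}

These are theoretical peaks; as noted above, the rates actually achieved also depend on memory bandwidth and the rest of the pipeline.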
I am working with a VB window that seems to want to resize depending on the resolution of the monitor.
Right now, I have VS2010 open on a monitor that is 1366 x 768. I have the form set to the following dimensions:
MaximumSize, MinimumSize and Size are all set to 948x580.
When I run the app, it looks fine.
Now, I have another monitor that has the resolution set to 1680x1050.
When I run the app, the window is 1263x705.
I was under the impression that forcing the Max and Min sizes would lock the size of the window. Is this correct? Is there some other setting I am possibly missing?
take care,
lee
This is just a guess, but what is your Form.AutoScaleMode set to? It defaults to the AutoScaleMode.Font enumeration value on my system, even though the documentation says it defaults to AutoScaleMode.None. You may also want to look at this MSDN link on AutoScaling.
None - Automatic scaling is disabled.
Font - Controls scale relative to the dimensions of the font the classes are using, which is typically the system font.
Dpi - Controls scale relative to the display resolution. Common resolutions are 96 and 120 DPI.
Inherit - Controls scale according to the classes' parent's scaling mode. If there is no parent, automatic scaling is disabled.