Capturing high-quality (300 dpi) screenshots of a Qt-based app in Linux

I need to make a screenshot of my form created in Qt Designer. There are numerous ways to take screenshots (GIMP, import, etc.), but all of them produce the same dpi as my monitor (about 100 dpi). That is quite enough for publishing on a web site, but 300 dpi images are required for paper publications. Is there any way to create 300 dpi screenshots?

I don't think the 300 dpi requirement for publication applies to things like screenshots, where the data is inherently pixelated. It's meant for things like graphs, which can and should be generated in a vector format.
Just get the best results you can, and only use screenshots where absolutely necessary, not, for example, for command-line I/O or results graphs.
If the final images are being shown smoothed and blurry, either find settings in your PDF creator to prevent this, or manually blow the image up to a multiple of its original size to preserve the original sharp pixelation.
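For the manual blow-up, the key is to scale with nearest-neighbour sampling so the pixels stay sharp instead of being smoothed. A minimal sketch in Qt (my assumption, since the question is about a Qt app; any image tool with a "no interpolation" mode works, and the file names are placeholders):

    #include <QImage>

    int main()
    {
        QImage shot("screenshot.png");   // hypothetical input file
        if (shot.isNull())
            return 1;

        // Qt::FastTransformation = nearest-neighbour: each source pixel
        // becomes a crisp 3x3 block instead of being interpolated.
        QImage big = shot.scaled(shot.width() * 3, shot.height() * 3,
                                 Qt::IgnoreAspectRatio,
                                 Qt::FastTransformation);
        return big.save("screenshot_3x.png") ? 0 : 1;
    }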

Painting can be done on any QPaintDevice, which includes QPrinter. If you wanted to, you could set up painting redirection to a given device, then have the widget repaint itself. This might give you the higher precision you desire. For more information, look on Qt's website for the Paint System overview, and also maybe look at the QPixmap::grabWidget functions.
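As an illustration of that idea (a sketch, not a drop-in solution): you can paint a widget into a QImage through a scaled QPainter using QWidget::render(), a close relative of the grabWidget functions mentioned above, so the widget repaints itself at roughly 3x the screen resolution. The widget and file names here are placeholders:

    #include <QApplication>
    #include <QImage>
    #include <QPainter>
    #include <QPushButton>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QPushButton widget("Example form");   // stand-in for your Designer form
        widget.resize(300, 100);

        const int scale = 3;                  // 3 x ~100 dpi screen = ~300 dpi
        QImage image(widget.size() * scale, QImage::Format_ARGB32);
        image.fill(Qt::white);

        QPainter painter(&image);
        painter.scale(scale, scale);          // redirect painting at 3x
        widget.render(&painter);              // widget repaints onto the image
        painter.end();

        return image.save("form_300dpi.png") ? 0 : 1;
    }

Text and most primitives are drawn through the QPainter transform, so they come out genuinely sharper than a post-hoc upscale; pixmap-based style elements will still be scaled.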

You cannot grab a screenshot at a better resolution than that of your monitor. DPI has little meaning for a computer display; some software converts pixels per point (ppp) to dots per inch (dpi) for paper publication.
Once you have taken your screenshots, you can convert them to 300 dpi using software like Photoshop or an equivalent.
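Note that this conversion only changes metadata: the pixel dimensions stay the same, and only the intended print density is rewritten. As a sketch of what such software does, in Qt (an assumption; ImageMagick or Photoshop do the equivalent):

    #include <QImage>
    #include <QtGlobal>

    int main()
    {
        QImage shot("screenshot.png");                   // hypothetical input
        if (shot.isNull())
            return 1;

        // 300 dpi expressed in dots per metre (Qt's unit): 300 / 0.0254
        const int dotsPerMetre = qRound(300.0 / 0.0254); // ~11811
        shot.setDotsPerMeterX(dotsPerMetre);
        shot.setDotsPerMeterY(dotsPerMetre);
        return shot.save("screenshot_300dpi.png") ? 0 : 1;
    }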

You can't have more pixels in your screenshot than your widget displays.
For a given widget size (say 900x900 px) you can have your image printed at 300 dpi, but it will only make a 3-inch square on the paper.

You can force your screen to behave as a 4K display with the command:
xrandr --output eDP1 --rate 40.01 --mode 1366x768 --fb 4096x3072 --panning 4096x3072
Remember to adjust the --rate and --mode fields to match your default xrandr configuration; you can check the current values by running xrandr with no arguments.
and then acquire the screenshot with
import -window root imagefile.png

Related

Webplayer GUI rendering issue: pixelated

I made a game in Unity3D. Its graphics look perfect in the Unity editor, but when I build it and play it in the web player, the graphics become pixelated and blurry.
So how can I make the game pixel-perfect for the web player?
This also happened to me once, but I got the answer after some searching on Unity.
This is what you need to do:
Select the texture which becomes pixelated, then in its import settings set:
- Texture Type = Texture
- Filter Mode = Trilinear
- Select Web as the platform and check "Override for Web"
- Max Size = the maximum available
- Format = Truecolor
Then click Apply. This should definitely help.
Source
Try changing your image size in Photoshop; I think you saved it at a small size.
And always make your graphics as vectors, so they stay resolution-independent and you always have the chance to generate a new image from the vector original.

Automatic resizing of a webpage to fit the monitor/resolution

I'm in the midst of designing a webpage, but doing it on a 22" monitor with a resolution of 1680x1050. In our work environment we use a mix of 17" and 22" monitors, and of course they are all at different resolutions. How would I need to code my page to be able to automatically adjust to the size of the monitor? I'm hoping it can be done without objects being moved around, just scaling everything down to fit.
Some of the tips are:
1. Use % when defining div sizes instead of pixels.
2. Use em for font-size instead of fixed pixel values like 12px or 14px.
3. Look into responsive web design, which fits the screen even on small devices; it is worth reading about this in depth: http://en.wikipedia.org/wiki/Responsive_Web_Design

How to make colours on one screen look the same as another

Given two separate computers, how could one ensure that colours are being displayed roughly the same on each screen?
For example, one screen might have 50% more brightness than another, so colours appear duller on one of them. One artist on one computer might be seeing the pictures differently from another; it's important that they see the same levels.
Is there some sort of calibration technique you can apply via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it's only for setting the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, but you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors that are the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (depends on what colors the display device can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
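As an illustration of that conversion, here is a hedged sketch using LittleCMS (lcms2), one common library for applying ICC profiles; the .icc file names are placeholders:

    #include <lcms2.h>
    #include <stdio.h>

    int main()
    {
        cmsHPROFILE src = cmsOpenProfileFromFile("display1.icc", "r");
        cmsHPROFILE dst = cmsOpenProfileFromFile("display2.icc", "r");
        if (!src || !dst)
            return 1;

        // Perceptual intent: favour overall appearance when display #2
        // cannot represent the exact source colour.
        cmsHTRANSFORM xform = cmsCreateTransform(src, TYPE_RGB_8,
                                                 dst, TYPE_RGB_8,
                                                 INTENT_PERCEPTUAL, 0);

        unsigned char in[3]  = { 200, 30, 30 };  // colour as shown on display #1
        unsigned char out[3] = { 0, 0, 0 };
        cmsDoTransform(xform, in, out, 1);       // convert one pixel

        printf("(%d,%d,%d) -> (%d,%d,%d)\n",
               in[0], in[1], in[2], out[0], out[1], out[2]);

        cmsDeleteTransform(xform);
        cmsCloseProfile(src);
        cmsCloseProfile(dst);
        return 0;
    }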
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software. For 'roughly the same', visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
IF you have complete control over the monitor, video card, calibration hardware/software, and lighting used then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels that have a host of different capabilities. Brightness is just one factor (albeit a big one). Another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment that the monitor is in. Even assuming the same brand monitor and calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent placed next to the monitor itself. At one place I was at we had to shut off all the overheads and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used, each user has a different brand and model monitor.
You have also no control over operating system color profiles.
An extravagant solution would be to display a test picture or pattern, and ask your users to take a picture of it using their mobile or webcam.
Download the picture to the computer, and check whether its levels are valid or too out of range.
This will also ensure the ambient light at the office is appropriate.

Advanced Text Rendering with Direct3D

Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw these items. Because our application must support many languages, we ought to use a standard. That's why I've been working with the ID3DXFont interface, but I've found some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user is resizing the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while he's doing that. I think it is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So I'm not sure I'm going in the correct direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto a texture. The font will be filtered when you scale it up too much; some people will think that it looks ugly.
Write a custom library that supports vector fonts. I.e. it should be able to extract the outline from a font and build text from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). Text will be easily scalable. Using this approach, you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create 3D text for ONE string. Won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so their behavior looks nice to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll need ID3DXFont. I.e. you use ID3DXFont to render text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing, you keep using the old font but upscale it via the texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time; you can probably do that in a separate thread, but I'm not sure about it. Or you could simply always render text at the same resolution as the video. That way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
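For reference, a rough sketch of that render-once-then-blit step in D3D9/D3DX (device setup and error handling omitted; the texture size and string are placeholders):

    #include <d3d9.h>
    #include <d3dx9.h>

    void RenderTextToTexture(IDirect3DDevice9 *device,
                             ID3DXFont *font,          // created once, reused
                             IDirect3DTexture9 **outTex)
    {
        // 1. Render-target texture that will hold the rasterized text.
        device->CreateTexture(512, 64, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, outTex, NULL);

        IDirect3DSurface9 *texSurface = NULL, *backBuffer = NULL;
        (*outTex)->GetSurfaceLevel(0, &texSurface);
        device->GetRenderTarget(0, &backBuffer);

        // 2. Redirect rendering to the texture and draw the string once.
        device->SetRenderTarget(0, texSurface);
        device->Clear(0, NULL, D3DCLEAR_TARGET,
                      D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
        device->BeginScene();
        RECT rc = { 0, 0, 512, 64 };
        font->DrawText(NULL, TEXT("Text over video"), -1, &rc,
                       DT_LEFT | DT_TOP, D3DCOLOR_ARGB(255, 255, 255, 255));
        device->EndScene();

        // 3. Restore the back buffer; from now on the texture can be drawn
        //    as a scaled sprite each frame (e.g. via ID3DXSprite) with no
        //    per-frame ID3DXFont cost.
        device->SetRenderTarget(0, backBuffer);
        texSurface->Release();
        backBuffer->Release();
    }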
A few more things about ID3DXFont. There is one problem with it: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). Last time I worked with it I optimized things by caching commonly used strings in textures. I.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
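The bookkeeping part of that caching scheme might look roughly like this (the actual texture management is elided; the 3-frame threshold, eviction age, and names are illustrative assumptions, with frame numbers starting at 1):

    #include <map>
    #include <string>

    struct CacheEntry {
        int firstFrame = 0;     // start of the current consecutive run
        int lastFrame  = 0;     // last frame this string was drawn
        bool baked     = false; // true once it lives in the cache texture
        // ...its rectangle within the shared cache texture would go here
    };

    class TextCache {
        std::map<std::wstring, CacheEntry> entries_;
    public:
        // Returns true if the string should be blitted from the cache
        // texture; false means "draw it with ID3DXFont this frame".
        bool Draw(const std::wstring &text, int frame) {
            CacheEntry &e = entries_[text];
            if (e.lastFrame == 0 || e.lastFrame != frame - 1)
                e.firstFrame = frame;      // new string or broken run
            e.lastFrame = frame;
            if (!e.baked && frame - e.firstFrame >= 3)
                e.baked = true;            // bake onto the texture here
            return e.baked;
        }

        // Drop strings not drawn recently, freeing their texture space
        // (defragmentation elided).
        void Evict(int frame, int maxAge = 120) {
            for (auto it = entries_.begin(); it != entries_.end(); )
                it = (frame - it->second.lastFrame > maxAge)
                         ? entries_.erase(it) : ++it;
        }
    };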
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText produces meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.

Colour blindness simulator

Like any responsible developer, I'd like to make sure that the sites I produce are accessible to the widest possible audience, and that includes the significant fraction of the population with some form of colour blindness.
There are many websites which offer to filter a URL you feed them, either by rendering a picture or by filtering all content. However, both approaches seem to fail when rendering even moderately complex layouts, so I'd be interested in finding a client-side approach.
The ideal solution would be a system filter over the whole screen that can be used to test any program. The next best thing would be a browser plugin.
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle is great, but another option is KMag, which is part of KDE in Linux. It's ostensibly a screen magnifier, but can simulate protanopia, deuteranopia, tritanopia and achromatopsia.
It differs from Color Oracle by requiring an additional window in which to display the re-coloured image, but an advantage is that one can modify the underlying image at the same time as previewing the simulation.
Here is a screenshot showing the original figure on the left, and the KMag window on the right, simulating protanopia.
Here's a link to a website that simulates various kinds of color blindness:
http://www.vischeck.com/
They let you check URLs and screenshots against three different types of color blindness (URL checking is a bit dated, though; the image check works better).
I'd encourage everyone to check their applications, by the way. Seeing your own app through others' eyes may be an eye opener (pun intended).
I know this is a quite old question, but I've recently found an interesting solution to transparently simulate color blindness.
When working with Linux, you can simulate color blindness using the Color Filter plugin for Compiz. It comes with profiles for deuteranopia and protanopia and changes the colors of the whole screen in real time.
It's very nice because it works transparently in all applications (even within YouTube videos), but it only works where Compiz is available, i.e. only under Linux.
Here's an article that has some guidelines for optimizing UI for color blind users:
Particletree » Be Kind to the Color Blind
It contains a link to another article with the kind of tools you were asking for:
10 colour contrast checking tools to improve the accessibility of your design | 456 Berea Street
A great paper that explains a conversion that preserves color differences is:
Detail Preserving Reproduction of Color Images for Monochromats and Dichromats (PDF)
I haven't implemented the filter, but I plan to when I have some more free time.
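In the meantime, here is a much simpler hedged sketch (not the paper's filter): it simulates only achromatopsia, i.e. complete color blindness, by mapping each pixel to its Rec. 709 relative luminance, written with Qt as an arbitrary choice of image API. Proper dichromat simulation needs the LMS-space projection the paper describes:

    #include <QImage>
    #include <QtGlobal>

    QImage simulateAchromatopsia(const QImage &src)
    {
        QImage out = src.convertToFormat(QImage::Format_ARGB32);
        for (int y = 0; y < out.height(); ++y) {
            QRgb *line = reinterpret_cast<QRgb *>(out.scanLine(y));
            for (int x = 0; x < out.width(); ++x) {
                const QRgb p = line[x];
                // Rec. 709 luma weights (applied to the gamma-encoded
                // values here for simplicity; linearizing first would be
                // more correct).
                const int g = qRound(0.2126 * qRed(p)
                                   + 0.7152 * qGreen(p)
                                   + 0.0722 * qBlue(p));
                line[x] = qRgba(g, g, g, qAlpha(p));
            }
        }
        return out;
    }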
I found Colour Simulations easy to use on Windows 10. This software can apply a color-blind filter to a part of the screen or the whole screen. What's great is that it lets me interact with my PC normally, as if the filter weren't there, even in fullscreen mode. It runs quite slowly on my 4K screen with an integrated graphics card, though.