I expect xrandr --dpi 180 to change the DPI from 90 to 180 on the fly, per session, and thus effectively downscale the resolution, but nothing happens. Am I missing something? I was able to change the DPI with the silly hack of adding fonts.fontconfig.dpi=180 to /etc/nixos/configuration.nix and then running nixos-rebuild test, which is less than ideal.
To be very clear, here's my situation:
1. The default DPI is too low for my high-resolution laptop screen, so everything (fonts especially) looks painfully small.
2. I learned that adding fonts.fontconfig.dpi=180 to /etc/nixos/configuration.nix on NixOS sets the DPI from the default 96 to 180, so that everything (fonts included) is a normal size.
3. However, I now sometimes use the VM on a monitor with a lower resolution than my laptop's, and I'd like to change the DPI on the fly when switching between the laptop screen and the monitor.
After changing the DPI via that setting, programs won't necessarily immediately react to it.
You may need to re-open certain applications to see the change.
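In the meantime, here's a minimal sketch of how you could script the on-the-fly switch (assuming xrandr and xrdb are available; the DPI value is the one from the question). Note that xrandr --dpi only changes the DPI the X server reports, which many applications ignore; merging Xft.dpi into the resource database is what most toolkits actually read:

    #!/usr/bin/env python3
    # Sketch: switch DPI on the fly without a nixos-rebuild.
    import subprocess
    import sys

    dpi = sys.argv[1] if len(sys.argv) > 1 else "180"  # e.g. 96 or 180

    # Change the DPI the X server reports...
    subprocess.run(["xrandr", "--dpi", dpi], check=True)
    # ...and the Xft.dpi resource that most toolkits actually consult.
    subprocess.run(["xrdb", "-merge"],
                   input=f"Xft.dpi: {dpi}\n", text=True, check=True)

Applications started after running this pick up the new value; as noted above, already-running ones usually have to be re-opened.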
We build our own Yocto environment and distribution using a 5.10.119 kernel and Mesa 20.3.
Currently we are trying to get a MIPI-DSI (ILI9881C) screen up and running on a SOM-RK3399v2 (Friendly-Elec), but we've hit some trouble: we can get the screen to display an image, but it's shifted by roughly 100px. The shift depends heavily on the MIPI clock (Mbps) used.
All timings and clocks are correct and have been triple-checked with the screen vendor. We tried many configurations, went through several hardware revisions of our mainboard, and even tested the SOM-RK3399-Eval board from Friendly-Elec. All show the same behavior.
By accident, we found a configuration that actually works. Mathematically, however, these settings should never work, yet they do!
The screen vendor supplied us with the following timings:
H 800
HSW 20
HBP 20
HFP 20
V 1280
VSW 10
VBP 20
VFP 10
PCLK 68112 kHz (60fps)
The driver implementation for the RK3399 MIPI-DSI selects mbps=510 for these timings.
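As a sanity check, the vendor's PCLK is internally consistent with those timings (a quick check in Python; PCLK is in kHz):

    # Total clocks per frame times the refresh rate gives the pixel clock:
    h_total = 800 + 20 + 20 + 20    # H + HSW + HBP + HFP = 860
    v_total = 1280 + 10 + 20 + 10   # V + VSW + VBP + VFP = 1320
    print(h_total * v_total * 60)   # 68_112_000 Hz = 68112 kHz, as specified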
But using these values results in the shifted image:
Notice how y100 is directly at the top of the screen, rather than 100px from the bottom.
Several rounds of trial and error later, we found a configuration that works for the screen but shouldn't:
H 800
HSW 33
HBP 500
HFP 500
V 1280
VSW 10
VBP 20
VFP 10
PCLK 145173 kHz (60fps)
MBPS 457
As you can see, those timings are ridiculously off the charts, and the PCLK does not match the (hardcoded) MBPS of 457. However, the screen shows a correct, nicely aligned image, without any flickering whatsoever.
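To put numbers on that mismatch (a quick Python check; the 24 bpp / 4 lanes figures below are placeholder assumptions for illustration, plug in your own):

    # The second timing set is still self-consistent with its PCLK:
    h_total = 800 + 33 + 500 + 500   # = 1833
    v_total = 1280 + 10 + 20 + 10    # = 1320
    print(h_total * v_total * 60)    # 145_173_600 Hz = ~145173 kHz

    # ...but the hardcoded link rate can't carry that. Assuming 24 bpp
    # over 4 lanes (placeholder values):
    print(145.1736 * 24 / 4)         # ~871 Mbps per lane, far above 457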
We diagnosed this further and found that the RK3399 sends a strange, even malformed, MIPI-DSI data stream when going from LP (low-power) to HS (high-speed) with the correct timings, but sends a perfectly fine LP2HS data stream to the screen with the incorrect timings:
Observe how, with the correct timings, the signal is held high far too long (roughly three times as long) and the switch from low-power to high-speed appears to be corrupted.
Therefore, we assume that the shift in our image might be related to that incorrect LP2HS transition.
Have you seen such behavior before? Do you know what could cause it? It looks as if it performs three blankings instead of one.
Hi crowd intelligence,
I'm trying to configure my X11 server on an Ubuntu 18.04 server so that I can have a larger virtual screen than my connected output, without putting the output in panning mode. My system runs an NVIDIA GPU.
Ideally, I'd be able to do this programmatically.
Looking into the RandR protocol, I found that there is a function, XRRSetPanning, which should be able to disable panning for a given CRTC when its x and y values are set to zero. This should be identical to what is executed when I call xrandr --output $MyOutput --panning 0x0.
The issue is that when I do this, I get a BadValue error.
When I call xrandr --fb 3000x4000 --output $MyOutput --panning 1920x1080 while the output is 1920x1080 as well, I also get a BadValue. When I call xrandr --fb 3000x4000 --output $MyOutput --panning 1920x1200, the screen size gets adjusted to 1920x1200 as well, which is what I don't want. I would have expected that in all cases the screen stays at 3000x4000 and the panning is set accordingly, without errors...
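For completeness, here is the scripted version of that sequence (a sketch; "DP-0" is a placeholder for $MyOutput):

    import subprocess

    OUTPUT = "DP-0"  # placeholder for $MyOutput

    # Grow the virtual framebuffer first...
    subprocess.run(["xrandr", "--fb", "3000x4000"], check=True)
    # ...then try to pin panning to nothing; this is the call that
    # currently fails with BadValue for me.
    subprocess.run(["xrandr", "--output", OUTPUT, "--panning", "0x0"],
                   check=True)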
Interestingly enough, I have seen this working on another machine before, so I think there is a way around it.
Having dug a bit deeper, I found that this seems to be caused by a feature of the NVIDIA driver called TwinView, I suppose.
I think I have to disable this feature in order to achieve what I want... The question is how... And this is where I hope you can help me.
I'm working on a fullscreen Windows desktop application that's moderately graphics-intensive; it uses OpenGL but only renders 2D content. Nothing fancy, mostly pushing pixels to the screen (up to 4K, single monitor) and uploading textures. We're using VSync to control the rendering framerate, i.e. calling SwapBuffers() at the end of rendering to block until the next VBlank.
The main requirement we have is that the app runs at a solid 60FPS as it's used with a touchscreen, and interactions need to be as fluid as possible.
Because it's pretty basic, the app runs just fine on an 8th-gen Intel i7 CPU with an integrated Intel HD Graphics 630 GPU. Neither the CPU nor the GPU is anywhere near peak usage, and our in-app FPS meter shows we're hitting a comfortable 60FPS. I also have it running with similar results on my Surface Book 2 with an Intel i7 and integrated Intel UHD Graphics 620 GPU.
However, I've recently started noticing that the app sometimes drops to 30FPS and then stays there, either for long periods or sometimes even permanently. Through our FPS meter, I can tell that we're not actually spending any time rendering; it's just our SwapBuffers() call that arbitrarily blocks for two frames, capping us at 30FPS. The only way to get back to 60FPS is to alt-tab to another app and back to ours, or simply to bring up the Windows menu and then go back to the app.
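For what it's worth, this is roughly how our FPS meter separates render time from swap-block time (a simplified sketch; render and swap_buffers stand in for our actual calls):

    import time

    def timed_frame(render, swap_buffers):
        t0 = time.perf_counter()
        render()            # our actual 2D drawing, consistently fast
        t1 = time.perf_counter()
        swap_buffers()      # blocks until the next vblank
        t2 = time.perf_counter()
        # Healthy: the swap blocks for the remainder of one ~16.7 ms frame.
        # Capped: render time stays tiny while the swap consistently
        # blocks for ~33 ms, i.e. two vblank intervals -> 30FPS.
        return t1 - t0, t2 - t1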
Because the app goes back to 60FPS afterwards, I'm positive this is intended behavior of the Intel driver, probably meant for gaming (gamers prefer a stable 30FPS to irregular, occasional dropped frames, which make a game look choppy).
In our case, however, dropping an occasional frame isn't a big deal, whereas being capped at 30FPS makes our UI and interactions far less pleasing to the eye, especially when the app could easily render at a smooth 60FPS instead.
Is there any way to switch the driver behavior to prefer pushing 60FPS with occasional drops rather than capping at 30FPS?
OK, so I was able to figure this out with a little tweaking and reverse engineering. The answer is that yes, this is an intended but unfortunate default behavior of the Intel driver, and it can be fixed via the Intel HD Graphics Control Panel app if available, or directly in the registry otherwise (which is the only way to fix the issue on the Surface Book and other Surface devices, where the custom Intel driver no longer exposes the Intel HD Graphics Control Panel app).
Starting with the simple solution: in the Intel HD Graphics Control Panel app, go to "3D", then "Application Settings". You'll first need to create an application profile by selecting the file on disk of the process that creates the OpenGL window. Once that's done, the setting you want to adjust is "Vertical Sync". By default, "Use Application Default Settings" is selected; this is the setting that causes the capping at 30FPS. Select "Use Driver Settings" instead to disable that behavior and always target 60FPS.
This would have been pretty obvious if it weren't for Intel's horrible choice of terms and incomprehensible documentation. To me, the choices look inverted: I would expect the capping to happen when I select "Use Driver Settings", which implies the driver is free to adjust buffer swapping as it sees fit. Similarly, "Use Application Default Settings" implies that the app decides when to push frames, which is precisely the opposite of what the setting does. Even the little help bubbles in the app seem to contradict what these settings do...
PS: I'll post the registry-based solution in a separate answer to keep this one short.
Here is the registry-based answer, for cases where your driver does not expose the Intel HD Graphics Control Panel (such as the driver used on the Surface Book and possibly other Surface laptops), or where you want to apply the fix programmatically via regedit.exe or the Win32 API:
The application profiles created by the Intel HD control panel are saved in the registry under HKCU\Software\Intel\Display\igfxcui\3D, using a key named after the process file name (e.g. my_game.exe) and a REG_BINARY value with a 536-byte data blob divided like this:
Byte 0-3: Anisotropic Filtering (0 = use app default, 2-10 = multiplier setting)
Byte 4-7: Vertical Sync (0 = use app default, 1 = use driver setting)
Byte 8-11: MSAA (0 = use app default, 1 = force off)
Byte 12-15: CMAA (0 = use app default, 1 = override, 2 = enhance)
Byte 16-535: Application Display Name (wide-chars, for use in the control panel's application list)
Note: all values are stored in little-endian
In addition, you need to make sure that the Global value under the same key has its first byte set to 1; it acts as a sort of global toggle (the control panel sets it to 1 when one or more entries are added to the applications list, then back to zero when the last entry is deleted from the list).
The Global value is also a REG_BINARY, with 8 bytes encoded like this:
Byte 0-3: Global toggle for application entries (0 = no entries, 1 = entries)
Byte 4-7: Application Optimal mode (0 = enabled, 1 = disabled)
For example:
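A minimal Python sketch, assuming the layout above (standard struct/winreg modules; the executable name and display name are placeholders, and you should back up the key before experimenting):

    import struct
    import winreg

    APP_EXE = "my_game.exe"  # placeholder: your process's file name
    KEY_PATH = r"Software\Intel\Display\igfxcui\3D"

    # 536-byte blob: four little-endian DWORDs, then a 520-byte
    # wide-char display-name field (bytes 16-535).
    blob = struct.pack(
        "<4I",
        0,  # Anisotropic Filtering: 0 = use app default
        1,  # Vertical Sync: 1 = use driver setting (lifts the 30FPS cap)
        0,  # MSAA: 0 = use app default
        0,  # CMAA: 0 = use app default
    ) + "My App".encode("utf-16-le").ljust(520, b"\x00")

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, APP_EXE, 0, winreg.REG_BINARY, blob)
        # Global: first DWORD = 1 while any entries exist, second DWORD
        # = Application Optimal mode (0 = enabled).
        winreg.SetValueEx(key, "Global", 0, winreg.REG_BINARY,
                          struct.pack("<2I", 1, 0))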
I just picked up the Lenovo Thinkpad Yoga 15 (20DQ001KUS) for $799. Here is a link: http://shop.lenovo.com/us/en/laptops/thinkpad/yoga-series/yoga-15/#tab-tech_specs
I'd really like to get the tablet features working in Linux Mint 17.2. I have been doing some googling and found some old scripts below:
https://github.com/admiralakber/thinkpad-yoga-scripts
https://launchpad.net/magick-rotation
With their help, I was able to put together a Python script that polls my accelerometer values (found in /sys/bus/iio/devices/iio:device*) to figure out when the screen should be rotated. The difficult part seems to be detecting when the device should be in laptop or tablet mode. If I bend the lid past 180 degrees, or tilt the device on its side so that it is perpendicular to the ground, a light on my keyboard shuts off and the keyboard itself is completely disabled. This appears to be a BIOS feature and indicates that there is definitely a sensor. Reading that sensor directly would be ideal, and it would solve all of my current problems.
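The core of it is just a polling loop along these lines (a stripped-down sketch, not the full script; the device index and axis meanings vary per machine):

    import glob
    import time
    from pathlib import Path

    def find_accelerometer():
        # IIO devices enumerate in arbitrary order, so match by name.
        for dev in glob.glob("/sys/bus/iio/devices/iio:device*"):
            if "accel" in (Path(dev) / "name").read_text():
                return Path(dev)
        raise RuntimeError("no accelerometer found")

    dev = find_accelerometer()
    while True:
        x = int((dev / "in_accel_x_raw").read_text())
        y = int((dev / "in_accel_y_raw").read_text())
        # The dominant axis (and its sign) tells me which edge of the
        # screen points down, i.e. which rotation to apply via xrandr.
        print(x, y)
        time.sleep(0.5)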
Unfortunately, I have been unable to find any sensor readout in Linux that would tell me when the computer should be in tablet mode. I need this to determine when autorotate should be turned on, when the touchpad/trackpad/keyboard should be disabled, and when I should launch or kill the onscreen keyboard app.
In the meantime, I'm using the lid's incline sensor (also found in /sys/bus/iio/devices/iio:device*). This mostly works, as it can detect tent mode and tablet mode. Unfortunately, the sensor is aligned to gravity, so it cannot tell the difference between laptop mode and stand mode (because the lid/screen has the same orientation with respect to gravity in both).
The older scripts suggest that older models emit a special keycode when the mode changes from laptop to tablet and vice versa. Unfortunately, I am not seeing any such keycode/scancode when I monitor xev/evtest.
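For anyone who wants to double-check their own unit for a hardware switch, a raw evdev reader like the sketch below will print EV_SW switch events (including SW_TABLET_MODE, if the firmware exposes one); the event node path is a placeholder, and reading it requires root:

    import struct

    EVENT_NODE = "/dev/input/event0"  # placeholder: try each event node
    EV_SW, SW_TABLET_MODE = 0x05, 0x01
    # struct input_event on 64-bit Linux: timeval (two longs), then
    # type and code (two u16s) and value (s32).
    EVENT_FMT = "llHHi"
    EVENT_SIZE = struct.calcsize(EVENT_FMT)

    with open(EVENT_NODE, "rb") as f:
        while True:
            _, _, ev_type, code, value = struct.unpack(
                EVENT_FMT, f.read(EVENT_SIZE))
            if ev_type == EV_SW and code == SW_TABLET_MODE:
                print("tablet mode:", bool(value))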
In short, I'd like to be able to programmatically determine when the laptop screen/lid is opened more than 180 degrees. Can anyone help me with this?
My current script is linked below if anyone wants to see how I'm doing it (or wants to make suggestions). I've only spent a day on it, so it's nowhere near finished. I'll probably publish it as a proper GNU-licensed release at some later point for anyone else needing the same functionality.
https://gist.github.com/anonymous/5d2c2d2967eac8774b69
P.S. As an aside, I'm still trying to get full functionality out of the ALPS touchpad. Using some psmouse DKMS installers, I have been able to get multi-touch capabilities like two-finger scrolling and two/three-finger tap working. However, the three physical buttons don't do anything (even in xev/evtest). Passing proto=imps to the psmouse module makes them work, but breaks the multi-touch functionality. If anyone would like to offer advice on that, I'd be grateful as well.
I am working with a VB window that seems to resize itself depending on the resolution of the monitor.
Right now, I have VS2010 open on a monitor that is 1366 x 768. I have the form set to the following dimensions:
MaximumSize, MinimumSize and Size are all set to 948x580.
When I run the app, it looks fine.
Now, I have another monitor that has the resolution set to 1680x1050.
When I run the app, the window is 1263x705.
I was under the impression that forcing the Max and Min sizes would lock the size of the window. Is this correct? Is there some other setting I am possibly missing?
take care,
lee
This is just a guess, but what is your Form.AutoScaleMode set to? It defaults to the AutoScaleMode.Font enumeration on my system, even though the documentation says it defaults to AutoScaleMode.None. You may also want to look at this MSDN link on AutoScaling.
None - Automatic scaling is disabled.
Font - Controls scale relative to the dimensions of the font the classes are using, which is typically the system font.
Dpi - Controls scale relative to the display resolution. Common resolutions are 96 and 120 DPI.
Inherit - Controls scale according to the classes' parent's scaling mode. If there is no parent, automatic scaling is disabled.