VK_IMAGE_USAGE_TRANSFER_SRC_BIT flag magically added to swapchain images

Is VkImageUsageFlagBits::VK_IMAGE_USAGE_TRANSFER_SRC_BIT supposed to be set automatically for swapchain images?
I've been looking it up, and I don't see any indication that this flag is supposed to be automatically added to swapchain images.
However, I have tested that in SDK 1.2.148.1 (I haven't tried others), on an NVIDIA GTX 1080 Ti with the latest drivers, the flag is automatically added even when not requested. I'm not sure whether it's the SDK or the driver that is setting it, but one of them is.
I first verified it in my own code, then tried it using Sascha Willems' triangle example by commenting out the following line:
https://github.com/SaschaWillems/Vulkan/blob/master/base/VulkanSwapChain.cpp#L347
This is a screenshot of the triangle app running under NVIDIA Nsight. It shouldn't have the flag:
[Nsight screenshot showing the swapchain image's usage flags, including VK_IMAGE_USAGE_TRANSFER_SRC_BIT]

Vulkan has no mechanism to tell you what the usage flags are on an image, whether swapchain or not. As such, you are relying on layers and debugging tools to feed that information to you.
However, these tools themselves are perfectly capable of setting these flags. If a debugging tool wants to be able to show you the image data in a swapchain image, then that image must be usable as the source for a transfer operation. So such a tool must set those flags. The same goes for any layers that might be involved in debugging.
Indeed, the fact that the debugging tool knows that the flag has been set is evidence that the Vulkan implementation didn't set it. If it had set the flag, nobody outside of the implementation would know about it.
So in all likelihood, this is the act of a debugging layer. Unfortunately, it is setting this flag before the validation layer sees it. And to validate uses of the imageless framebuffer feature, the validation layer needs to look at those flags.
To fix this, you may be able to reorder the layers you have activated. ppEnabledLayerNames is an ordered list of layers; if you aren't doing so already, put the validation layers first in this list. If you're enabling layers more globally, see if you can play around with the order of those layers.
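As an illustration only, a minimal sketch of putting the validation layer first in the layer list at instance creation (the second layer name is a placeholder for whatever capture/debug layer you actually enable, not a real layer name):

```cpp
#include <vulkan/vulkan.h>

// Sketch: put the validation layer first in ppEnabledLayerNames so it sees
// the swapchain create info before other layers modify it.
VkInstance createInstanceWithValidationFirst()
{
    const char* layers[] = {
        "VK_LAYER_KHRONOS_validation",   // validation first
        "VK_LAYER_SOME_CAPTURE_TOOL"     // placeholder: your capture/debug layer
    };

    VkInstanceCreateInfo info{};
    info.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.enabledLayerCount   = 2;
    info.ppEnabledLayerNames = layers;
    // ... pApplicationInfo, enabled extensions, etc.

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&info, nullptr, &instance);
    return instance;
}
```

If the tool injects itself as an implicit layer instead, this list won't affect it, and you'll need to adjust the global layer ordering as noted above.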

You can set the imageUsage bits explicitly in VkSwapchainCreateInfoKHR.
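For instance, a minimal sketch of requesting the extra usage yourself, after confirming the surface actually supports it (physicalDevice and surface are assumed to already exist):

```cpp
#include <vulkan/vulkan.h>

// Sketch: request TRANSFER_SRC on the swapchain images explicitly instead of
// relying on a tool or layer to add it; only request what the surface supports.
VkSwapchainCreateInfoKHR makeSwapchainInfo(VkPhysicalDevice physicalDevice,
                                           VkSurfaceKHR surface)
{
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);

    VkImageUsageFlags usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    if (caps.supportedUsageFlags & VK_IMAGE_USAGE_TRANSFER_SRC_BIT)
        usage |= VK_IMAGE_USAGE_TRANSFER_SRC_BIT;   // e.g. for screenshots

    VkSwapchainCreateInfoKHR info{};
    info.sType      = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    info.surface    = surface;
    info.imageUsage = usage;
    // ... fill in minImageCount, imageFormat, imageExtent, presentMode, etc.
    return info;
}
```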

Related

Extent of support for using Vulkan Swapchain Images as Transfer Destination

In my Vulkan backend implementation I currently check the supported usage flags for the swapchain and then proceed to either use copy commands or a fall-back render pass to draw to the back buffer from an intermediate render target. I wanted to know whether this check is required, or whether it is safe to assume that swapchain images allow usage as a transfer destination on typical desktop hardware.
Also, if anyone knows about Vulkan implementations that do not allow Copying to Swapchain Images, I'd appreciate it if you could share. This is mostly for the sake of curiosity rather than solving a problem.
You can look at the Vulkan Hardware Database.
I couldn't find anywhere that summarizes the data, but if you click on a device in the list, then on the Surface tab, then on the Surface properties tab, you can see supportedUsageFlags in the table and look for TRANSFER_DST_BIT.
I only looked at a few and they all had TRANSFER_DST_BIT present. I believe the database and code for the viewer are open source, so perhaps you can find a better way to mine the particular information you're after.
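For reference, the runtime check the question describes boils down to something like this (a sketch; physicalDevice and surface are assumed to be already created):

```cpp
#include <vulkan/vulkan.h>

// Sketch: decide between the copy path and the render-pass fallback from what
// the surface actually advertises.
bool swapchainSupportsTransferDst(VkPhysicalDevice physicalDevice,
                                  VkSurfaceKHR surface)
{
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);
    return (caps.supportedUsageFlags & VK_IMAGE_USAGE_TRANSFER_DST_BIT) != 0;
    // If true: vkCmdCopyImage / vkCmdBlitImage from the intermediate target.
    // If false: fall back to a full-screen pass that samples that target.
}
```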

In Vulkan how can you associate each individual video card with monitors they're directly connected to

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle, but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine or NVIDIA GPU. It seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of the window on each monitor, which lets you associate a Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
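To make the matching concrete, here is a hedged sketch of the LUID comparison described above (error handling omitted; instance, device enumeration, and DXGI factory setup are assumed to exist elsewhere):

```cpp
#include <windows.h>
#include <dxgi.h>
#include <vulkan/vulkan.h>
#include <cstring>

// Sketch: does this Vulkan physical device correspond to this DXGI adapter?
// Requires VK_KHR_get_physical_device_properties2 + VK_KHR_external_memory_capabilities
// (or Vulkan 1.1, where both are core) so that deviceLUID is populated.
bool sameAdapter(VkPhysicalDevice gpu, IDXGIAdapter* adapter)
{
    VkPhysicalDeviceIDProperties idProps{};
    idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;

    VkPhysicalDeviceProperties2 props2{};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &idProps;
    vkGetPhysicalDeviceProperties2(gpu, &props2);  // on pre-1.1, load the KHR entry point via vkGetInstanceProcAddr

    DXGI_ADAPTER_DESC desc{};
    adapter->GetDesc(&desc);

    // deviceLUID is only meaningful when deviceLUIDValid is set.
    return idProps.deviceLUIDValid == VK_TRUE &&
           std::memcmp(idProps.deviceLUID, &desc.AdapterLuid, VK_LUID_SIZE) == 0;
}
// With a match in hand, IDXGIAdapter::EnumOutputs lists the monitors on that
// GPU, and MonitorFromWindow on the HWND backing each surface tells you which
// output (and therefore which physical device) that surface belongs with.
```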
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, and I'm afraid you need to use OS-specific functions (you need to rely on the WinAPI in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, using the Windows API. It can be straightforward.
But if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
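A rough sketch of what that looks like with GLFW (glfwInit() and the VkInstance, created with the extensions reported by glfwGetRequiredInstanceExtensions, are assumed; error checks omitted):

```cpp
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#include <vector>

// Sketch: one fullscreen window, and one Vulkan surface, per connected monitor.
std::vector<VkSurfaceKHR> createPerMonitorSurfaces(VkInstance instance)
{
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);   // Vulkan, not OpenGL

    int monitorCount = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&monitorCount);

    std::vector<VkSurfaceKHR> surfaces;
    for (int i = 0; i < monitorCount; ++i) {
        const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
        GLFWwindow* window = glfwCreateWindow(mode->width, mode->height,
                                              "view", monitors[i], nullptr);

        VkSurfaceKHR surface = VK_NULL_HANDLE;
        glfwCreateWindowSurface(instance, window, nullptr, &surface);
        surfaces.push_back(surface);
        // On Win32, glfwGetWin32Window(window) (with GLFW_EXPOSE_NATIVE_WIN32
        // and glfw3native.h) gives the HWND if you want to correlate with DXGI
        // outputs as in the accepted workaround above.
    }
    return surfaces;
}
```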

DXGI: trying to get correct display mode from output (monitor)

I'm currently stuck with a pesky little issue. I developed an application that zeroes out the DXGI mode desc. structure and calls FindClosestMatchingMode() to, as advertised, "gravitate towards the desktop resolution".
This works fine if the laptop(s) run fully on their own display -- as soon as I plug in another monitor it goes berserk. If I extend my desktop, it will still correctly get the laptop monitor resolution, yet the attached one (running 1080p) will yield a preference for 800*480 :) (sure, poor man's 16:10, but...)
Doing the same thing with the monitors cloned/combined (which results in one output device), even if their resolutions are equal, gives the same 800*480 crap.
What gives? And has anyone perhaps found a way to properly get a display's current mode through DXGI, or a pointer to a wholly different yet functional approach to this problem?
Life was easier back in the D3D9 days =)
-- Update
As it turns out, any FindClosestMatchingMode() call made on the IDXGIOutput instance belonging to the external monitor behaves differently (and in most cases just plain wrong) compared to the internal display, even though their native resolutions are identical. To top it all off, other systems don't have this issue, yet I can't get around supporting this particular laptop, including its drivers.
Time for a good old setup dialog.
Not the best solution, but as I was constrained to these exact machines, I settled for getting the monitor's current resolution through GetSystemMetrics() (SM_CXSCREEN/SM_CYSCREEN), which admittedly only works for the primary monitor (there are other ways), and feeding this resolution into the ModeToMatch structure passed to FindClosestMatchingMode().
It then settles for the correct (desktop) resolution.
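In code, the workaround amounts to roughly this (a sketch; `output` is the IDXGIOutput for the monitor and `device` the Direct3D device, both assumed created elsewhere):

```cpp
#include <windows.h>
#include <dxgi.h>

// Sketch: seed FindClosestMatchingMode with the real desktop resolution
// instead of a zeroed-out mode description. Note that
// GetSystemMetrics(SM_CXSCREEN/SM_CYSCREEN) reports the primary monitor only.
DXGI_MODE_DESC pickDesktopMode(IDXGIOutput* output, IUnknown* device)
{
    DXGI_MODE_DESC modeToMatch = {};
    modeToMatch.Width  = static_cast<UINT>(GetSystemMetrics(SM_CXSCREEN));
    modeToMatch.Height = static_cast<UINT>(GetSystemMetrics(SM_CYSCREEN));
    modeToMatch.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    // RefreshRate left at 0/0 so the output keeps its preferred rate.

    DXGI_MODE_DESC closestMode = {};
    output->FindClosestMatchingMode(&modeToMatch, &closestMode, device);
    return closestMode;
}
```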
Better answers are very welcome of course ;)

Framebuffer Documentation

Is there any documentation on how to write software that uses the framebuffer device in Linux? I've seen a couple of simple examples that basically say: "open it, mmap it, write pixels to the mapped area." But there is no comprehensive documentation on how to use the different ioctls for it or anything else. I've seen references to "panning" and other capabilities, but googling gives way too many hits of useless information.
Edit:
Is the code the only documentation from a programming standpoint, as opposed to "how to configure your system to use the fb" user documentation?
You could have a look at fbi's source code, an image viewer which uses the Linux framebuffer. You can get it here: http://linux.bytesex.org/fbida/
-- It appears there might not be too many options for programming the fb from user space on a desktop beyond what you mentioned. This might be one reason why some of the docs are so old. Look at this HOWTO for device driver writers, which is referenced from some official Linux docs: www.linux-fbdev.org/HOWTO/index.html. It does not reference too many interfaces, although looking at the Linux source tree does offer larger code examples.
-- opentom.org/Hardware_Framebuffer is not for a desktop environment. It reinforces the main methodology, but it does seem to avoid explaining all the ingredients necessary for doing the "fast" double-buffer switching it mentions. Another one, for a different device and which leaves some key buffering details out, is wiki.gp2x.org/wiki/Writing_to_the_framebuffer_device; it does at least suggest you might be able to use fb1 and fb0 to engage double buffering (on that device; for a desktop, fb1 may not be available or may access different hardware), that using the volatile keyword might be appropriate, and that we should pay attention to vsync.
-- asm.sourceforge.net/articles/fb.html has assembly language routines that also appear to just do the basics: querying, opening, setting a few parameters, mmap, drawing pixel values to storage, and copying over to the fb memory (making sure to use a short stosb loop, I suppose, rather than some longer approach).
-- Beware of 16 bpp comments when googling Linux frame buffer: I used fbgrab and fb2png during an X session to no avail. These each rendered an image that suggested a snapshot of my desktop screen as if the picture of the desktop had been taken using a very bad camera, underwater, and then overexposed in a dark room. The image was completely broken in color and size, and missing much detail (dotted all over with pixel colors that didn't belong). It seems that /proc and /sys on the computer I used (a new kernel with at most minor modifications, from a PCLOS derivative) claim that fb0 uses 16 bpp, and most things I googled stated something along those lines, but experiments led me to a very different conclusion. Besides the results of these two failures from standard frame buffer grab utilities (for the versions held by this distro) that may have assumed 16 bits, I had a successful test result treating frame buffer pixel data as 32 bits. I created a file from data pulled in via cat /dev/fb0. The file's size ended up being 1920000. I then wrote a small C program to try and manipulate that data (under the assumption it was pixel data in some encoding or other). I nailed it eventually, and the pixel format matched exactly what I had gotten from X when queried (TrueColor RGB 8 bits, no alpha but padded to 32 bits). Notice another clue: my screen resolution of 800x600 times 4 bytes gives 1920000 exactly. The 16-bit approaches I tried initially all produced a broken image similar to fbgrab's, so it's not as if I was simply looking at the wrong data. [Let me know if you want the code I used to test the data. Basically I just read in the entire fb0 dump and then spit it back out to a file, after adding a header "P6\n800 600\n255\n" that creates a suitable ppm file, and while looping over all the pixels manipulating their order or expanding them, with the end successful result for me being to drop every 4th byte and switch the first and third in every 4-byte unit. In short, I turned the apparent BGRA fb0 dump into an RGB ppm file; ppm can be viewed with many picture viewers on Linux. A rough sketch of that kind of conversion follows this list.]
-- You may want to reconsider your reasons for wanting to program using fb0. You may not achieve any worthwhile performance gains over X (this was my, if limited, experience) while giving up the benefits of using X. This might also account for why few code examples exist.
-- Note that DirectFB is not fb. DirectFB has of late gotten more love than the older fb, as it is more focused on the sexier 3d hw accel. If you want to render to a desktop screen as fast as possible without leveraging 3d hardware accel (or even 2d hw accel), then fb might be fine but won't give you anything much that X doesn't give you. X apparently uses fb, and the overhead is likely negligible compared to other costs your program will likely have (don't call X in any tight loop, but instead at the end once you have set up all the pixels for the frame). On the other hand, it can be neat to play around with fb as covered in this comment: Paint Pixels to Screen via Linux FrameBuffer
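As referenced in the list above, here is a minimal sketch of that kind of fb0-dump-to-PPM conversion (not the original poster's code). The resolution and BGRA layout are assumptions that must match your framebuffer; here they follow the 800x600, 32 bpp case described above:

```cpp
#include <cstdio>
#include <vector>

// Sketch: turn a raw BGRA dump of /dev/fb0 (e.g. from `cat /dev/fb0 > dump`)
// into a viewable PPM by dropping the padding byte and swapping B and R.
int main()
{
    const int width = 800, height = 600;            // assumed resolution
    std::vector<unsigned char> bgra(width * height * 4);

    FILE* in = std::fopen("dump", "rb");
    std::fread(bgra.data(), 1, bgra.size(), in);
    std::fclose(in);

    FILE* out = std::fopen("dump.ppm", "wb");
    std::fprintf(out, "P6\n%d %d\n255\n", width, height);   // PPM header
    for (std::size_t i = 0; i < bgra.size(); i += 4) {
        unsigned char rgb[3] = { bgra[i + 2], bgra[i + 1], bgra[i] };
        std::fwrite(rgb, 1, 3, out);
    }
    std::fclose(out);
}
```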
Check the MPlayer sources.
Under the /libvo directory there are a lot of video output plugins used by MPlayer to display multimedia. There you can find the fbdev plugin (the vo_fbdev* sources), which uses the Linux frame buffer.
There are a lot of ioctl calls, with the following codes:
FBIOGET_VSCREENINFO
FBIOPUT_VSCREENINFO
FBIOGET_FSCREENINFO
FBIOGETCMAP
FBIOPUTCMAP
FBIOPAN_DISPLAY
It's not exactly documentation, but it is surely a good reference implementation.
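For orientation, the basic sequence those ioctls fit into looks roughly like this (a sketch with no error handling; the pixel format and geometry must be taken from the vscreeninfo on your system rather than assumed):

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

// Sketch: open the framebuffer, query its geometry, map it, and put one pixel.
int main()
{
    int fb = open("/dev/fb0", O_RDWR);

    fb_var_screeninfo var{};   // resolution, bpp, offsets (FBIOGET_VSCREENINFO)
    fb_fix_screeninfo fix{};   // line length, memory length (FBIOGET_FSCREENINFO)
    ioctl(fb, FBIOGET_VSCREENINFO, &var);
    ioctl(fb, FBIOGET_FSCREENINFO, &fix);

    uint8_t* fbmem = static_cast<uint8_t*>(
        mmap(nullptr, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fb, 0));

    // Write a white pixel at (100, 100), assuming 32 bpp.
    const std::size_t offset = 100 * fix.line_length + 100 * (var.bits_per_pixel / 8);
    *reinterpret_cast<uint32_t*>(fbmem + offset) = 0x00FFFFFF;

    // FBIOPAN_DISPLAY with an adjusted var.yoffset is what the "panning"
    // references are about (e.g. flipping between two pages for double buffering).

    munmap(fbmem, fix.smem_len);
    close(fb);
}
```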
Look at the source code of any of: fbxat, fbida, fbterm, fbtv, the DirectFB library, libxineliboutput-fbe, ppmtofb, xserver-fbdev; all are Debian packages. Just apt-get source them from the Debian repositories. There are many others...
Hint: search for "framebuffer" in the package descriptions using your favorite package manager.
OK, even if reading the code is sometimes called "guru documentation", actually doing it can be a bit much.
The source to any splash screen (i.e. during booting) should give you a good start.

Capturing print output as vector format (PDF,SVG,EMF,etc.)

BACKGROUND
I am using a commercial application on Windows that creates a drawing
This application allows only two output options: (1) save as a bitmap file and (2) print to a printer
the bitmap is useless for my purposes - I want the vectors
Looking at the print output (I sent it to the Windows XPS print driver), it seems clear, based on the amount of zooming I can do without loss of detail, that the underlying vectors are being sent to the print driver
Once I get the vectors, I will be writing some code to transform them for some other use.
MY QUESTION
What are my options for getting the vectors from the print output? (I'm open to both commercial and open source.)
OPTIONS I HAVE THOUGHT OF SO FAR
Take the bitmap and use a program like VectorMagick to trace it. I have tried this approach. It does not produce the fidelity I seek, even when the original bitmap is large. Practically speaking, I believe that using any tracing approach will not give me the quality vectors I need.
Print to the Adobe PDF driver. This technically works. I have Adobe CS4, so I can print to it, save the resulting PDF, and then import the PDF into Illustrator and export it as some other vector format. The problem with this approach is money/licensing. I own a personal copy of Adobe CS4, so this is fine for me. But I need to capture the vectors at work for business purposes, and no, I'm not going to install my personal copy of CS4 at work.
Is there a "print driver" that captures the print output directly into a vector format? I have seen some commercial ones via google. If you've used them, I would like to hear about your experience with this technique. I could write my own and in that case do you have links to any existing code that I can start with.
If this is an ongoing solution you need, then you might have to buy something or build your own. If it's a one-time affair, you might look to use an "older" Lexmark PCL printer driver; I'd recommend something like the T610. If you download and install the PCL driver, you can modify the defaults and change the Graphics option from XL or Autoselect to GL/2. This forces the driver to emit GL/2 output, which is vector (GL/2 is a plotter language). This might do the trick for you. Other printer drivers may have the ability to force GL/2 (vs. raster), but I'm not sure. I used to work for Lexmark and have used this before for a similar requirement.
Ensure you use the Lexmark "Custom" driver, as I don't think the Microsoft-based one supports this feature.
...pausing while I investigate a few things............I'm back...
Another option is to find another GL/2 driver or build your own. I just took a few minutes to search the web and came up with a few other options that might work.
Build your own:
I've built drivers (minidrivers) using the Windows Driver Development Kit (DDK); it's quite simple to construct basic drivers. It looks like there is a setting you can use to enable GL/2 output: Enabling HP-GL/2 Vector Graphics Support (PCL-5e) in the GPD
Alternate drivers:
Depending on the OS you are on, there is probably a "generic" GL/2 driver built in. I believe XP has a Hewlett-Packard HP-GL/2 Plotter driver. You might need to check the license (as with the Lexmark solution), but it might work for you, and as it's part of the OS there shouldn't be concern about using it; it's probably written by and copyrighted to Microsoft.
Keep in mind you will have to do some work to convert GL/2 to whatever output you want, but it should be a matter of a simple translator converting each set of commands. There may be tools out there to help. Here is a quick link to the Lexmark GL/2 reference, which might be enough to get you going; check out the GL/2 information under the PCL section: Lexmark Technical Reference Guide
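To illustrate the "simple translator" idea, here is a rough sketch that maps a tiny subset of HP-GL/2 (PU = pen up, PD = pen down) to SVG path data. This is not a complete GL/2 parser; real GL/2 has many more commands (PA/PR, SP, IP/SC scaling, arcs, ...), so treat it as a starting point only:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Sketch: translate PU/PD coordinate lists into SVG "M"/"L" path commands.
int main()
{
    const std::string hpgl = "PU0,0;PD100,0,100,100,0,100,0,0;";  // a square, as an example
    std::ostringstream path;

    std::istringstream commands(hpgl);
    std::string cmd;
    while (std::getline(commands, cmd, ';')) {
        if (cmd.size() < 2) continue;
        const std::string op = cmd.substr(0, 2);
        if (op != "PU" && op != "PD") continue;          // ignore everything else

        std::string nums = cmd.substr(2);
        for (char& c : nums) if (c == ',') c = ' ';      // commas -> spaces
        std::istringstream coords(nums);
        long x, y;
        while (coords >> x >> y)                         // pen up moves, pen down draws
            path << (op == "PU" ? 'M' : 'L') << x << ' ' << y << ' ';
    }

    std::cout << "<svg xmlns=\"http://www.w3.org/2000/svg\">\n"
              << "  <path d=\"" << path.str() << "\" fill=\"none\" stroke=\"black\"/>\n"
              << "</svg>\n";
}
```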
Postscript:
The last option I have is to use a generic PostScript driver. PostScript should preserve the vector graphics in its output, but my knowledge of this is limited at best.
Output:
If you need the output routed to a file, you can set the port to FILE: (which requires user intervention), or install something like RedMon (or connect with me and I'll send you our port monitor that allows automatic output to a file).
Hope this helps in some way.
My favorite is the open source (GPL) PDFCreator
http://sourceforge.net/projects/emfprinter/
