I am trying to connect the USRP_UHD device to the FM_mono_demo waveform. The waveform's component that connects to the USRP_UHD is the TuneFilterDecimate. The USRP_UHD outputs 16-bit integers, while the TuneFilterDecimate component requires float values.
The fix is to add the DataConverter component at the beginning of the FM_mono_demo waveform.
I am also going to be experimenting with a different SDR whose 'device' outputs floats, so the original waveform would be correct for it.
Therefore I would need two versions of the FM_mono_demo waveform: the original and one modified with the DataConverter component.
A better solution would be to launch the DataConverter component, if needed, using Python and connect it to the first component of the waveform.
There is a launchComponent method within the sandbox, but I cannot find a way to do the same within a domain.
An idea would be to create two waveforms: one would be the main waveform, and the second would consist of components that could be accessed and connected to the main waveform as needed.
This leads to the idea that multiple waveforms could be connected at run time to allow for dynamic configuration. There is a lot going on with this question; maybe I have overlooked an obvious way to solve my original problem.
Your problem is rather broad, but I think I can point you in the right direction.
It seems as though you're dealing with two issues:
1) Moving from the sandbox to a domain
2) Dealing with multiple devices
For problem 1), I recommend you dig further into the manual, specifically this section: REDHAWK Manual: The Runtime Environment.
For problem 2), you can find more information in this section of the manual: REDHAWK Manual: Working with Devices. That section covers specifying a particular piece of hardware and running components with REDHAWK devices (proxies for the actual hardware).
I recommend you start with those steps and post specific questions as you run into issues. You didn't actually ask a question here, but I think your confusion lies in understanding the REDHAWK architecture itself.
I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle, but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the VK_KHR_display extension is one way of doing this, but it's not available on my Windows 10 machine or Nvidia GPU; it seems to be intended for embedded platforms only. However, it does let you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DXGI_ADAPTER_DESC structure in DirectX.
You can also use glfwGetWin32Window to get the HWND of each window; from the HWND you can determine which monitor the window is on, which lets you associate a Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
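A rough C++ sketch of that matching (the helper names FindAdapterForDevice and WindowIsOnAdapter are just illustrative; it assumes Vulkan 1.1 headers, an instance created with VK_KHR_get_physical_device_properties2 enabled, and a DXGI factory created by the caller):

```cpp
// Match a VkPhysicalDevice to its DXGI adapter via LUID, then check whether
// a given window sits on a monitor wired to that adapter.
// Error handling is omitted for brevity.
#include <vulkan/vulkan.h>
#include <dxgi.h>
#include <windows.h>
#include <cstring>

// Find the DXGI adapter whose LUID matches the given Vulkan physical device.
// Returns nullptr if the device reports no valid LUID or no adapter matches.
IDXGIAdapter* FindAdapterForDevice(IDXGIFactory* factory, VkPhysicalDevice gpu)
{
    VkPhysicalDeviceIDPropertiesKHR idProps = {};
    idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR;

    VkPhysicalDeviceProperties2KHR props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR;
    props2.pNext = &idProps;
    // On Vulkan 1.0 use the vkGetPhysicalDeviceProperties2KHR entry point
    // loaded through vkGetInstanceProcAddr instead.
    vkGetPhysicalDeviceProperties2(gpu, &props2);

    if (!idProps.deviceLUIDValid)
        return nullptr;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        if (std::memcmp(&desc.AdapterLuid, idProps.deviceLUID, VK_LUID_SIZE_KHR) == 0)
            return adapter; // caller releases
        adapter->Release();
    }
    return nullptr;
}

// Check whether a window (e.g. the HWND from glfwGetWin32Window) is on a
// monitor attached to the given adapter.
bool WindowIsOnAdapter(IDXGIAdapter* adapter, HWND hwnd)
{
    HMONITOR target = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    IDXGIOutput* output = nullptr;
    for (UINT i = 0; adapter->EnumOutputs(i, &output) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_OUTPUT_DESC desc = {};
        output->GetDesc(&desc);
        output->Release();
        if (desc.Monitor == target)
            return true;
    }
    return false;
}
```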
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross-platform) if Windows just supported the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, and I'm afraid you need to use OS-specific functions (you need to rely on WinAPI functions in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
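For reference, a small WinAPI sketch along those lines (the helper name is only illustrative); note that correlating by adapter name is weaker than the LUID matching described in the answer above:

```cpp
// Sketch: enumerate display adapters and the monitors attached to each one
// using plain WinAPI. The adapter's DeviceString can then be compared
// (loosely, by name) against VkPhysicalDeviceProperties::deviceName.
#include <windows.h>
#include <cstdio>

void ListAdaptersAndMonitors()
{
    DISPLAY_DEVICEA adapter = {};
    adapter.cb = sizeof(adapter);
    for (DWORD a = 0; EnumDisplayDevicesA(nullptr, a, &adapter, 0); ++a) {
        std::printf("Adapter %lu: %s (%s)\n", a, adapter.DeviceName, adapter.DeviceString);

        DISPLAY_DEVICEA monitor = {};
        monitor.cb = sizeof(monitor);
        for (DWORD m = 0; EnumDisplayDevicesA(adapter.DeviceName, m, &monitor, 0); ++m)
            std::printf("  Monitor %lu: %s (%s)\n", m, monitor.DeviceName, monitor.DeviceString);
    }
}
```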
As you already discovered, on Win32 you need to use the OS windowing system, via the Win32 API, to pick the display you want to use. It can be fairly straightforward.
But if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
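As an illustration, a minimal C++ sketch (assuming glfwInit() has already been called and 'instance' is a valid VkInstance; the View struct and CreateFullscreenViews name are just for this example) that opens one fullscreen window per monitor and grabs the HWND needed for the LUID matching shown in the earlier answer:

```cpp
// Create one fullscreen Vulkan-capable window per connected monitor.
// Error handling is omitted for brevity.
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#define GLFW_EXPOSE_NATIVE_WIN32
#include <GLFW/glfw3native.h>
#include <vector>

struct View {
    GLFWwindow* window;
    VkSurfaceKHR surface;
    HWND hwnd; // feed this to the DXGI/LUID matching from the earlier answer
};

std::vector<View> CreateFullscreenViews(VkInstance instance)
{
    std::vector<View> views;
    int count = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&count);

    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); // Vulkan, not OpenGL

    for (int i = 0; i < count; ++i) {
        const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
        GLFWwindow* window = glfwCreateWindow(mode->width, mode->height,
                                              "view", monitors[i], nullptr);
        View v = {};
        v.window = window;
        v.hwnd = glfwGetWin32Window(window);
        glfwCreateWindowSurface(instance, window, nullptr, &v.surface);
        views.push_back(v);
    }
    return views;
}
```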
I am using REDHAWK 1.9 on CentOS 6.3 32 bit...
I have a REDHAWK component that takes in one data stream. The waveform may want to have more than one instance of the component, depending on the data. Is it possible to do the following:
Create an instance of a component on the fly when the waveform is running?
Create dynamic connections between components when the waveform is running?
Jonathan, I'm not sure I totally understand your question but let me try an answer and you can clarify if I'm misunderstanding. It sounds like you want to have a waveform running, and depending on what the waveform does to the data flowing into it, launch more waveforms to perform other tasks on the data. Is that correct?
Dynamic launching of waveforms based on the meeting of certain conditions is not included natively with REDHAWK. However, it would be possible to create a component to do this and include it in one of your waveforms.
When stringing together multiple waveforms, make sure the connecting ports are configured as external ports.
I'm trying to interface a Nexys3 board with a VmodTFT via a VHDCI connector. I am pretty new to FPGA design, although I have experience with microcontrollers. I am trying to approach the whole problem as an FSM. However, I've been stuck on this for quite some time now. What signals constitute my power-up sequence? When do I start sampling data? I've looked at the relevant datasheets and they don't make things much clearer. Any help would be greatly appreciated (P.S.: I use Verilog for the design).
EDIT:
Sorry for the vagueness of my question. Here's specifically what I am looking at.
For starters, I am going to overlook the touch module. I want to look at the whole setup as an FSM. I am assuming the following states:
1. Setup connection or handshake signals
2. Switch on the LCD
3. Receive pixel data
4. Display video
5. Power off the LCD
Would this be a reasonable FSM? My main concerns are with interpreting the signals. Table 5 in the VmodTFT_rm manual shows a list of signals; however, I am having trouble understanding which signals are for what (this is my first time working with display modules). I am going to assume everything prefixed with TFT_ is for the display and everything with TP_ is for the touch panel (please correct me if I'm wrong). So which signals would I be driving in each state, and which would act as inputs?
Now what changes should I make to accommodate the touch panel too?
I understand I am probably asking for too much, but I would greatly appreciate a push in the right direction, as I have been stuck on this for a long time.
Your question could be filled out a little better; it's not clear exactly what's giving you trouble.
I see two relevant docs online (you may have seen these):
Schematic: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_sch.pdf
User Guide: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_rm.pdf
The user guide explains which signals are part of the power-up sequence:
You must wait between 0.5 ms and 100 ms after driving TFT-EN before you can drive DE and the pixel bus.
You must wait 0 to 200 ms after setting up valid pixel data before enabling the display (with DISP).
You must wait 160 ms after enabling DISP before you start pulsing LED-EN (PWM controls the backlight).
Admittedly the documentation doesn't look great and some of the signal names are not consistent, but I think you can figure it out from there.
After looking at the user guide to understand what the signals do, look at the schematic to find the mapping between the signal names and the VHDCI pinout. Then, when you connect the VHDCI pinout to your FPGA, look at your FPGA board's manual to find the mapping between pins on the VHDCI connector and balls of the FPGA. Finally, use the FPGA's pin-constraint settings to map an FPGA ball to a logical Verilog input on your top module.
Hope that clears things up a bit, but please clarify your question about what you don't understand.
I'm currently stuck with a pesky little issue. I developed an application that zeroes out the DXGI mode description structure and calls FindClosestMatchingMode() to, as advertised, "gravitate towards the desktop resolution".
This works fine as long as the laptop(s) run solely on their own display; as soon as I plug in another monitor it goes berserk. In the case where I extend my desktop, it will still correctly get the laptop monitor's resolution, yet the attached one (running 1080p) will yield a preference for 800*480 :) (sure, poor man's 16:10, but...)
Doing the same thing with the monitors cloned/combined (which results in one output device), even if their resolutions are equal, gives the same 800*480 crap.
What gives? And has anyone perhaps found a way to properly get a display's current mode through DXGI or a pointer for a wholly different yet functional approach to this here problem?
Life was easier back in the D3D9 days =)
-- Update
As it turns out, any FindClosestMatchingMode() call made on the IDXGIOutput instance belonging to the external monitor behaves differently (and in most cases plain wrong) compared to the internal display, even though their native resolutions are identical. To top it all off, other systems don't have this issue, yet I can't get around supporting this particular laptop, including its drivers.
Time for a good old setup dialog.
Not the best solution, but as I was constrained to these exact machines, I settled for getting the monitor's current resolution through GetSystemMetrics() (SM_CXSCREEN/SM_CYSCREEN), which admittedly only works for the primary monitor (though there are other ways), and feeding that resolution into the ModeToMatch structure passed to FindClosestMatchingMode().
It then settles for the correct (desktop) resolution.
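A minimal C++ sketch of that approach (assuming 'output' is the IDXGIOutput being presented to; the helper name and the format are just illustrative):

```cpp
// Seed FindClosestMatchingMode() with the primary monitor's current
// resolution instead of a zeroed-out mode description.
#include <windows.h>
#include <dxgi.h>

DXGI_MODE_DESC FindDesktopMode(IDXGIOutput* output, IUnknown* device)
{
    DXGI_MODE_DESC wanted = {};
    // SM_CXSCREEN/SM_CYSCREEN report the primary monitor only.
    wanted.Width  = static_cast<UINT>(GetSystemMetrics(SM_CXSCREEN));
    wanted.Height = static_cast<UINT>(GetSystemMetrics(SM_CYSCREEN));
    wanted.Format = DXGI_FORMAT_R8G8B8A8_UNORM;

    DXGI_MODE_DESC match = {};
    // 'device' may be your D3D device, or null since a format is specified.
    output->FindClosestMatchingMode(&wanted, &match, device);
    return match;
}
```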
Better answers are very welcome of course ;)
How does the open-source/free software community develop drivers for products that offer no documentation?
How do you reverse engineer something?
You observe the input and output, and develop a set of rules or models that describe the operation of the object.
Example:
Let's say you want to develop a USB camera driver. The "black box" is the software driver.
Develop hooks into the OS and/or driver so you can see the inputs and outputs of the driver
Generate typical inputs, and record the outputs
Analyze the outputs and synthesize a model that describes the relationship between the input and output
Test the model - put it in place of the black box driver, and run your tests
If it does everything you need, you're done; if not, rinse and repeat
Note that this is just a regular problem solving/scientific process. For instance, weather forecasters do the same thing - they observe the weather, test the current conditions against the model, which predicts what will happen over the next few days, and then compare the model's output to reality. When it doesn't match they go back and adjust the model.
This method is slightly safer (legally) than clean room reverse engineering, where someone actually decompiles the code, or disassembles the product, analyzes it thoroughly, and makes a model based on what they saw. Then the model (AND NOTHING ELSE) is passed to the developers replicating the functionality of the product. The engineer who took the original apart, however, cannot participate because he might bring copyrighted portions of the code/design and inadvertently put them in the new code.
If you never disassemble or decompile the product, though, you should be in legally safe waters - the only problem left is that of patents.
-Adam
Usually by reverse engineering the code. There might be legal issues in some countries, though.
Reverse Engineering
Reverse engineering Windows USB device drivers for the purpose of creating compatible device drivers for Linux
Nvidia cracks down on third party driver development
This is a pretty vague question, but I would say reverse engineering. How they go about that depends on what kind of device it is and what is available for it. In many cases the device may have a core chipset similar to another device, whose existing driver can be modified to work.