waveform diagrams in RedHawk 2.0 after launch

RedHawk 2.0 has a nicely improved waveform diagram editor and display. I have a waveform with 14 components, and I can carefully lay them out so that all components and paths are visible. But when the waveform is launched, RedHawk brings up a new diagram of the waveform with all 14 components on top of each other in the upper left of the window, which is of little use. I can rearrange the diagram, but I would have to do this every time it launches. Is there a way to get them displayed as they were saved when the diagram was edited?
And if I try to print it, I just get a blank page.

RedHawk v2.0 displays a waveform diagram when the waveform launches, which it colors to show status. It does not use the diagram layout edited when the waveform is defined, but rather tries to lay out the diagram using the components and connection lines. It does a fairly nice job as long as the connections all run in one direction with arrows pointing left to right, but if there are components that also have right-to-left connections, it gets confused and puts all the components on top of one another in the upper-left corner. So waveforms that have connections in both directions (e.g., bidirectional traffic, such as for a transceiver) become unreadable.

Related

Camera starts at floor level 2019.2 + MRTK v2.0.0

For some reason, every time I load the scene on my HoloLens 1 the camera starts at floor level and the scene content doesn't appear to anchor to the floor in the real world.
Using the MRTK demo project files I've created a scene and added in the MRTK packages, configured spatial perception in the player settings, turned on the spatial awareness settings, set it to room scale, added observers (checked they're working), added the spatial collider and renderer to an empty game object, and scratched my head many times.
Anyone know what I'm missing/doing wrong?
Unity editor screenshot:
On HoloLens (1st gen and 2) the origin of the world is at the head. From the image you posted, it appears you are designing a VR style scene. In VR, the origin is generally the floor.
As @Perazim mentions, for VR style scenes on HoloLens, you will want to adjust the content placement to account for the origin being at the head. In the Mixed Reality Toolkit example scenes (ex: Demos\HandTracking\Scenes\HandInteractionExamples) the content is contained within a SceneContent object to facilitate easy adjustment.
The older HoloToolkit contained a script that may come in handy in your scenario. While we have not yet ported it to MRTK v2.x, it should be reasonably straightforward to update.
Hope this helps!

rotate the image rendered by pbrt

I have used pbrt to render my scene. I have specified the viewing angle in the scene file, and on rendering it with pbrt I see the image from that specific viewing angle. I want to know if there is a way to rotate the scene rendered by pbrt using my mouse in real time.
No.
To see if it is even possible, render a scene and time how long it takes. In order to get it real-time you will need pbrt to render at least a few frames a second, preferably 60!
I don't think this is going to happen in 2016.
Alternatively, you will need something like an OpenGL representation to perform the real-time interaction, and then the rendered scene can only be displayed over the top (once the rendering has finished). The frustums need to match in order for you to do this, otherwise what the user interacts with will not be the same as what they see rendered.
If you're editing the scene file, it sounds like you're not in coding land, and so the only possibility is to write some program that can display the scene (in GL) and update the scene file information to match the current camera, then render using pbrt. It's all going to take a long time (pbrt needs to parse the file each time and re-buffer all the geometry), since supplying the file means pbrt won't save anything from the previous state and so will have to construct acceleration structures etc. as well as render the scene. Each frame!
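As a rough sketch of that driver-program idea (the file names, the {LOOKAT} placeholder convention, and the camera values below are all hypothetical; it only assumes the pbrt executable is on your PATH):

import subprocess

def render_from(eye, look, up,
                scene_template="scene_template.pbrt",
                scene_out="scene_current.pbrt",
                image_out="frame.exr"):
    # Build a pbrt LookAt directive from the current camera and substitute it
    # into a copy of the scene file, then invoke pbrt on the result.
    lookat = "LookAt {} {} {}  {} {} {}  {} {} {}".format(*eye, *look, *up)
    with open(scene_template) as f:
        scene = f.read().replace("{LOOKAT}", lookat)
    with open(scene_out, "w") as f:
        f.write(scene)
    # pbrt re-parses the file and rebuilds its acceleration structures on
    # every invocation, which is why this is nowhere near real time.
    subprocess.run(["pbrt", "--outfile", image_out, scene_out], check=True)

render_from(eye=(0, 5, 10), look=(0, 0, 0), up=(0, 1, 0))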
Even in code, pbrt is not going to give you great performance. It's not designed for that; it's meant to be a physically accurate path tracer (as the name suggests). In order to get anything remotely near real time, you'll need some badass acceleration structures and better command of the light model you are using. If you really are interested, you'll probably need to write your own renderer. Look into Metropolis Light Transport (MLT) and Vertex Connection and Merging (VCM), which are much more refined/efficient models using Monte Carlo methods.
Plus some pretty decent hardware with lots of cores, or a decent gfx card if you wish to employ SIMD through CUDA or equivalent.
[EDIT] Also note that the pbrt renderer is based on the book "Physically Based Rendering: From Theory to Implementation" (ISBN-13: 978-0123750792), which outlines how to implement your own version of pbrt.

Differences between Sprite and Overlay

I'm working on a task debugging the display plane configuration. In the code, I came across display planes, sprite planes and overlays. According to my understanding, the overlay constitutes the video data (for example) and the sprite is the plane (in hardware) that displays that data (video) when running. But sometimes, instead of sprite/sprite-planes, simply overlay/overlay-planes are used. So the confusion is which one is used when. I need clarification.

How to make colours on one screen look the same as another

Given two separate computers, how could one ensure that colours are being displayed roughly the same on each screen?
I.e., one screen might have 50% more brightness than another, so colours appear duller on one screen. One artist on one computer might be seeing the pictures differently from another; it's important they are seeing the same levels.
Is there some sort of calibration technique you can do via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it's only for setting the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, but you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors that are the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was originally seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (how close depends on what colors the display device can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
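As a minimal sketch of that profile-to-profile conversion, assuming Pillow's ImageCms module and placeholder .icc file names for the two displays' profiles:

from PIL import Image, ImageCms

src_profile = ImageCms.getOpenProfile("display1.icc")   # profile of the display the art was made on
dst_profile = ImageCms.getOpenProfile("display2.icc")   # profile of the viewing display
img = Image.open("artwork.png").convert("RGB")
# Map pixel values through both profiles so the image should look roughly the
# same on display #2 as it did on display #1 (default rendering intent is perceptual).
converted = ImageCms.profileToProfile(img, src_profile, dst_profile)
converted.save("artwork_for_display2.png")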
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software. For 'roughly the same', visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
If you have complete control over the monitor, video card, calibration hardware/software, and lighting used, then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels that have a host of different capabilities. Brightness is just one factor (albeit a big one). Another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment that the monitor is in. Even assuming the same brand of monitor and the same calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent placed next to the monitor itself. At one place I worked, we had to shut off all the overheads and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used; each user has a different brand and model of monitor.
You also have no control over operating system color profiles.
An extravagant solution would be to display a test picture or pattern and ask your users to take a photo of it using their mobile or webcam.
Download the picture to the computer and check whether its levels are valid or too far out of range.
This will also ensure the ambient light at the office is appropriate.
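A minimal sketch of that check, assuming Pillow and a photo file uploaded by the user; the numeric limits are arbitrary illustrations, not calibrated thresholds:

from PIL import Image, ImageStat

photo = Image.open("test_pattern_photo.jpg").convert("L")  # user's photo of the test pattern
mean = ImageStat.Stat(photo).mean[0]
lo, hi = photo.getextrema()
# Flag photos that are far too dark or bright, or where the pattern's tonal range is badly compressed.
if mean < 60 or mean > 200 or (hi - lo) < 100:
    print("Levels look out of range; ask the user to adjust the screen or ambient light.")
else:
    print("Levels look plausible.")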

Capturing high-quality(300dpi) screenshots of QT-based app in Linux

I need to make a screenshot of my form created in Qt Designer. There are numerous ways to take screenshots (GIMP, import, etc.), but all of them produce the same dpi as my monitor (about 100 dpi). This is quite enough to publish on a web site, but 300 dpi images are required for paper publications. Are there any ways to create 300 dpi screenshots?
I don't think that the 300dpi requirement for publication applies to things like screenshots, where the data is inherently pixelated. It's meant for things like graphs that can and should be generated in a vector format.
Just get the best results you can, and only use screenshots for things that are absolutely necessary, and not, for example, commandline I/O or results graphs.
If the final images are being shown smoothed and blurry, either find settings in your PDF creator to prevent this, or manually blow up the image to a multiple of its original size to preserve the original sharp pixelation.
Painting can be done on any QPaintDevice, which includes QPrinter. If you wanted to, you could set up painting redirection to a given device, then have the widget repaint itself. This might give you the higher precision you desire. For more information, look on Qt's website for the Paint System overview, and also maybe look at the QPixmap::grabWidget functions.
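A rough PyQt5 sketch of that idea, rendering a widget onto a larger QImage and tagging the file with 300 dpi metadata (the button is a stand-in for your designer form, and the scale factor assumes a roughly 100 dpi screen):

import sys
from PyQt5.QtGui import QImage, QPainter
from PyQt5.QtWidgets import QApplication, QPushButton

app = QApplication(sys.argv)
widget = QPushButton("Example widget")     # stand-in for the form built in Qt Designer
widget.resize(300, 100)
scale = 3                                  # roughly 300 dpi if the screen is about 100 dpi
image = QImage(widget.size() * scale, QImage.Format_ARGB32)
image.setDevicePixelRatio(scale)           # so the widget paints in its normal logical coordinates
image.fill(0xFFFFFFFF)
painter = QPainter(image)
widget.render(painter)                     # have the widget repaint itself onto the image
painter.end()
image.setDotsPerMeterX(11811)              # 300 dpi is about 11811 dots per metre
image.setDotsPerMeterY(11811)
image.save("widget_300dpi.png")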
You cannot grab a screenshot at a higher resolution than that of your monitor. DPI has no real meaning on a computer display; some software converts pixels per point (ppp) to dots per inch (dpi) for paper publication.
Once you have made your screenshots, you can convert them to 300 dpi using software like Photoshop or an equivalent.
You can't have more pixels in your screenshot than your widget displays.
For a given widget size (say 900x900 px) you can have your image printed at 300 dpi, but it will only make a 3-inch square on your paper.
You can force your screen to behave as a 4K display with the command:
xrandr --output eDP1 --rate 40.01 --mode 1366x768 --fb 4096x3072 --panning 4096x3072
Remember to adjust the rate and mode fields to match your default xrandr configuration; you can see those with xrandr.
and then acquire the screenshot with
import -window root imagefile.png
