as/400: other way to display graphics?

I'm aware of the existence of DDS files, which allow programming of display graphics on the AS/400, but is there another way?
Specifically, what I want to do is manipulate the terminal buffer directly, to be able to display something other than just text.
For example, let's say that, in memory, there were a two-dimensional char array text[20][80] for the text menu, and below that a pixel buffer array of size [200][800].
Is there a way to access either of those arrays directly?
I would like to be able to create a displayable menu entirely in C, without the need for a display file, and also to display other kinds of graphics (images) directly in the pixel buffer.

Is there a way to access either of those arrays directly?
That's easy enough, though a "display file" that has no formatted fields will still be needed. The 'file' will be the connection between the program and the physical device (or the emulator). You can define a single large area that contains whatever "text" you want your program to put into it. This can even include display field attributes that delimit input areas.
For the most control, the DDS USRDFN keyword is appropriate. But for simple stuff like lists of menu items, almost any large text field can be output to.
Outputting simple text is easy. For fine-grained control like USRDFN formatting, a detailed understanding of the 5250 data stream is needed.
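As a rough sketch of how the simple case can look from ILE C record I/O (the library, file, and format names here are hypothetical, and the display file, with its one large output field, still has to be defined in DDS and compiled first):

```c
/* Sketch: put a whole "menu" into one large display-file field using
 * ILE C record I/O. MYLIB/MYDSPF and its record format SCREEN01 are
 * made-up names; the single big field is assumed to span full
 * 80-column rows, so offsets work out row-major. */
#include <stdio.h>
#include <string.h>
#include <recio.h>

int main(void)
{
    char screen[1920];                 /* must match the DDS field length */
    _RFILE *dspf = _Ropen("MYLIB/MYDSPF", "ar+");
    if (dspf == NULL) {
        printf("could not open display file\n");
        return 1;
    }

    memset(screen, ' ', sizeof(screen));
    memcpy(&screen[0],   "MAIN MENU",          9);   /* row 1 */
    memcpy(&screen[160], "1. Work with items", 18);  /* row 3 */
    memcpy(&screen[240], "2. Sign off",        11);  /* row 4 */

    _Rformat(dspf, "SCREEN01");             /* select the record format */
    _Rwrite(dspf, screen, sizeof(screen));  /* put the text on screen   */
    _Rreadn(dspf, screen, sizeof(screen),   /* wait for the user        */
            __DFT);
    _Rclose(dspf);
    return 0;
}
```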
One kind of alternative would be to use the User Interface Manager (UIM) APIs to update a PANEL's text area (:TEXT) via its USREXIT= application program. The UIM handles everything as far as any "display file" definition and actual I/O go. The UIM can be thought of as an HTML-like interface for 5250, and it uses a very similar markup language to define PANELs.
Another alternative is the Dynamic Screen Manager (DSM) APIs. These give much finer control than the UIM or DDS methods (though DDS USRDFN gets very close). But as with USRDFN, actual device control will require 5250 protocol knowledge.
...and also display other kinds of graphics (images) directly in the pixel buffer.
There is no "pixel buffer" for 5250 nor even 'pixels'. It's a character-based protocol, like telnet. If you're going for images or 'pixels', you're into browser interfaces, or perhaps Java and NAWT, or X-windows, etc.
Now, granted, with TCP/IP and sockets you can do essentially anything you're able to program, including downloading and installing third-party code libraries, within the network restrictions surrounding your server. But it is in fact a server, so GUI kinds of apps generally shouldn't run on it; that's the same as for almost all types of servers. Code the GUI on the client system rather than the server. But you can do it if you really want to.

I'm not sure why you'd want to do this...
Nowadays, it'd be much easier to simply generate your output as HTML and serve it up via the integrated Apache web server.
But if you really want to do graphics via 5250, it can be done...theoretically at least. In 20+ years on the platform, I've never seen it.
But way back when (1994?), IBM added support for the Graphical Data Display Manager (GDDM) and Presentation Graphics APIs into OS/400. "GDDM is a means of displaying, printing, or plotting pictures. Presentation Graphics routines are a means of displaying, printing, or plotting business charts."
The support is still in the OS. However, client-side support is NOT available in IBM i Access for Windows or in the most recently released client, IBM i Access Client Solutions (ACS). It appears that the standalone IBM Personal Communications product may support GDDM.
For complete control of the character buffer, take a look at the Dynamic Screen Manager (DSM) APIs. The DSM APIs are "a set of screen I/O interfaces that provide a dynamic way to create and manage screens for the Integrated Language Environment® (ILE) high-level languages. Because the DSM interfaces are bindable, they are accessible to ILE programs only."

There is a way to do it in ILE C/C++. This was fun to investigate, since I hadn't tried it myself.
The only documentation on it (page 183+) I could find is from V5R1, but you can cross-reference the functions it uses against the 7.3 manual (possibly page vii/7) to see whether they still work the same way.
Hope this helped!


Standard way of determining placement of window frame controls

The More General Question
I am wondering if there is a standard way that operating systems / desktop managers use to expose the user's preference regarding the placement of the window frame controls (Close, Maximize/Miniaturize, Minimize).
For platforms like Windows and macOS, it's "pretty" safe to assume that users want their window controls on the right and on the left respectively, to match the rest of the windows in the GUI. But the key word here is "assume". I hate to assume things when I code.
Furthermore, what about all the different Linux distributions and flavors?
I think this information could be useful to application developers in the same way that it's useful to know the user's preferences regarding dark or light themes.
My More Specific Question
Now, what I'm building currently is an Electron application that could really benefit from a custom title bar (a.k.a. a frameless window). And I do understand that my problem is caused by the fact that I want to bypass the window frame abstraction that is normally offered by the operating systems, but I'd really like to be able to position my custom controls in my title bar without having to guess.
But anyway, since I use Electron, I do have access to native features through Node.js. I'd also be curious to know whether browsers have, or are planning to implement, a way for the CSS or JavaScript running in the browser to determine the intended placement of the window controls, again similarly to prefers-color-scheme.

Does IBM z have display files (DSPF) like IBM i?

On IBM i/AS400 there are display files (DSPF), which are used to design and create screens.
A display file (DSPF) on the AS/400 is a file with definitions to format a screen, both to show and to receive data.
Are there similar files in z/OS?
Screens on z/OS are specific to a subsystem.
In ISPF you would use Dialog Tag Language and/or panel definition statements to create a screen (ISPF calls it a panel).
In CICS you would use Assembler macros to create a BMS map (the screen).
In IMS you would use Message Format Services to create a screen.
As indicated in @SteveIves' answer, there exist products to "paint" a screen.
All of the above are used to create 3270 screens; these days of course almost all 3270 devices are emulated. This is not the only way to create a user interface for a z/OS application. CICS, for example, understands http and it is relatively common to have a web interface to a CICS application.
There are no such files by default in z/OS. There are software products that have screen definitions, ISPF and Telon being two. ISPF (Interactive System Productivity Facility) is the 'default' UI under TSO, and you can create your own screens, but these are not DSPF files.
Telon is (I think) some sort of screen layout/definition utility used to create applications that run under IBM's CICS. These are also not DSPF files.
I won't duplicate @cshneid's or @steve_ives's answers, but I will provide some context. IIRC, Display Files are more than screen mapping: they include definitions for processing the data to be presented, so they are more akin to a fuller programming paradigm than just mapping.
I'm not aware of a z/OS feature that incorporates both mapping of data and processing in one place. For 3270 streams this is accomplished by the runtimes (CICS, IMS, etc.), generally through a combination of the mapping tools mentioned and a programming language like COBOL, C, or another that is compatible with the runtime.
The closest I think you come is Dialog Manager in the TSO runtime, which does have some data management capability, but it tends to be used for system-level work rather than user applications, which are generally relegated to CICS, IMS, WAS, ...

In Vulkan, how can you associate each individual video card with the monitors it's directly connected to?

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle. But is there any hint about which surface is optimal to associate with a device?
From what I can tell the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine or Nvidia GPU. It seems to be intended for embedded platforms only. However it lets you list attached displays for each device which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a workaround to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the LUID in the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of the window, which lets you associate a Vulkan surface with a DirectX monitor.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
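For anyone trying the same thing, here is a sketch of the LUID-matching step. It assumes a Vulkan 1.1 instance (where VkPhysicalDeviceIDProperties and vkGetPhysicalDeviceProperties2 are core; on 1.0 use the KHR equivalents from the extensions above), is plain C built against the Windows SDK with COBJMACROS for the COM helper macros, and links with dxgi.lib:

```c
#define COBJMACROS
#include <windows.h>
#include <dxgi.h>
#include <vulkan/vulkan.h>
#include <string.h>

/* Return the DXGI adapter whose LUID matches the Vulkan physical
 * device, or NULL. The caller must Release() the returned adapter. */
IDXGIAdapter *find_matching_adapter(VkPhysicalDevice phys, IDXGIFactory *factory)
{
    VkPhysicalDeviceIDProperties id_props = {0};
    VkPhysicalDeviceProperties2 props2 = {0};
    IDXGIAdapter *adapter = NULL;
    UINT i;

    id_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &id_props;
    vkGetPhysicalDeviceProperties2(phys, &props2);

    if (!id_props.deviceLUIDValid)
        return NULL;  /* the driver gave us no LUID to compare against */

    for (i = 0; IDXGIFactory_EnumAdapters(factory, i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc;
        IDXGIAdapter_GetDesc(adapter, &desc);
        /* deviceLUID is VK_LUID_SIZE (8) bytes with the same layout
         * as the Win32 LUID, so a byte compare is enough. */
        if (memcmp(&desc.AdapterLuid, id_props.deviceLUID, VK_LUID_SIZE) == 0)
            return adapter;
        IDXGIAdapter_Release(adapter);
    }
    return NULL;
}
```

IDXGIAdapter::EnumOutputs on the matching adapter then lists the monitors attached to that GPU, and each DXGI_OUTPUT_DESC carries the HMONITOR you can compare against MonitorFromWindow() on the HWND you got from glfwGetWin32Window().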
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross-platform) if Windows supported the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, and I'm afraid you need to use OS-specific functions (you need to rely on the WinAPI in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, using the Windows API. It can be straightforward.
BUT if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
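A minimal sketch of the GLFW route, assuming glfwInit() has already succeeded and instance is an existing VkInstance created with the extensions reported by glfwGetRequiredInstanceExtensions():

```c
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#include <stdio.h>

/* Open one fullscreen window per monitor and create a Vulkan surface
 * for each; surfaces[] must have room for at least max_surfaces. */
void create_surface_per_monitor(VkInstance instance,
                                VkSurfaceKHR *surfaces, int max_surfaces)
{
    int count = 0, i;
    GLFWmonitor **monitors = glfwGetMonitors(&count);

    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);  /* Vulkan: no GL context */

    for (i = 0; i < count && i < max_surfaces; ++i) {
        const GLFWvidmode *mode = glfwGetVideoMode(monitors[i]);
        GLFWwindow *window;

        printf("monitor %d: %s (%dx%d)\n", i,
               glfwGetMonitorName(monitors[i]), mode->width, mode->height);

        /* Passing the monitor makes the window fullscreen on it. */
        window = glfwCreateWindow(mode->width, mode->height, "view",
                                  monitors[i], NULL);
        if (glfwCreateWindowSurface(instance, window, NULL, &surfaces[i]) != VK_SUCCESS)
            surfaces[i] = VK_NULL_HANDLE;
    }
}
```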

Google Chrome over Linux FrameBuffer

I am working on a project where I need to run Google Chromium over the Linux framebuffer, without any windowing-system dependency (it should draw into the buffer we provide, which will make porting it to any embedded system very easy). I do not need its multi-tab GUI; I just need its renderer window in the buffer. Has anybody ever tried this? Any help on what approach I should use?
If you need to have some direct control of the window functions, or want to poke around in the DOM data, then the right way to solve this problem is to probably look at embedding webkit directly. This will be much faster and cleaner than what I am about to suggest.
Now, let's suppose you don't need all that fancy control and that you are really lazy. An ancient, low-tech solution to your problem could be to create a virtual frame buffer and then read its contents directly. To do this, you can set up Xvfb on your server:
http://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml
Xvfb is an old Unix tool that lets you create a virtual X server with whatever type of configuration you want. More importantly, it can be configured to write the contents of its screen directly to a memory-mapped file! You can also set it up to use shared memory, which is a bit faster, though also more complicated.
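As a rough illustration (the display number, screen geometry, and paths are assumptions for this example): start the server with Xvfb :99 -screen 0 1024x768x24 -fbdir /tmp, and it keeps the screen contents in /tmp/Xvfb_screen0, an xwd-format dump you can map and poll:

```c
/* Sketch: map an Xvfb screen dump and locate the pixel data.
 * Assumes Xvfb was started as:  Xvfb :99 -screen 0 1024x768x24 -fbdir /tmp */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <X11/XWDFile.h>   /* XWDFileHeader, XWDColor (X11 dev headers) */

int main(void)
{
    struct stat st;
    int fd = open("/tmp/Xvfb_screen0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* The header is in the server's byte order; swap the fields if you
     * read the file from a machine with different endianness. */
    const XWDFileHeader *hdr = (const XWDFileHeader *)map;
    const unsigned char *pixels = (const unsigned char *)map
        + hdr->header_size                  /* fixed header + window name */
        + hdr->ncolors * sizeof(XWDColor);  /* colormap entries           */

    printf("screen: %ux%u at %u bpp\n",
           (unsigned)hdr->pixmap_width, (unsigned)hdr->pixmap_height,
           (unsigned)hdr->bits_per_pixel);

    /* ...hand `pixels` to whatever consumes the frames... */
    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```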
I guess you will have better luck with uzbl and GTK/DirectFB. Same engine, and it works with JavaScript. For the Facebook chat issue, I think you just have to change the user-agent string.
There is the Origyn Web Browser, which is supposed to be an embedded WebKit-based browser that looks portable and does not depend on "heavy" libraries (like GTK). Their web page is http://www.sand-labs.org/owb, but it looks like their database crashed, which is maybe a little worrying.
Try porting the WebKit engine to NetSurf's framebuffer-based code.
HTH
You could buy one of the remaining 10 (or so) OGD1 boards.
http://en.wikipedia.org/wiki/Open_Graphics_Project
Then you can talk directly to hardware using libpci.
However you will still need code that draws a picture into a memory buffer.
I realize this answer is more a shameless plug.
But people who are interested in your question might want such a board.
I already have a board like this and it would help a lot if it got more exposure.
This project:
http://code.google.com/p/wkhtmltopdf/
achieves that. It runs WebKit on a virtual display and captures the rendered output in the form of a PDF. You can customize that to do something else.
Or you can create a display with TightVNC, and set the DISPLAY variable so that Chrome renders into that display.
I suggest using the webkit2pdf package (which is available for many different Linux distributions). Then use fbgs, a wrapper for the fbi frame buffer program, to display PDF files right on the frame buffer.

Embeddable customizable graph editor (Java, Flash, HTML+Javascript)

I have an application that uses an intricate graph-like structure as its configuration. The application itself resembles a NetGraph- or netfilter-style firewall: graph nodes have types and properties (which correspond to operations), and they're interconnected with directed edges.
I'd like to have an easy-to-use configuration editor for my application that provides visualization and editing of the configuration as a graph.
In my dream scenario, the application would receive this configuration as a file in one of the popular graph formats (for example TGF, DOT, GraphML, etc.), parse it, and use it.
A few requirements (not really strict, I'm open to consider various options) - graph editor should be:
available to be embedded in web UI - i.e. implemented in Javascript/HTML, Flash or as a Java applet
able to load a TGF-style graph (i.e. without layout instructions; nodes would have no coordinates) and lay it out automatically in a somewhat decent way
able to save this graph back
able to load/save using requests to HTTP server, not a file directly
customizable to work with a strict set of node types (so that the user can't just create an arbitrary node type or arbitrary properties for a given node)
open-source
So far I've found yEd and its Flash version, Graphity. Both look cool, but they aren't customizable (they can't be stripped down to bare-bones functionality, i.e. creating only a few node types) and they're not open source, so embedding them anywhere promises to be somewhat painful.
Another option I'm considering is scrapping the whole "visual editor" idea and making the user write bare TGF or DOT-style definitions in a plain text file, then visualizing them for later checking with something like GraphViz. Is that a viable way to go?
Have you looked at InfoVis? In particular, the force-directed layout and editing may be applicable. The graph source data is analogous to DOT, albeit in JSON format. No layout info in the source data, though.
EDIT: There's also ProtoVis which is similar.
hth.
