I understand that in LiveCode you may build a database-type application a) as a collection of cards, as in HyperCard, or b) with an SQL database engine like SQLite. In HyperCard (a) there was no need to save data entered into data fields. In LiveCode I need to use File/Save to save data while in the development environment. How do I save data in a standalone application that is card-based? Is this possible at all?
Yes, you can use the "save" command to save the state of your stack in code. To do this in a built standalone, however, you need to do a little bit of wrangling by launching the stack with a launcher, as detailed in this lesson:
http://lessons.runrev.com/s/lessons/m/4071/l/17375-how-do-i-save-custom-properties-in-a-standalone-application
The executable itself is never saved. The usual way LiveCode standalones manage this is to create a "splash" stack, which may or may not have a use, may or may not be visible, and may or may not contain useful data; but it is the part that becomes the executable.
As many other stacks, substacks and other resources as required are then attached to that stack file, and all of these can be saved. The reason the executable is called a "splash" is that it might appear as an intro window at startup, only to be dismissed so the real work can get done.
Craig Newman
I am required to write a very efficient application that will mirror the contents of an arbitrary external application many times onto an area of my window, on Linux. On Windows, the way I used to do it was with the help of DwmRegisterThumbnail, which tells the compositor (Desktop Window Manager) that I want it to draw the thumbnail of that foreign window, which it generates anyway, onto a rectangle in my own window when it composes the desktop image displayed to the user on the monitor. This is, I think, one of the lowest-overhead ways to achieve my goal on Windows. The goal is to have minimal impact on the CPU, as the app will run on a fairly constrained machine. I never tested it against GDI or DirectX methods of copying the screen data, but I do not believe they would be faster; do correct me if I am wrong. Is there any other, faster method on Windows? A limitation of this method is that you cannot touch the actual image data, so no drawing on top of it, for example, which is fine for my goal.
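For reference, a minimal sketch of the DWM thumbnail registration described above might look like the following. Everything here is illustrative: hwndDest (our window), hwndSrc (the foreign window), the destination rectangle and the helper name are placeholders, and error handling is pared down to the essentials.

    /* Ask the compositor to keep drawing hwndSrc's live thumbnail
     * into a rectangle of hwndDest. Link with dwmapi.lib. */
    #include <windows.h>
    #include <dwmapi.h>

    HTHUMBNAIL mirror_window(HWND hwndDest, HWND hwndSrc, RECT dest)
    {
        HTHUMBNAIL thumb = NULL;
        if (FAILED(DwmRegisterThumbnail(hwndDest, hwndSrc, &thumb)))
            return NULL;

        DWM_THUMBNAIL_PROPERTIES props;
        ZeroMemory(&props, sizeof(props));
        props.dwFlags       = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
        props.rcDestination = dest;   /* target rectangle inside hwndDest */
        props.fVisible      = TRUE;

        DwmUpdateThumbnailProperties(thumb, &props);
        return thumb;   /* call DwmUnregisterThumbnail(thumb) when finished */
    }

Mirroring the same source many times just means repeating the registration with different destination rectangles; the compositor does all the blitting.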
Now, my question is: what would be the best approach to achieve this on Linux? I have full liberty in choosing an appropriate X server and display manager if needed, and I can also write whatever software is necessary to make it as low-overhead as it is on Windows. Is there a similar API to the one on Windows for some Linux compositor, like Mutter or KWin, that works well? Or should I hook into X and copy image data from it? Would that eat a lot of CPU?
What's your experience and opinion? How should I take on this?
Thank you very much.
I'm aware of the existence of DDS files, which allow programming of display graphics on the AS/400, but is there another way?
Specifically, what I want to do is manipulate the terminal buffer directly so I can display something other than just text.
For example, suppose the terminal is showing a text-based menu.
Let's say that, in memory, there would be a two-dimensional char array text[20][80] for the text menu and, below that, a pixel buffer array of size [200][800].
Is there a way to access either of those arrays directly?
I would like to be able to create a displayable menu entirely in C, without the need for a display file, and also to display other kinds of graphics (images) directly in the pixel buffer.
Is there a way to access either of those arrays directly?
That's easy enough, though a "display file" that has no formatted fields will still be needed. The 'file' will be the connection between the program and the physical device (or the emulator). You can define a single large area that contains whatever "text" you want your program to put into it. This can even include display field attributes that delimit input areas.
For the most control, the DDS USRDFN keyword is appropriate. But for simple stuff like lists of menu items, almost any large text field can be output to.
Outputting simple text is easy. For detailed stuff like USRDFN formatting, detailed understanding of the 5250 protocol is needed.
One alternative would be to use the User Interface Manager (UIM) APIs to update a PANEL's "text area" (:TEXT) via its USREXIT= application program. The UIM handles everything as far as the "display file" definition and the actual I/O go. The UIM can be thought of as an HTML interface for 5250 and uses a very similar markup language to define PANELs.
Another alternative is the Dynamic Screen Manager (DSM) APIs. These give much finer control than the UIM or DDS methods (though DDS USRDFN gets very close). But as with USRDFN, actual device control will require 5250 protocol knowledge.
...and also to display other kinds of graphics (images) directly in the pixel buffer.
There is no "pixel buffer" for 5250 nor even 'pixels'. It's a character-based protocol, like telnet. If you're going for images or 'pixels', you're into browser interfaces, or perhaps Java and NAWT, or X-windows, etc.
Now, granted that with TCP/IP and sockets, you can do essentially anything that you're able to program. Whatever you can figure out how to do, including downloading/installing 3rd-party code libraries, you can do -- within the network restrictions surrounding your server. But it is in fact a server, so GUI kinds of apps generally shouldn't run on it. That's the same as for almost all types of servers. Code the GUI on the client system rather than the server. But you can do it if you really want to.
I'm not sure why you'd want to do this...
Nowadays, it would be much easier to simply generate your output as HTML and serve it up via the integrated Apache web server.
But if you really want to do graphics via 5250, it can be done...theoretically at least. In 20+ years on the platform, I've never seen it.
But way back when (1994?), IBM added support for Graphical Data Display Manager (GDDM) and Presentation Graphics APIs into OS/400. "GDDM is a means of displaying, printing, or plotting pictures. Presentation Graphics routines are a means of displaying, printing, or plotting business charts."
The support is still in the OS. However, client side support is NOT available in IBM i Access for Windows or the most recently released client, IBM Access Client Solutions (ACS). It appears that the standalone IBM Personal Communications product may support GDDM.
For complete control of the character buffer, take a look at the Dynamic Screen Manager (DSM) APIs. The DSM APIs are "a set of screen I/O interfaces that provide a dynamic way to create and manage screens for the Integrated Language Environment® (ILE) high-level languages. Because the DSM interfaces are bindable, they are accessible to ILE programs only."
There is a way to do it in ILE C/C++. This was very fun to investigate, since I hadn't tried it myself before.
The only documentation on it I could find (page 183+) is from the 5.1 release, but you can cross-reference the functions used against the 7.3 manual (possibly page vii/7) to see whether they are still used the same way.
Hope this helped!
I'm using Tampermonkey to collect data from pages I visit and store it locally and persistently across browser sessions. This is working fine. However, I'm limited to using the script on the same computer. I'd like to be able to use the same script on another computer and update the same collected data file.
I'd like to use the simplest method possible to read, edit, and store a single text file from multiple computers.
An actual functioning "Hello world!" script would be fantastic. I've mucked around with the pastebin API, but all the help applies to PHP code and there seems to be a lot of somewhat confusing overhead. I don't need to examine the contents of the pasted data in a useful editor. The data is never to be interpreted as code or HTML. I don't need an SQL database. This is just a project for fun, so I don't need to worry about privacy issues or elegant, modular code.
I just need a place to stash some bytes, and change them frequently.
What's the simplest solution?
I am working on a project where I need to run Google Chromium over the Linux framebuffer. I need to run it without any windowing-system dependency (it should draw into a buffer we provide, which will make porting it to any embedded system very easy). I do not need its multi-tab GUI; I just need its renderer window in the buffer. Has anybody ever tried this? Any advice on what approach I should use?
If you need to have some direct control of the window functions, or want to poke around in the DOM data, then the right way to solve this problem is probably to look at embedding WebKit directly, as sketched below. This will be much faster and cleaner than what I am about to suggest.
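As one concrete illustration of embedding WebKit from C, here is a minimal WebKitGTK sketch. The URL is just a placeholder, the pkg-config module name (webkit2gtk-4.0) varies between distributions, and note that this particular port still pulls in GTK and an X/Wayland display, so a truly framebuffer-only target would need WebKit built against a different backend.

    /* Build with something like: gcc embed.c $(pkg-config --cflags --libs webkit2gtk-4.0) */
    #include <gtk/gtk.h>
    #include <webkit2/webkit2.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_default_size(GTK_WINDOW(window), 800, 600);

        /* The web view is an ordinary GTK widget; pack it wherever you like. */
        GtkWidget *view = webkit_web_view_new();
        gtk_container_add(GTK_CONTAINER(window), view);
        webkit_web_view_load_uri(WEBKIT_WEB_VIEW(view), "https://example.org/");

        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }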
Now, let's suppose you don't need all that fancy control and that you are really lazy. An ancient, low-tech solution to your problem could be to create a virtual framebuffer and then read its contents directly. To do this, you can set up Xvfb on your server:
http://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml
Xvfb is an old Unix tool that lets you create a virtual X server with whatever configuration you want. More importantly, it can be configured to write the contents of its screen directly to a memory-mapped file! You can also set it up to use shared memory, which is a bit faster, though also more complicated.
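As a rough illustration of the memory-mapped-file route, suppose Xvfb was started with something like Xvfb :1 -screen 0 800x600x24 -fbdir /tmp, so the mapped file is /tmp/Xvfb_screen0 (both the geometry and the path are assumptions). The file is in xwd format: an XWD header, any colormap entries, then the raw pixels. A sketch of reading it from C:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <X11/Xmd.h>
    #include <X11/XWDFile.h>

    int main(void)
    {
        const char *path = "/tmp/Xvfb_screen0";   /* assumes -fbdir /tmp, screen 0 */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);
        unsigned char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* Header values are in the X server's byte order; read on the same
         * machine they can normally be used directly, otherwise swap them. */
        XWDFileHeader *hdr = (XWDFileHeader *)map;
        unsigned char *pixels = map + hdr->header_size
                                    + hdr->ncolors * 12;  /* 12 bytes per XWD colormap entry */

        printf("screen: %lux%lu, %lu bpp, %lu bytes/line\n",
               (unsigned long)hdr->pixmap_width, (unsigned long)hdr->pixmap_height,
               (unsigned long)hdr->bits_per_pixel, (unsigned long)hdr->bytes_per_line);

        (void)pixels;   /* scan or copy the pixel data from here */
        munmap(map, st.st_size);
        close(fd);
        return 0;
    }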
I guess you will have better luck with uzbl and GTK/DirectFB. It's the same engine, and it works with JavaScript. For the Facebook chat issue, I think you just have to change the user-agent string.
There is the Origyn Web Browser, which is supposed to be an embedded WebKit-based browser that looks portable and does not depend on "heavy" libraries (like GTK). Their web page is http://www.sand-labs.org/owb, but it looks like their database crashed, which is maybe a little worrying.
Try porting the WebKit engine to NetSurf's framebuffer-based code.
HTH
You could buy one of the remaining 10 (or so) OGD1 boards.
http://en.wikipedia.org/wiki/Open_Graphics_Project
Then you can talk directly to hardware using libpci.
However you will still need code that draws a picture into a memory buffer.
I realize this answer is more of a shameless plug.
But people who are interested in your question might want such a board.
I already have a board like this and it would help a lot if it got more exposure.
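To give a flavor of the libpci route mentioned above, here is a small sketch that enumerates PCI devices to locate such a board. The vendor/device IDs are placeholders, and mapping and programming the card's BARs is where the real work would begin.

    /* Build with: gcc scan.c -lpci */
    #include <stdio.h>
    #include <pci/pci.h>

    int main(void)
    {
        struct pci_access *pacc = pci_alloc();   /* libpci context */
        pci_init(pacc);
        pci_scan_bus(pacc);

        for (struct pci_dev *dev = pacc->devices; dev; dev = dev->next) {
            pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_BASES);

            /* Placeholder IDs: substitute the board's real vendor/device ID. */
            if (dev->vendor_id == 0x1234 && dev->device_id == 0x5678) {
                printf("found board at %02x:%02x.%d, BAR0 = 0x%llx\n",
                       dev->bus, dev->dev, dev->func,
                       (unsigned long long)dev->base_addr[0]);
            }
        }

        pci_cleanup(pacc);
        return 0;
    }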
This project:
http://code.google.com/p/wkhtmltopdf/
achieves that. It runs WebKit on a virtual display and captures the rendered output in the form of a PDF. You can customize it to do something else.
Or you can create a display with TightVNC and set the DISPLAY variable so that Chrome renders into that display.
I suggest using the webkit2pdf package (which is available for many different Linux distributions). Then use fbgs, a wrapper around the fbi framebuffer image viewer, which displays PDF files right on the framebuffer.
I have an OLD server running DG/UX that will be unsupported in the near future. I have some character-based Oracle Forms that need to be migrated off of this machine. Does anyone know what sort of migration strategy Oracle has for upgrading these character-based forms and reports? It doesn't have to be the newest version, and it doesn't even have to be a GUI version, but I do need to migrate to a supported OS such as Linux.
The easy answer is to tell you to check out Migration from 6i to 10g.
Having done it before, I think the much more useful answer is to tell you to rewrite those forms and reports from scratch - probably in another tool, especially if you want a web interface, etc., rather than being hobbled by an ancient Java runtime.
There are products out there that will let you translate the old forms code into PL/SQL. Kumaran is an example of one, but I found it buggy and had to do a lot of hand editing of the code to get it to work the same as the original.
As far as I'm concerned, the CUI is dead so you might as well go all the way to a GUI. The last time I was looking at it, there was almost no documentation for CUI forms and frequently things that worked in the GUI wouldn't work in the CUI at all.
There are some problems you may run into in converting CUI based forms applications to GUI.
Sometimes there is validation or special processing done when the user moves to the next or previous field/block/etc. When you switch over to a proper GUI, your user can skip those events by just clicking on another field. So you are left with two choices: #1 audit all of the forms, or #2 disable mouse navigation in the forms.
Option #1 is less work than redeveloping, but look at how much work we've already put into it.
With option #2, your users will HATE you and come after you with pitchforks and torches. They will perceive that they've got nothing of value for all the work you put in. Then you will end up doing option #1 anyway.
Sometimes a UI that works fine in a CUI (or is required by its limitations) is just plain wrong and breaks the UI metaphor users are used to in the rest of the GUI (e.g., a pop-up window with a list you have to select an entry from, rather than a pull-down where you can just pick the right value directly).
When converted to a GUI, the CUI forms may end up with different fonts, text sizes and other formatting defaults than a freshly written form (they did for me). So now either the whole set of forms has to be updated to follow Oracle's new default theme for Forms/Reports, or every new form/report has to be reverted back to the old clunky style you had before - otherwise it will stick out like a sore thumb (and your users will want them all to be like the pretty one now).
Not the answer you wanted, huh? But you can use this as an excuse to get off the Forms/Reports upgrade treadmill and maybe even clean up some of the hacks that have had to happen over the years.