GUI/TUI Linux library

Is there any UI library that can be used to build both a text user interface (ncurses) and a graphical user interface (GTK? Qt?) from the same source?
I know that debconf can be used with various frontends; I would like to build something similar but programmable.

The library that gives YaST its UI independence (ncurses, GTK and Qt from one codebase), libyui, provides what you are looking for, and it is not tied to YaST itself.
Actually libyui only requires the standard C++ library and pthreads (IIRC). The UI plugins of course require the respective libraries (Qt, ncurses). YaST uses libyui via a set of YCP bindings that export a YCP-like API on top of libyui.
The library is a bit low-level (one layer below an event loop); my colleague Klaus Kämpf wrote about using it some time ago in his blog, including binding it to scripting languages with SWIG.
The only part that is SUSE specific is the packaging, so you would need to package it yourself. Stack Overflow did not allow me to post more than one link; the code of the library is linked from Klaus's blog. Replace "libyui" with "qt" or "ncurses" for the plugins' code.
Also google for "YaST Independence From YCP" to find a blog entry by Andreas Jäger on the subject.
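For a feel of the API, here is a minimal "hello world" sketch in C++ against libyui, written from memory of its example code; header paths and factory calls may differ slightly between versions, so treat it as an illustration rather than a verified build:

    // One dialog, one label, one button. The same binary renders with the
    // Qt, GTK or ncurses plugin, whichever one libyui loads at runtime.
    #include <yui/YUI.h>
    #include <yui/YWidgetFactory.h>
    #include <yui/YDialog.h>
    #include <yui/YLayoutBox.h>

    int main()
    {
        YDialog    *dialog = YUI::widgetFactory()->createPopupDialog();
        YLayoutBox *vbox   = YUI::widgetFactory()->createVBox(dialog);
        YUI::widgetFactory()->createLabel(vbox, "Hello, World!");
        YUI::widgetFactory()->createPushButton(vbox, "&OK");
        dialog->waitForEvent();   // low-level: one blocking wait, no event loop
        dialog->destroy();
        return 0;
    }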

You could write your program to use ncurses, and then use PDCurses to turn it into an X11 application, as its README advertises.
I know this because I've used PDCurses as a portable curses implementation, though I've never tested its X11 capabilities.
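The portable part really is just plain curses calls. A minimal sketch like the following should build unchanged against ncurses for the terminal, or against PDCurses' X11 port for a windowed version (assuming that port still ships as the XCurses library; only the link line changes, e.g. -lncurses versus -lXCurses):

    #include <curses.h>   // ncurses or PDCurses: same API

    int main()
    {
        initscr();                      // enter curses mode
        printw("Hello from curses!");   // a terminal cell or an X11 window
        refresh();
        getch();                        // wait for a key press
        endwin();                       // restore the terminal
        return 0;
    }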

Not exactly a library, but you could consider writing a web app that degrades well to Lynx.

The GoboLinux guys have created their own toolkit for Python called AbsTK; they use it for their installer, which actually works really well. I have never used the toolkit myself, but the apps built with it seem solid.

There's Cursed GTK, but it seems a bit dated. I found some references to a port of Qt to ncurses called Qt Console, but it seems to have disappeared.

By using a library that targets both text-mode and GUI environments, you run a big risk of getting stuck with the worst of both worlds.
You will be better off structuring your code using the MVC pattern, and providing separate views and controllers for each platform you target. Pushing all the logic down to the model classes has several other benefits (see the sketch after this list):
- The code will be easier to test, because you are forced to keep the user interface out of the actual domain logic.
- Your program can have user interfaces that have very little in common, e.g. a web UI or a UI driven by speech.
- You can run the program easily with no UI at all (i.e. script it) by accessing the model classes directly, in the same way the controller classes do.
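As a minimal sketch of what that separation buys you (all class and function names here are made up for illustration):

    #include <iostream>

    // Model: pure domain logic, no UI includes anywhere.
    class Counter {
    public:
        void increment()    { ++value_; }
        int  value() const  { return value_; }
    private:
        int value_ = 0;
    };

    // One view/controller pair per front end; the ncurses or GTK code would
    // live in functions like this one, never inside the model.
    void runConsoleView(Counter &model)
    {
        model.increment();
        std::cout << "count = " << model.value() << "\n";
    }

    int main()
    {
        Counter model;
        runConsoleView(model);   // a test or script can drive the model directly
        return 0;
    }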

I think what's used for configuring the Linux kernel at compile time is dialog/cdialog/Xdialog. But it's been a while since I've compiled a kernel, so my memory may be off. The most promising link I can find is this one for Xdialog.

Maybe Tcl/Tk would provide what you want: http://www.tcl.tk/
Here's the page on interfacing with curses; it claims integration with ncurses:
http://www2.tcl.tk/2372

Related

Can you embed GraalVM application in a browser?

GraalVM has so many surprising capabilities. But one thing I haven't seen, and would like to, is running a GraalVM application in a browser. Sources like this (Top 10 Things To Do With GraalVM) show interop with Node.js, but not running a compiled application in the browser.
Is this possible? If so, is there documentation on it? Thanks!
Well, it looks like this may be possible using WebAssembly. From GraalVM lead Thomas Wuerthinger: https://twitter.com/thomaswue/status/943592646915878912?lang=en
Webassembly is useful for statically typed languages (as LLVM backend). I am not aware of any Ruby, R, or Python implementation successfully targeting Webassembly. Graal VM will be able to run via Webassembly in the browser. It also has a "native" mode with standalone binaries.
So if you're coding in something like Clojure or Python and planning to compile to WebAssembly via GraalVM, you would likely run up against the same restrictions WebAssembly has, such as the browser sandbox and only being able to access web APIs. It will be interesting to see whether those boundaries can be communicated through error messages or other compile-time checks.
It would be very interesting to see a browser that embeds GraalVM and can run its engine for languages, even if only for JavaScript initially.
Currently, there are no such browsers, as far as I know. Maybe an interesting first step would be to take Electron and try replacing the version of Node.js it uses with the one from GraalVM. That's not trivial, since Electron introduces some changes to stock Node.js, and GraalVM introduces changes of its own when it replaces the JavaScript engine with its own implementation.
However, it definitely should be possible to achieve.

How does cycript / substrate work to hook into a process?

I am currently doing some research on techniques about hooking mobile applications and came across some frameworks like Xposed (Android), Frida (Android and iOS) and Cycript (iOS).
The documentation for Xposed and Frida is fairly good at explaining how exactly they do it. Xposed states that it manipulates the binary that starts the Zygote process, loading an additional JAR file that assists in hooking methods. Frida's documentation explains that it uses ptrace (in Linux environments) to attach to a process, then allocates and populates a bootstrapper that starts a thread to load a .so file containing the Frida agent; that's it in a nutshell, if I understood it correctly.
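To make the ptrace step concrete, here is a rough C++ sketch of just the attach/detach part on Linux; this is not Frida's actual code, and the real work (writing a bootstrapper into the target and starting an agent thread) is omitted:

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            std::fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t) std::atoi(argv[1]);

        // Attach: the target is stopped and becomes traceable by us.
        if (ptrace(PTRACE_ATTACH, pid, nullptr, nullptr) == -1) {
            std::perror("PTRACE_ATTACH");
            return 1;
        }
        waitpid(pid, nullptr, 0);   // wait until the target has actually stopped

        // ... an injector would now copy in a bootstrapper and start a
        // thread that dlopen()s the agent .so inside the target ...

        ptrace(PTRACE_DETACH, pid, nullptr, nullptr);
        return 0;
    }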
I couldn't find useful documentation about the strategy that Cycript pursues. I know that it is built on top of Cydia Substrate, which does the actual hooking, but I couldn't find details about how exactly Substrate accomplishes this either.
I further understand that on iOS the Objective-C runtime enables this kind of manipulation, as it is runtime-oriented.
Does anybody know how exactly Cycript / Cydia Substrate works to hook/inject into applications?
Thanks in advance.
I figured out that it apparently works by adding DYLD_INSERT_LIBRARIES to the program's launchd manifest, so that every time the application is started, the dynamic loader pulls in the payload library.
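In other words, once the loader inserts the library, any initializer in it runs inside the target process. A hypothetical payload skeleton (the actual hooking logic is omitted) could look like this; the same pattern works with LD_PRELOAD on Linux:

    #include <cstdio>

    // Runs automatically when the dynamic loader maps this library into the
    // target process via DYLD_INSERT_LIBRARIES (or LD_PRELOAD).
    __attribute__((constructor))
    static void payload_entry()
    {
        std::fprintf(stderr, "payload loaded into target process\n");
        // hooks would be installed here
    }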
Still, are there other techniques for performing runtime hooking/manipulation on Android and iOS?

Automated test tools for Linux/ncurses

I've picked up a legacy application developed in C/C++ on Linux, using ncurses for UI. What automated testing tools are there for this environment?
Edit: I've used AutomatedQA TestComplete in the past, and this is the type of tool I'm looking for - except running on Linux, and with the ability to test Text UI apps.
I wrote something like that before. There's not much documentation, but you can try the code. It's written in Python and runs on Linux.
You would basically need the ANSIterm filter and the expect module, which you compose into a filter. You'll likely have to start the process with the proctools module. They are all designed to work together or separately (the design is modular).
I have considered using Rational Function Tester and TestComplete.
RFT has explicit support for testing this type of application (text-mode linux) via built-in terminal emulation.
TestComplete does not support testing Linux apps directly, but can be made to work by "testing" a COM-enabled terminal emulation program (Attachmate Reflection at this stage), and using COM from the test scripts to do screen scraping.
Have also considered using Reflection as the terminal emulator and rolling my own test framework in C# and NUnit.
Edit: "Final" solution is using Terminator (a Java terminal emulator), extending it with an RMI interface and using TestNG...
The expect tool sounds like what you need: http://linux.die.net/man/1/expect
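If you end up rolling your own harness instead, the core trick expect relies on, running the program under a pseudo-terminal so it believes it has a real TTY, is available directly. A bare-bones C++ sketch on Linux (glibc's forkpty; link with -lutil; "ls" is just a stand-in for the curses app under test):

    #include <pty.h>       // forkpty
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int master;
        pid_t pid = forkpty(&master, nullptr, nullptr, nullptr);
        if (pid == 0) {
            // Child: run the program under test on the slave side of the pty.
            execlp("ls", "ls", (char *) nullptr);
            _exit(127);
        }
        // Parent: read the program's screen output; a real harness would parse
        // the curses escape sequences and assert on the rendered screen.
        char buf[4096];
        ssize_t n;
        while ((n = read(master, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t) n, stdout);
        return 0;
    }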
Have a look at the free version of TETware from the Open Group. It is a full test harness based on Tcl.

What’s the best way to distribute a binary application for Linux?

I just finished porting an application from Windows to Linux.
I have to create an installer for the application.
The application is not open source, so I need to distribute the application's binaries (the executable, a couple of .so files, help files and images).
I found several methods to do it:
- RPM and DEB packages;
- installer in .sh files;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) there, although that question was more about binary compatibility (which you do need to worry about).
For the installer I would recommend Autopackage (we successfully released several versions of our software with it); they have already done the "installer.sh" part, and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so I am reposting the full answer here. Ignore all references to Autopackage; it was merged into Listaller, and I am not sure whether the relevant parts survived.
For standard libraries (like crypto++, pthreads, etc.) that are likely to be available in a distribution, link dynamically and tell users to get them from their distro repository, or link statically if that is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure not to interfere with the package systems of all the distros you support (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the .so files that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths (you have to set some linker flag for $ORIGIN to work, I think). Make sure to set rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to have a standard package of a different version installed, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) is a sure way to die early).
If you compile with any rpath at all, you can change it later with chrpath, making it possible to tweak the install location as part of post-processing or an install script.
Maintain binary compatibility. glibc is pretty much static for your users, so you should link against a sufficiently old version; 2.3 is a safe bet. You can use apbuild (a gcc wrapper that enforces a glibc version and does a few other binary compatibility tricks) so you don't have to compile all your apps on a really old distro.
If you link anything statically, it generally has to be rebuilt with apbuild too, otherwise it is bound to drag in newer glibc symbols. All the .so files you install privately naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since older glibc has no function for that. Still not sure if I broke anything :))
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro; xdg implementations are riddled with bugs and quirks.
For the actual install you can provide a variety of native packages (rpm, deb and a few more), roll out your own installer, or find an installer that works on all distros while bypassing native package managers. We successfully used Autopackage (from the same people who made apbuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL.
If you are going to statically link this program, or dynamically link only against libraries that you distribute in the package, then it doesn't much matter how you distribute it. The simplest way is a tar.gz, and that would work.
OTOH, if it is dynamically linked against system libraries, and particularly if it depends on dynamic libraries that are shared with the client's other applications, then you really need to produce RPM packages, DEB packages, or both.
You may want to try out InstallBuilder. It is cross-platform (it runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPM and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for mISVs and solo developers; just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main='bin/myprog'
                  id="sha1new=THEDIGEST"
                  version='1.0'>
    <archive href='http://mysite/myprogram-1.0.tar.bz2'
             size='10000'/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your website at the address in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, Arch Linux, etc.) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
I'll mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea about its status, though.
Although many people are proposing that you go with a tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users, and a tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage, IMHO, has what it takes (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance to be part of the momentum in its favour just because it's not popular. If it works for you, and it works for your users, nothing else matters.
There is no best way (universally speaking).
tar.gz the binaries; that should work.
Today I would also look at Snapcraft and Flatpak, which are embraced by some popular distributions. I explored other options, and they are what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, marks executable, and runs directly; no installation is necessary and there are no dependencies to install (since the AppImage typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
- The image may be large, since it probably includes all the files/libraries the app depends on.
- As the image creator, you're responsible for security updates to any of the libraries you add to your image.
- An AppImage is great for a user-run application that's pretty isolated from everything else on the system (daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop file installation, or D-Bus registration, it isn't easy: the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that runs (making desktop file installation hard).
I've also looked into this at work, and I'd have to agree there really isn't a "best way". If your application is distributed as source, then I'd go with the make/configure method packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.

Haxe in the field

I had a fresh look at Haxe again recently and realized that I had overlooked some of its elegance before. But I guess it still lacks visibility among developers.
So my question is, does anybody here use it for production? If so, how do you use it? What are the gotchas or difficulties you encounter? Do you recommend it for future projects?
I use Haxe to develop all my Flash applications, and I love it. I develop on Linux and with Emacs, and I really like how I can make Haxe fit within my preferred development environment. I just use simple Makefiles that look something like:
project.swf: Project.hx
	haxe project.hxml
It's really easy to get started in Haxe, and it's very elegant. I've had no problems at all using Haxe compared to using the Adobe Flash builders, and I have developed a bunch of big projects with it, including PanningPedagogy, The Orchive, Cantillion and Audioscapes.
I've released the source code to all of these under the GPL on SourceForge; check them out at:
https://sourceforge.net/projects/panning/
https://sourceforge.net/projects/orcaannotator/
https://sourceforge.net/projects/cantillion/
https://sourceforge.net/projects/margridflash/
You might find some useful information in the lists of Projects Using Haxe and People Using Haxe.
My company uses Haxe in production. For programming SWF content it is absolutely no problem on the technical side. Using it on the server side is a little bit harder: if you use Haxe for PHP you sometimes have problems with typing (this is more or less a PHP problem). The Neko VM is very stable and very, very fast, but it takes some time to get it running alongside all your other server software (MySQL, Apache, mod_rewrite); once you've got it, though, it is very stable.
We used it for generating SWF applications and tried the possibilities of Haxe JS. We also created a socket server for a multiplayer game and started to generate all our web pages with Haxe PHP or Neko.
The community is very helpful; the documentation is sometimes a little too short.
This is only my opinion and the experience I have had.
For those of us who don't know what Haxe is: it's a programming language for developing web apps, with multiple compiler targets (Flash, PHP, JavaScript, and the Neko VM).
Welcome to haxe [haxe.org]
Haxe entry on Wikipedia
Haxe is currently gaining popularity as a cross-platform development tool (mainly for game development) thanks to NME/OpenFL: http://www.openfl.org/
Write once in Haxe and deploy to Flash, Android, iOS, and more.
HaxeJS is very good for web production; it lets you use all the underlying JS modules while adding extras like a pre-processor, typed fields, conditional compilation, classes, Haxe libraries, refactoring and auto-completion from an IDE, etc. Plus it's very quick to compile and outputs ready-to-use JS files.
I haven't tried Microsoft's TypeScript, but so far I've been using HaxeJS for both client and server (Node.js) on a few production projects, and it feels like a great choice. The only issue is that if I want to share JS libraries or npm modules with others, I'll probably need to rewrite the JS by hand.
We used it at a previous internship for an internal web system. We only compiled to JS, and just once I compiled some minor code to both JS and C#. I can say it worked quite well, and many custom widgets were made at the time. Debugging the produced JS wasn't bad either, but it sometimes didn't produce the code you wanted (I remember one string comparison issue in JS, where the reference was being compared instead of the value). The code was deployed in production and worked fine for years. I'm pretty sure they still use it today.
That was in 2013; I haven't used it since. One problem I did have was trying to compile code written for version 2.08 with version 2.10. It needed some minor but non-obvious adjustments. I can't comment on more recent releases, but I'd be careful about breaking large pieces of code by upgrading to new compiler versions.
To compile, you run haxelib run flow run "target", where for the target you type, for example, web. That's all: you get your files in your bin folder. Remember to configure your project.flow file according to your target and project.
