Parallel solving in Minizinc from the command line - multithreading

The Minizinc IDE has a parallel solver option ("Number of threads") in the config section. When compiling from the commandline, however, the mzn2fzn binary doesn't seem to support a parallel option. Is it possible to solve in parallel from a commandline-compiled file?

You can either use MiniZinc via the integrated development environment (IDE) or via a command-line call. I am using IDE 2.0.8.
In the IDE, use the configuration tab to specify the number of threads to be used for searching/solving. Depending on the selected backend, you may get an error message saying that multi-threading is not supported for that backend.
Via the command line, you can either call the compiler and the backend separately, or you can use minizinc.exe as an umbrella tool that calls them sequentially. All of the tools have a command-line option --help that explains their parameters. minizinc.exe accepts -p or --parallel to run the backend in multi-threading mode, provided this is supported.
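As a rough sketch, the two variants look like this (the thread count, model/data file names and the Gecode backend are just placeholders; the FlatZinc solver binary name differs per backend and version):

# one-step compile-and-solve, asking the backend for 4 worker threads
minizinc -p 4 model.mzn data.dzn
# or compile first and pass the thread option to the FlatZinc solver directly
mzn2fzn model.mzn data.dzn
fzn-gecode -p 4 model.fzn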

Related

Is there a way to automatically test interactive command line (console) Linux app?

I am one of the developers of a console two-pane file manager for Linux (a port of Far Manager called far2l); the application's interface resembles Midnight Commander. I am faced with the need to implement automated testing. Can you please tell me which application or framework can be used for this?
I need the ability to write scripts containing a sequence of keystrokes that will be transmitted to the console application (the ability to specify delays between emulated keystrokes is also needed), as well as the ability to automatically analyze the application interface drawn in the console, for example, for the presence of certain strings. I also need some kind of framework to run a number of such tests automatically and generate test reports.
Most console application testing tools I could find (like "cram", "cli-unit", "aruba", or "exactly") unfortunately don't seem to be designed specifically for testing interactive applications.
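To make the requirement concrete, here is a rough sketch of the kind of scripted interaction being described, using tmux as the driver. This is only an illustration, not a full framework; the session name, keystrokes and expected string are made up:

#!/bin/sh
# start the application in a detached tmux session
tmux new-session -d -s far2l_test far2l
sleep 1
# emulate keystrokes, with delays between them
tmux send-keys -t far2l_test Down Down Enter
sleep 1
# dump the rendered screen and check it for an expected string
tmux capture-pane -t far2l_test -p | grep -q "expected text" && echo PASS || echo FAIL
tmux kill-session -t far2l_test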

Linux SystemD Services - Simple vs Forked - Downsides?

A lot of programs you download can be run in a blocking manner or in the background (usually via start/stop/etc. commands). Some good examples are HAProxy and Spring Boot apps built to be Linux services: both can be run in either manner.
In systemd unit files you can use the "forking" type to map to start/stop/etc. commands for managing a program that runs in the background as a daemon. Alternatively, you can just use the "simple" type and call the app itself in a blocking manner.
Is there any particular reason to prefer "forking" where it is an option? Having used both options on numerous things, it seems "simple" is lighter on config and more obvious in terms of usage.
This is answered in https://www.freedesktop.org/software/systemd/man/daemon.html in the section "SysV Daemons": there are mostly only downsides to choosing the "forking" method, because most software out there does not perform the "15 steps" correctly, or at all; in particular, steps 12 and 14 are seldom implemented correctly.
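For comparison, a minimal "simple" unit looks roughly like this (binary path and option are placeholders); systemd keeps the foreground process under its own supervision, so none of the daemonization steps are needed:

[Unit]
Description=Example service run in the foreground

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp --no-daemon

[Install]
WantedBy=multi-user.target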

Using puppet to build from source

How can I use Puppet to build from source without using multiple Exec commands? Do we have modules for it on the Forge that I could use?
It's possible to use Puppet to build applications from source without using execs, possibly with a custom-written type and provider. Otherwise, yes, it would have to be a few different exec resources with onlyif, creates, etc. parameters to stop them running every time the agent runs.
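As a minimal sketch of the exec-with-guards approach (the resource name, paths and guard file below are hypothetical):

# compile and install only if the resulting binary does not exist yet
exec { 'build-myapp':
  command => 'make && make install',
  cwd     => '/opt/src/myapp',
  path    => ['/usr/bin', '/bin'],
  creates => '/usr/local/bin/myapp',
}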
Puppet's model of configuration management is known as a desired state model: you define the end state of the system and let Puppet work out how to get there. This is why execs are generally avoided in Puppet: they don't fit a desired state model. It also makes things like updating the application, or dealing with unknowns such as a partial failure of the compilation that leaves a required file behind, much harder.
In my opinion, I would not recommend using configuration management to build applications from source at all. There are a few issues inherent in doing so (this applies not just to Puppet, but to most configuration management languages):
Slower runs, as the compilation can take longer and detecting that it has completed is normally a slightly trickier task
Issues with half-complete state or failure: if the compilation breaks halfway through, it is harder both to detect and to resolve
Making the compilation idempotent: you have to wrap the command in logic that detects whether the installation has already been done. However, this is difficult, as things like the detection of a flag file or a particular binary could succeed even when the compilation ended in failure
Upgrading or changing: there's no easy way to upgrade or change the application. A package would make this much easier.
This sounds like something that would be better served by packaging, using tools such as FPM or just native package building tools such as rpmbuild.
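For instance, FPM can turn an already staged build directory into a package in one call (the name, version and paths below are only illustrative):

# package everything under ./staging as an RPM installed to /opt/myapp
fpm -s dir -t rpm -n myapp -v 1.0.0 --prefix /opt/myapp -C ./staging .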

How to be able to "move" all necessary libraries that a script requires when moving to a new machine

We work on scientific computing and regularly submit calculations to different computing clusters. For that we connect using a Linux shell and submit jobs through SGE, Slurm, etc. (it depends on the cluster). Our codes are composed of Python and Bash scripts and several binaries. Some of them depend on external libraries such as matplotlib. When we start to use a new cluster, it is a nightmare, since we need to tell the admins all the libraries we need, and sometimes they cannot install all of them, or they only have old versions that cannot be upgraded. So we wonder what we could do here. I was wondering if we could somehow "pack" all the libraries we need along with our codes. Do you think it is possible? Otherwise, how could we move to new clusters without needing the admins to install anything?
The key is to compile all the code you need by yourself, using the compiler/library/MPI toolchains installed by the admins of the clusters, so that
your software is compiled properly for the cluster hardware, and
you do not depend on the admin to install the software.
The following are very useful in this case:
Ansible, to upload/manage configuration files, rc files, set permissions, compile your binaries, etc. and deploy a new environment easily on new clusters
EasyBuild to install your version of Python with all the needed dependencies, and to install other scientific software thanks to the community-supported build procedures
CDE to build a package with all dependencies for your binaries on your laptop and use it as-is on the clusters.
More specifically for Python, you can use
virtual envs to set up a consistent set of Python modules across all clusters, independently of the modules already installed; or
Anaconda or Canopy to use a Python scientific distribution
to have a consistent Python install across all clusters.
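A minimal sketch of the virtualenv route, assuming Python 3 is available on the cluster (the environment name and module list are placeholders):

# create a self-contained environment in your home directory
python3 -m venv $HOME/envs/myproject
. $HOME/envs/myproject/bin/activate
pip install matplotlib numpy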
Don't get me wrong, but I think what you have to do is: stop behaving like amateurs.
Meaning: the integrity of your "system configuration" is one of the core assets of your "business". And you just told us that you are basically unable to easily reproduce your system configuration.
So, the real answer here can't be a recommendation to use this or that technology. The real answer is: you, and the other teams involved in running your operations, need to come together and define a serious strategy for how to fix this.
Maybe you then decide that the way to go is for your development team to provide Docker build files, so that your operations team can easily create images on new machines. Or you decide that you need to use something like Ansible to enable centralized control over your complete environment.
That's what venv is for: it allows you to create a portable, customized environment easily, with exactly what you need and nothing more.
I completely agree with https://stackoverflow.com/users/1531124/ghostcat
but here is the really bad answer that will cause you a lot of problems in the near future:
If you need some dynamic libraries and you are not planning to upgrade them in the future, you can try copying all the needed libs to a folder in your app and using a script to launch the app:
#!/bin/sh
# make the bundled libraries visible to the dynamic loader, then start the app
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/your/lib/folder
./myAPP
but keep in mind that this is bad practice.
Create a chroot image. Install everything you need into it, and then you can just chroot into it on any machine.
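A rough sketch of building such an image on a Debian-like system (the suite, mirror and directory are placeholders; note that debootstrap and chroot normally require root, so this is more practical on a machine you control than on the cluster itself):

# build a minimal root filesystem, then enter it to install your software
sudo debootstrap stable ./myroot http://deb.debian.org/debian
sudo chroot ./myroot /bin/bash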
I work on scientific clusters as well, and you are going to find that wherever you go.
I would only rely on the admins for installing the most basic stuff. That is:
Software necessary to build your software or run the most basic stuff: compilers and the most basic utilities (python, perl, binutils, autotools, cmake, etc.).
Software libraries that make use of I/O devices: MPI, file I/O libraries...
A queue system (they already have it most of the time).
Environment modules. This is not a must, but it really helps you get the job done, especially if you mess with different library versions or implementations (that's my case, for example).
From that point on, you can build and install in your own directories all the software you use most of the time.
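The typical pattern for installing into your own directories looks something like this (the package name, version and paths are illustrative):

# build and install a library under $HOME instead of /usr
./configure --prefix=$HOME/software/mylib-1.2
make -j4
make install
# then point your environment (or a module file) at it
export PATH=$HOME/software/mylib-1.2/bin:$PATH
export LD_LIBRARY_PATH=$HOME/software/mylib-1.2/lib:$LD_LIBRARY_PATH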
This does not mean that you cannot ask an admin to install some libraries. If you feel that many people are going to benefit from that, then you should request its installation. In addition, you may need some specific version or some special features which are not used most of the time, but which you really need. A very good example is with BLAS libraries (basic linear algebra subprograms):
You have lots of BLAS implementations available: the original BLAS, Intel MKL, OpenBLAS, ATLAS, cuBLAS
If that is not enough, the open source versions usually offer multiple configuration options: serial version, parallel version with PThreads, parallel version with OpenMP, parallel version with MPI...
In my particular case, most of the software that I felt was necessary for many users in the cluster ended up being installed by the admins without any problem (either I or other users requested it), but you also have to keep in mind that in a cluster there can be many users, and a single person/team is not able to attend to every specific requirement you have, especially if you are able to handle it yourself.
I think you want to containerize your application in some way. Two main options (because docker/rkt and similar things are way too heavyweight for your task if I understand it correctly) in my opinion are runc and snappy.
Runc relies on the OCI runtime specification. You need to create an environment (very similar to a chroot environment, in that you need to copy everything your software uses into one directory) and then you'll be able to run your application with the runc tool. Runc itself is just one binary; at the moment it requires root privileges to run (hello, cluster admins), but there are patches at least partly solving that, so if you build your own runc and there are no blocking issues with the root privilege requirements, you may be able to run your application with no administration overhead at all.
Snappy is similar in that you need to prepare a snap package for your application, this time using snapcraft as an assistant tool. Snappy is probably a bit easier in creating an application image and IMO is certainly better for long-term support because it clearly separates your application from the data (kinda W^X, application image is a read-only squashfs file and application can only write to a limited set of directories). But at the moment it will require your cluster admins to install snapd and to perform some operations like snap installation that require root privileges. Still, it should be better than your current situation, because that's just one non-intrusive package to install.
If these tools don't fit for some reason, there is always the option to make something of your own. That won't be easy, and there are many subtle details that can bite you when doing that, but it can be done: compile all of your dependencies and applications into some path, create wrapper scripts that set up the PATH and LD_LIBRARY_PATH environment for your components, then bring that directory to the new cluster and run the wrapper scripts instead of the target binaries, and that's it. It's similar to what XAMPP does: they have quite a number of integrated things packaged into one directory that works across many distributions.
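A minimal wrapper of that kind might look like this (the directory layout and binary name are assumptions):

#!/bin/sh
# resolve the bundle directory relative to this script,
# point PATH and the dynamic loader at the bundled dependencies,
# then start the real binary
BASE=$(cd "$(dirname "$0")" && pwd)
export PATH="$BASE/bin:$PATH"
export LD_LIBRARY_PATH="$BASE/lib:$LD_LIBRARY_PATH"
exec "$BASE/bin/myapp" "$@"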
update
Let's also add AppImage into the mix. Theoretically it can be a savior for your case, as it specifically does not require root privileges. It's kind of in between Snappy and rolling your own, as you need to prepare your application directory yourself (Snappy can manage some of the dependencies with snapcraft when you just specify "I need this Ubuntu package"), add appropriate metadata, and then it can be packaged into a single executable.

Testing automation tool suited for operation team

I would like to start using a testing framework that does the following:
contains a process (the process can be a test) management engine; it is able to start processes (tests) with the help of a scheduler
it is distributed, processes can run locally or on other machines
tests can be anything:
simple telnet on a given port (infrastructure testing)
a disk I/O or MySQL benchmark
a jar exported from Selenium that does acceptance testing
will need to know if the test passed or not
has the capability to get real time data from the test(something like graphite) -- this is optional
allows processes to be built in many programming languages: Perl, Ruby, C, Bash
has a graphical interface
open-source
written in any language as long as it is light on resources; I would prefer C, Perl or Ruby
to run on linux
What not to be:
an add-on to a browser: Selenium, BITE, etc.
I do not want something focused on web development
I would like to use such a tool, or maybe collaborate on building one. I hope I was explicit enough. Thank you.
You might want to look at Robot Framework combined with Jenkins. Robot Framework is a tool written in Python for doing keyword-based acceptance testing. Jenkins is a continuous integration tool which allows you to schedule jobs and distribute them amongst a grid of nodes.
Robot Framework tests can do anything Python can do, plus a whole lot more. It has a remote interface, which means you can write test keywords in just about any language. For example, I had a job where we had keywords written in Java, and in another job we used Robot Framework with .NET-based keywords. This can be accomplished via the remote interface (so you can write keywords in many different languages), or you can run Robot Framework under Jython to run on the JVM, or under IronPython to run in a .NET environment.
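In practice, the Jenkins job just runs the Robot Framework command line on whichever node it is scheduled to, along these lines (directory names are placeholders):

# run all suites under tests/ and write log/report files into results/
robot --outputdir results tests/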
