How to install software 5 times in Linux? - multithreading

The software (let's name it SW) has complex dependencies. I do not know whether it is enough to just install SW 5 times in different directories.
My motivation is multi-threaded running: if only one copy of SW is installed, common parameters may be shared between different threads.
Could Docker help? Then the five copies of SW could be completely independent of each other.
But I want a way in which the 5 copies of SW really occupy disk space on the host; it should not just be 5 containers.

It depends. Many complex free software packages can be configured to be installed under a specific name or suffix.
For example, many GNU programs (and some non-GNU ones) have a configure script produced by autoconf. In that case, try configure --help first. You can probably use --prefix and/or --program-suffix.
So, for an autoconf-ed program with a configure script, you could build it five times with five different --program-suffix strings, as sketched below. See also GNU stow.
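For instance, a minimal sketch (assuming an autoconf-generated configure script and out-of-tree builds; the prefixes under $HOME are made up for illustration):
# hypothetical: install the same autoconf-ed package five times,
# each copy under its own prefix and with its own program suffix
for i in 1 2 3 4 5; do
  mkdir -p build-$i && cd build-$i
  ../configure --prefix=$HOME/sw-$i --program-suffix=-$i
  make && make install
  cd ..
done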
For free software programs not using such configure scripts, you need to read their documentation and source code. Most of the time their documentation explains how to configure them for such purposes. Failing that, you can still modify their source code for your needs.
For proprietary programs, you should dive into their documentation and discuss with the vendor (perhaps paying them to adapt their software to your needs).
BTW, your question is unrelated to multi-threading (e.g. to POSIX threads).

Related

Program that runs on Windows and Linux

Is it possible to write a program (make an executable) that runs on Windows and Linux without any interpreters?
Will it be able to take input and print output to the console?
A program that runs directly on hardware, pure machine code, as this should be possible in theory.
edit:
OK, file formats are different and system calls are different.
But how hard would it be, or is it even possible, for kernel developers to introduce another executable format called "raw", for fun and science? Maybe a raw program won't be able to report back, but it should be able to inflict heavy load on the CPU and raise its temperature as evidence of running, for example.
Is it possible to write a program (make an executable) that runs on Windows and Linux without any interpreters?
In practice, no!
Levine's book Linkers and Loaders explains why it is not possible in practice.
On recent Linux, an executable has the elf(5) format.
On Windows, it has some PE format.
The very first bytes of executables are different. And these two OSes have different system calls. The Linux ones are listed in syscalls(2).
And even on Linux, in practice, an executable is usually dynamically linked and depends on shared objects (and they differ from one distribution to the next, so an executable built for Debian/Testing will likely not run on Red Hat). You can use the objdump(1), readelf(1), and ldd(1) commands to inspect an executable, and strace(1) with gdb(1) to observe its runtime behavior.
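For example (a quick sketch; /bin/ls is just a convenient binary to inspect):
file /bin/ls        # reports "ELF 64-bit LSB ..." on Linux
readelf -h /bin/ls  # dumps the ELF header, starting with the magic bytes
ldd /bin/ls         # lists the shared objects the binary depends on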
Portability of software is often achieved by publishing it (in source form) with some open source license. The burden of recompilation is then on the shoulders of users.
In practice, real software (in particular those with a graphical user interface) depends on lots of OS specific and computer specific resources (e.g. fonts, screen size, colors) and user preferences.
A possible approach could be to have a small OS-specific software base which generates machine code at runtime, like e.g. SBCL or LuaJIT does. You could also consider using asmjit. Another approach is to generate opaque or obfuscated C or C++ code at runtime, compile it (with the system compiler), and load it at runtime as a plugin. On Linux, use dlopen(3) with dlsym(3).
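A minimal sketch of that plugin route on Linux (the /tmp file names are hypothetical; -fPIC and -shared are the standard flags for building a shared object):
# after the program has generated some C code into /tmp/gen.c:
gcc -O2 -fPIC -shared /tmp/gen.c -o /tmp/gen.so
# the running program then loads /tmp/gen.so with dlopen(3)
# and resolves the generated functions with dlsym(3)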
Pitrat's book Artificial Beings: The Conscience of a Conscious Machine describes a software system (an artificial mathematician) which generates all of its C source code (half a million lines). Contact me by email at basile#starynkevitch.net for more.
The Wine emulator allows you to run some (but not all) simple Windows executables on Linux. The WSL layer is rumored to enable you to run some Linux executables on Windows.
PS. Even open source projects like RefPerSys or GCC or Qt may be (and often are) difficult to build.
No, mainly because executable formats are different, but...
With some care, you can use mostly the same code to create different executables, one for Linux and another one for Windows. Depending on what you consider an interpreter, Java also runs on both Windows and Linux (in a Java Virtual Machine, though).
Also, it is possible to create scripts that can be interpreted both by PowerShell and by the Bash shell, such that running one of these scripts could launch the proper application compiled for the user's OS.
You might require the Windows user to run it under WSL, which is maybe an ugly workaround but allows you to have the same executable for both Windows and Linux users.

How is programming in RTEMS different from Linux?

I am new to programming in RTEMS and was wondering how the two, RTEMS and Linux, differ in terms of programming. I understand RTEMS is a real-time operating system, but if you were to make a hello world app, wouldn't the program be the same?
Note that your question is quite generic. There are a lot of differences in the details.
One of the biggest is the format of your binary: most RTEMS binaries are statically linked together. You only have one big binary containing your system and application. Some dynamic loading is supported, but it's not what most users do.
As already mentioned by n.m. in the comments, RTEMS supports a lot of the POSIX API (at least the embedded subset). So you can use much of the same API as on Linux.
A big difference is that RTEMS has a global address space (on most targets), so you don't have separation between tasks. That makes pointer errors a bit harder to debug.
Also a difference: most embedded systems are targeted at long-running applications. In such applications (regardless of whether you are on Linux, RTEMS, or any other system) you should be careful to clean up your stuff (close files, free memory, ...). On Linux (or other desktop-class systems) you have processes, and the kernel cleans up all resources after your process exits. Although you can create threads in RTEMS, no one cleans up after a thread exits.
The POSIX attribute defaults for threads are not specified in the standard and may vary between RTEMS and Linux.

What is the recommended environment for running multiple casperjs instances?

I am new to CasperJS and planning to use it to accurately simulate anywhere from a few dozen to low hundreds of concurrent sessions accessing a private server on a private network. Unlike typical HTTP load generators (Apache bench, httperf, ...), my purpose is to be able to control each session programmatically (increase delays between requests, build 'smarts' into each script) and have each session use a distinct source IP address.
My current thinking is to use OpenVZ containers (openvz.org) to create each 'virtual' client running CasperJS (the minimal functionality I need is following elements in the UI and taking screenshots). Would love to hear from anyone who has done something similar.
The crux of my question is: what would the 'slimmest' environment for running casperjs be? I'd like to strip down the OS as much as possible to be able to scale multiple clients. Specifically:
any recommended low-footprint UNIX/Linux distributions for CasperJS?
any specific recommendations on stripping down mainstream (CentOS, Debian, ...) distributions?
Thank you all in advance. I look forward to hearing your input on this specific question or similar experiences/tools for what I'm trying to achieve...
Fernando
CasperJS is headless, i.e. it doesn't need X running to function. Any bare-bones Linux distribution will do you well.
any recommended low-footprint UNIX/Linux distributions for CasperJS?
Arch is very lightweight and has an easy-to-follow Beginners' Guide. Arch's AUR has a package for CasperJS that's pretty straightforward to set up as well, as sketched below. Just make sure to grab the required base-devel package (pacman -S base-devel) before installing from the AUR, as it's needed for the Arch Build System.
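As a sketch, the usual AUR workflow looks roughly like this (assuming the git-based AUR and that the package is simply named casperjs; adjust to whatever the AUR actually lists):
pacman -S --needed base-devel git
git clone https://aur.archlinux.org/casperjs.git  # hypothetical package name
cd casperjs
makepkg -si  # build the package and install it with pacman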
any specific recommendations on stripping down mainstream (CentOS, Debian, ...) distributions?
Not so much stripping down, but CrunchBang is based on the latest Debian release. It may be worth taking a look at. It would be much less of a hassle to set up than Arch, and it uses the same APT package manager as Debian / Ubuntu. It installs with the lightweight Openbox window manager, but you can remove this, and X altogether, if you'd like.
With that said, even a lightweight Linux environment won't help much with the amount of memory each CasperJS instance will use. You could probably pull off a few dozen instances depending on the amount of memory available, but a few hundred may not be feasible. It all depends on how much memory each website uses. CasperJS comes with some configuration options that may help reduce memory (e.g. don't load images or plugins), but that may defeat the purpose of your tests.
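For instance, a sketch of such options (assuming these PhantomJS engine switches are passed through by the casperjs launcher):
# hypothetical invocation: disable images and plugins to save memory
casperjs --load-images=false --load-plugins=false script.js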
The best advice I can give is to try it out for yourself. Write a simple script that will open the pages you are going to use, and pass a callback to CasperJS's run() function to keep it alive (i.e. don't exit from Casper). It can be as simple as:
casper.start('http://example.com/site1', function () {});
casper.thenOpen('http://example.com/site2', function () {});
casper.run(function () {
    // wait 60 seconds before exiting . . . or remove to never exit
    setTimeout(function () { casper.exit(); }, 60000);
});
Spin up multiple instances and watch your total memory usage. You can use CLI tools such as top, or use this alias, which totals the memory usage for the current user:
alias memu="ps -u $(whoami) -o pid,rss,command | awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"}'"
From this you should be able to see roughly how much memory each instance takes, and how many you can run at once on one machine.

Stripping down a kernel in linux?

I recently read a post (admittedly a few years old) giving advice for a fast number-crunching program:
"Use something like Gentoo Linux with 64 bit processors as you can compile it natively as you install. This will allow you to get the maximum punch out of the machine as you can strip the kernel right down to only what you need."
Can anyone elaborate on what they mean by stripping down the kernel? Also, as this post was about 6 years old, which current version of Linux would be best for this (to aid my Google searches)?
There is some truth in the statement, as well as something somewhat nonsensical.
You do not spend resources on processes you are not running. So as a first step I would try to minimise the number of processes running. For that we quite enjoy the Ubuntu Server ISO images at work -- if you install from those, log in and run ps or pstree, you see a thing of beauty: six or seven processes. Nothing more. That is good.
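For instance, a rough footprint check (any box will do):
ps -e --no-headers | wc -l  # rough count of running processes
pstree -a                   # the full process tree, with arguments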
That the kernel is big (in terms of source size or installation) does not matter per se. Much of that size stems from drivers you may not be using anyway. And the same rule applies again: what you do not run does not compete for resources.
So think about a headless server, stripped down -- rather than your average desktop installation with more than a screenful of processes trying to make the life of a desktop user easier.
You can create a custom linux kernel for any distribution.
Start by going to kernel.org and downloading the latest source. Then choose your configuration interface (these days you have the choice of plain console text 'config', ncurses-style 'menuconfig', KDE-style 'xconfig' and GNOME-style 'gconfig') and run make whateverconfig. After choosing all the options, type make to build your kernel, then make modules to compile all the selected modules. make modules_install copies the modules into place, and make install copies the kernel files to your /boot directory; see the sketch below. Next, go to /boot and use mkinitrd to create the RAM disk needed to boot properly, if needed. Then add the kernel to your GRUB menu.lst by copying the latest entry and adding a similar one pointing to the new kernel version.
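A sketch of that sequence inside the unpacked kernel source tree (exact targets and the initrd step vary by distribution):
make menuconfig       # select only the drivers and options you need
make                  # build the kernel image (add -jN to parallelise)
make modules          # build the selected modules
make modules_install  # copy modules under /lib/modules/<version>
make install          # copy the kernel image and System.map to /boot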
Of course, that's a basic overview and you should probably search for 'linux kernel compile' to find more detailed info. Selecting the necessary kernel modules and options takes a bit of experience - if you choose the wrong options, the kernel might not be bootable and you'll have to start over, which is a pain because selecting the options and compiling the kernel can take 15-30 minutes.
Ultimately, it isn't going to make a large difference to compile a stripped-down custom kernel unless your given task is very, very performance sensitive. It makes sense to remove things you're never going to use from the kernel, though, like say ISDN support.
I'd have to say this question is more suited to SuperUser.com, by the way, as it's not quite about programming.

Building a custom Linux Live CD

Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch?
I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.
The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop - which means they take forever to boot.
Can anyone fix me up with a current How-To?
Update:
So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator".
My reason for choosing this tool was that it is available for Red Hat-based distributions like CentOS, Fedora and RHEL - all distributions that my company already supports. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.
The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
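For reference, a sketch of a livecd-creator invocation (the kickstart file name is hypothetical; the tool is driven by a kickstart configuration):
# build a minimal live ISO described by a kickstart file
livecd-creator --config=firmware-updater.ks --fslabel=fw-updater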
There are a couple of interesting projects you could look into.
But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick, an IEEE 1394 hard-disk, or maybe even an eSATA hard-disk?
Okay, there are several Live-CDs that are designed to be very small, in order to e.g. fit on a business card sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones, however it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).
Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.
Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD.
The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.
One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.
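A sketch of that step (rootfs/ and the output name are made up; the two flags disable compression of data blocks and of fragments respectively):
mksquashfs rootfs/ filesystem.squashfs -noDataCompression -noFragmentCompression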
This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD.
If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.
Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys
Depends on your distro. Here's a good article you can check out from LWN.net
There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.
Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.
It's very easy to use.
sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build
