Calling an executable like a script - Linux

I'm working on an HTTP/1.1 server in C as a learning experience and want to make it performant while still keeping it dynamic. Handling a GET or POST on static files or scripts was easy enough, but I'd like to add the ability to call compiled binaries for greater speed.
Currently, I link these compiled binaries directly into the server binary, but I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries, but I don't want to relink them for every request. I also considered creating a new process to run them, but that incurs significant overhead on every request and makes getting the response back to the client difficult (I'm using OpenSSL sockets).
How could I efficiently relink these compiled binaries when they update, without shutting down the server?
I'm testing on Debian Sid and running on an AWS ECS instance with CentOS 7. Both run Linux kernel 4.19+.

"I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries"
You appear to believe that you can update a shared library (on disk) that a running server binary is currently using, and expect that running server process to start using the updated library.
That is not how shared libraries work. If you try that, your server process will either crash, or continue using the old library (depending on exactly how you update the library on disk).
This can be made to work in limited circumstances if you use dlopen to load the library, and if you can quiesce your server, have it dlclose the previously loaded version, and then dlopen the updated version. But the exact details of making this work are quite tricky.
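A minimal sketch of that quiesce-and-reload pattern, assuming the handler is built as a shared object (e.g. gcc -shared -fPIC -o handler.so handler.c) that exports a handle_request function; handler.so, handler_t, and handle_request are illustrative names, not from the question:

    /* Link the server with -ldl (on older glibc). */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*handler_t)(int client_fd);

    static void     *plugin  = NULL;
    static handler_t handler = NULL;

    /* Must only run while the server is quiesced: no request may still
     * be executing code from the previously loaded library. */
    int reload_plugin(const char *path)
    {
        if (plugin)
            dlclose(plugin);                  /* unload the old version */

        plugin = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!plugin) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return -1;
        }

        handler = (handler_t)dlsym(plugin, "handle_request");
        if (!handler) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(plugin);
            plugin = NULL;
            return -1;
        }
        return 0;
    }

When updating the file on disk, rename a newly built file into place (giving a fresh inode) rather than overwriting the old one; a process still mapping the old library then keeps a consistent view, which is exactly the distinction behind the crash-versus-stale-library behavior described above.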

Related

Linux tool to create static binaries from dynamic apps

A while back I remember using a tool (similar to upx) that would bundle a dynamically linked binary, all of its various .so dependencies, and a simple pre-launcher that intercepts dlopen-type calls into a single binary executable. It's great for transferring an application to a remote system where you can't guarantee dependencies, and is frequently done with scheduling/queueing systems in HPC environments.
Can someone help me find this program again?
CDE (Code, Data, and Environment) is a tool that automatically creates portable Linux applications.
Git repo: https://github.com/pgbovine/CDE

Sandboxing code on a Linux machine

I'm in the process of writing an application in C++ on Linux. The goal is to have it load dynamically linked libraries at run-time and provide all the services that the libraries require. The main aim is to have it act as a black box, where code loaded at run-time cannot break out and damage the rest of the system.
I've never done anything like this before and am a little lost as to the best approach. If I load all the dynamically linked libraries under a special process and then use something like SELinux to limit that central daemon's ability to do anything outside of its requirements, would that be a reasonable solution?
The reason I ask is that I want to allow people to load code into this container application, which then handles all the server-side concerns for them, so that security, permissions, networking, logging, etc. are all provided through a simple, clean, cross-platform API regardless of the version of UNIX the container is running on.

How to blacklist a shared library to avoid it being fetched by the loader?

I'm trying to force an internal build pre-processor (used for built sources) not to rely on shared libraries installed on my host machine, without having to uninstall them.
There is an LD_PRELOAD environment variable that forces the loader (ld-linux) to fetch the specified shared libraries before anything else, but I'd like to do the opposite: force the loader not to fetch the specified libraries during the build (a kind of LD_NEVERLOAD variable).
Is there some way to do so without breaking my entire system (i.e., without removing those libraries)?
PS: I've renamed my system libraries to test this specific use case, but that is definitely not an elegant way of doing it.
Reading the manual pages ld(1) and ld.so(8), you might try playing with LD_LIBRARY_PATH, LD_RUN_PATH, and the options in both manuals related to "rpath".

Target domain of node.js

I am just wondering how node.js compares to other frameworks. Is it possible to develop rich internet applications using node.js? How does it compare to Java NIO?
In short, I am looking for the target domain of Node.js.
I'm not sure why people are voting to close this question; I think it's perfectly valid. Node.js is a new server-side framework that is still undergoing heavy development.
Answering your question may be a little difficult for me, as I know nothing of Java, but I know a little about Node and use it on a regular basis while it's under development.
Node.js is basically a framework built from several components designed for speed, such as Google's JavaScript engine (V8), which was originally built for Google Chrome and later released as an open source project.
Many developers have taken V8 and placed it on the server, combining it with custom libraries integrated into V8 to allow file I/O and network access.
So what is Node.js?
Node.js is basically Google's V8 JavaScript engine as the language platform, mixed with an event library (libev/libevent), a technique for using one thread to perform multiple tasks by reacting to events delivered by the kernel.
The primary use of Node is its networking functionality; Ryan has contributed a really powerful HTTP library that has helped it take off with web services, which is its main intended use.
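To make the one-thread, kernel-events idea concrete, here is a minimal sketch in C using epoll, the Linux facility that event libraries of this kind wrap. This illustrates the mechanism only and is not Node's actual code; it watches stdin and echoes input, where a real server would register sockets instead:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main(void)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0) {
            perror("epoll_create1");
            return 1;
        }

        /* Watch stdin for readability; a server would register its
         * listening socket and client sockets the same way. */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
            perror("epoll_ctl");
            return 1;
        }

        struct epoll_event ready[64];
        for (;;) {
            /* The single thread sleeps here until the kernel reports events. */
            int n = epoll_wait(epfd, ready, 64, -1);
            for (int i = 0; i < n; i++) {
                char buf[512];
                ssize_t len = read(ready[i].data.fd, buf, sizeof buf);
                if (len > 0)
                    write(STDOUT_FILENO, buf, len); /* handle the event: echo */
            }
        }
    }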
Why do I use Node.js?
I like Node.js simply because it's easy, fast, and very modular. Being able to send files, images, and text to a web browser directly from the server's memory (RAM) in under 10 lines of code helps you understand the power behind it.
For instance, nearly every web browser requests favicon.ico, which is usually ~10 KB. If I had 100 requests per second and every one of them asked for my favicon, my hard drive would have to locate that file each time, blocking all other reads in the meantime.
Instead, I can load the data once, store it in a variable, and send it to every client much faster than the traditional method.
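A sketch of that cache-once, serve-from-RAM idea, written in C to match the server question at the top of this page (all names are hypothetical and error handling is minimal):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static char  *favicon_data = NULL;
    static size_t favicon_len  = 0;

    /* Read the icon from disk exactly once, at server startup. */
    int cache_favicon(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        fseek(f, 0, SEEK_END);
        favicon_len = (size_t)ftell(f);
        rewind(f);
        favicon_data = malloc(favicon_len);
        if (!favicon_data ||
            fread(favicon_data, 1, favicon_len, f) != favicon_len) {
            fclose(f);
            return -1;
        }
        fclose(f);
        return 0;
    }

    /* Every request is answered from the in-memory copy: no disk I/O. */
    void serve_favicon(int client_fd)
    {
        char header[128];
        int n = snprintf(header, sizeof header,
                         "HTTP/1.1 200 OK\r\n"
                         "Content-Type: image/x-icon\r\n"
                         "Content-Length: %zu\r\n\r\n",
                         favicon_len);
        write(client_fd, header, (size_t)n);
        write(client_fd, favicon_data, favicon_len);
    }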
What's the best part about Node.js?
For me, the best part about Node.js is the concept: being able to serve thousands of clients concurrently without blocking any other client is what drives the speed. Everything is speed-motivated; Google's V8 is called V8 for a reason, and the event library removes the need for lots of threads, which can be heavy on resources.
Getting Started
It seems like you have not really had a play with Node.js yet. If not, I suggest you install it and experiment for a few days; join their IRC chat and speak to some of the people over there, as there is usually a member of the core team around who will help you.
You can install Node.js on Ubuntu like so (in Bash):
If you do not have Git:
sudo apt-get install git-core
Install Node.js:
cd /etc/
sudo git clone git://github.com/joyent/node.git
cd node
sudo ./configure
sudo make
sudo make install
To check that it is installed:
node --version
If you get a version number, you're ready to go. Go to your home directory:
cd ~/
mkdir Nodes
cd Nodes/
Create a simple file called test.js in your ~/Nodes directory and start away; you can run the code like so:
cd ~/Nodes
node test.js
I wrote that small setup guide not just for you, but for others who may read this and would like to get things set up.

SQLite use in a Tcl script over NFS (or: how to make a standalone sqlite3 that can be run over NFS)

I want to embed an SQLite database in an existing Tcl application (migrated from flat files).
Currently, our Tcl interpreter is run from a network location: <nfs share>/bin/tclsh8.3
I already have an NFS $PATH for executables set up for all users, so I am assuming I can place a standalone sqlite3 executable there, though I have not found an easy way to compile a locally library-independent SQLite yet... (all Linux clients, running anything from Red Hat 9 to Ubuntu 10.04)
Can anyone point me in the right direction for building a standalone sqlite3 binary I can use in my NFS Tcl install?
First off, Tcl 8.3.5 is ancient and relatively slow. Upgrading to 8.4 or 8.5 (if you can) will bring performance improvements (exactly which is faster depends on what your script is doing).
Secondly, SQLite support is typically provided through a loadable package (conventionally called sqlite3). You'll want a build that works with the oldest version of Tcl you're supporting (simply compiling in stubbed mode, which I think is the default anyway, against the oldest Tcl version you want to support should be fine), and then put it, as a full install of the package, in a directory on your code's auto_path. One of the nicest ways to do this is to get hold of a tclkit single-file executable for Linux and embed the sqlite3 package together with your script in it; that way everything is in one place, so users can't disrupt things easily by setting environment variables or other such nonsense. There is more help on doing this at https://stackoverflow.com/questions/1379577/…
Thirdly, mixing databases and NFS (or any other networked filesystem) is not recommended due to the difficulties with distributed locking and ensuring a proper flush to permanent storage during transaction COMMIT. Ensuring data integrity under those conditions is just plain difficult!
