Distributing Contents for Linux like in Android & iOS?

I'm currently designing a Linux-based system. Users of the system will be allowed to download content, i.e. programs, from the Internet. The content will be distributed as zip packages with a special extension name, e.g. .cpk instead of .zip, and with zero compression.
I want to give users the same experience found in iOS and Android, where content is distributed in self-contained packages and runs from there.
My question: can I make my Linux system run programs from inside the packages without unzipping them? If not, is there another approach to what I'm after on Linux?
Please note that I don't want to extract the contents into a temp folder and delete them after execution, because that could take a long time, especially for large packages. It would also double the storage space required to run them.
Thank you in advance.

klik (at least in the klik2 CMG format) used a zISO image rather than a zip, which can be mounted by the kernel or by a FUSE client. You could use other filesystem types that are supported by the kernel or via FUSE. Maybe fuse-zip is worth a shot?
You could also modify the loader to read directly out of the bundle. For example, Android's Dalvik VM can load dex files directly from apk bundles, which are effectively zip files. (Native code on Android, however, still needs to be unpacked first, and does take more time and space. Modifying the native loader is… tricky.)
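Since the asker controls the runtime, a library such as libzip can also read a member's bytes straight out of the archive with no extraction step. A minimal sketch, where app.cpk and main.bin are hypothetical names:

    /* Read one member of a .cpk (zip) package into memory without
     * extracting anything to disk.  Build with: gcc readpkg.c -lzip
     * "app.cpk" and "main.bin" are hypothetical names. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <zip.h>

    int main(void)
    {
        int err = 0;
        zip_t *pkg = zip_open("app.cpk", ZIP_RDONLY, &err);
        if (!pkg) { fprintf(stderr, "zip_open failed: %d\n", err); return 1; }

        zip_stat_t st;
        if (zip_stat(pkg, "main.bin", 0, &st) != 0) {
            fprintf(stderr, "member not found\n"); return 1;
        }

        char *buf = malloc(st.size);
        zip_file_t *f = zip_fopen(pkg, "main.bin", 0);
        if (!f) { fprintf(stderr, "zip_fopen failed\n"); return 1; }
        zip_int64_t n = zip_fread(f, buf, st.size);
        printf("read %lld bytes of main.bin without unzipping\n", (long long)n);

        zip_fclose(f);
        zip_close(pkg);
        free(buf);
        return 0;
    }

Since the question specifies zero compression, each member's bytes are stored contiguously inside the archive, so a loader could even mmap(2) them in place once it knows the offset.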

Related

How to share files between two (desktop) applications in a secure way

The problem is that I need to share files between two programs, but I don't want those files to be accessible by the user of the computer or by programs other than these two. So the flow of the files is like this: Program A (which I will code myself) receives a file from the Internet and puts it somewhere on the computer. Then Program A calls Program B (which I didn't code and can't change). Program B reads the downloaded file, does some things with it, and produces another file which it also puts somewhere on the computer. Then Program A reads that file and uploads it to the Internet.
What I have found
I thought that Windows Sandbox looked interesting, but the problem is that it's only available on Windows 10 Pro and Windows 11, and that it is virtualised; performance is quite important for Program B, so any virtualised solution is not very usable unless it is close to native performance.
For Linux, I found FreeBSD jails. But these seem more focused on preventing the applications inside the jail from accessing files outside the jail than on preventing programs outside the jail from reading and writing files inside it. So actually I need the opposite...
Another interesting concept was to keep the files stored in RAM, like with mmap in Linux, but since I can't change Program B, I don't know how to implement that. Is there some kind of container application that encapsulates the I/O of Program B and redirects it to a file in RAM?
Does anyone have some suggestions? Thanks!
You can't really prevent the user/owner of the computer from reading the file if you are storing it on their disk. You can try to make it more difficult to access the content (which is what DRM does), but ultimately the user can always bypass your controls given sufficient motivation and resources. Even if you store the files purely in RAM, a user with administrative permissions can dump your program's memory and extract the files from there.
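That said, for the RAM-only idea from the question: on Linux you can create an anonymous in-memory file with memfd_create(2) and hand its /proc path to an unmodified child program, so the data never touches the disk. It only raises the bar; as noted above, a privileged user can still read /proc/<pid>/fd or dump memory. A minimal sketch, using /usr/bin/wc as a stand-in for Program B:

    /* Keep an exchanged file purely in RAM and hand it to an unmodified
     * child through its /proc/self/fd path.  /usr/bin/wc stands in for
     * "Program B"; the payload string stands in for the downloaded file. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = memfd_create("exchange", 0);   /* anonymous, RAM-backed file */
        if (fd < 0) { perror("memfd_create"); return 1; }

        const char msg[] = "downloaded payload\n";
        if (write(fd, msg, sizeof msg - 1) < 0) { perror("write"); return 1; }

        char path[64];
        snprintf(path, sizeof path, "/proc/self/fd/%d", fd);

        if (fork() == 0) {                      /* child inherits the fd ... */
            execl("/usr/bin/wc", "wc", "-c", path, (char *)NULL);
            perror("execl");
            _exit(127);
        }
        wait(NULL);                             /* ... and reads it by path */
        return 0;
    }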

Check file history in Linux/Ubuntu/CentOS

Can I check the file history, like with Git or SVN, on a Linux OS? I mean the modifications by date in Linux/Ubuntu/CentOS. Is there any software that helps me do this?
Git and Subversion are software packages whose purpose in life is to keep track of content changes in the files of a project. Operating systems usually do not care about file history; they don't provide such a feature.
Windows and macOS include backup tools that, if enabled, run automatically in the background from time to time and can be used to access some (not all) past versions of files. This functionality comes at the cost of the disk space used to store those past versions.
Linux doesn't provide such a tool out of the box (but you can install one if you need it).
I guess you are out of luck. You cannot recover a previous version of the file, but you can install backup software to avoid reaching this situation in the future.
By default you can't. The filesystem simply stores the current state of the file, not its history. (As 1615903 pointed out in the comments, there are some versioned filesystems that keep track of this kind of history, but they are largely unsupported in Linux, which means you probably aren't dealing with one; if you are, the filesystem documentation can guide you through recovering your file.) It's possible that some forensics tool can at least attempt to recover some of the file's history, but I'm not sure of that (and it will probably fail if the old file's sectors have since been written over).
For the future, you can prepare in advance for similar problems by setting up incremental backups (this can be done pretty easily with rsync), but recovery is still limited to the specific times you schedule the backup script to run.

Hard-coding paths in Linux

Coming from a Windows background here.
Is it an acceptable practice for GUI Linux applications to store their data files (not user-specific) at hard-coded locations (e.g. /etc/myapp/stuff)? I couldn't find any syscalls that would return the preferred directory for app data. Is there a convention out there as to what goes where?
/opt/appname/stuff according to the Linux Filesystem Hierarchy Standard
Your distribution's packaging system likely provides ways to handle common installation paths. What distribution are you using?
Generally speaking, yes there is a convention. On most Linux systems, application configuration files are typically located at /etc/appname/. You'll want to consult the LSB (Linux Standard Base) and the Linux FHS (Filesystem Hierarchy Standard) for their respective recommendations.
Also, if you are targeting your application at a specific Linux distro, then that distro's vendor probably has their own specific recommendations as far as packaging and related conventions are concerned. You'll want to look at your distro vendor's developer pages for more information.
Configuration files for processes with elevated privileges are generally stored in /etc. Data files for processes with elevated privileges (web server, mail server, chat server, etc.) are generally stored in /var. And that's where consistency ends. Some folks say you start with the location to store them (/etc or /var), then add an appname subfolder for your app, then continue from there as necessary.
If you're not a system daemon with elevated privileges, your only consistent choice is a dot directory in the launching user's home directory. The freedesktop.org (XDG) standards specify ~/.config for per-user configuration, and ~/.cache for replaceable static and/or generated data you need to save.
Looking at my Home Directory, a few key dot directories I have are:
~/.cache
~/.config
~/.irssi
~/.maildir
~/.mozilla
~/.kde
~/.ssh
~/.vnc
[edit]
While not a syscall, the XDG specifications I reference are at http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
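Resolving the per-user directory that spec defines is just an environment lookup with a fallback; a minimal sketch, where myapp is an illustrative application name:

    /* Resolve the per-user config directory per the XDG Base Directory
     * Specification: $XDG_CONFIG_HOME if set, else ~/.config.
     * "myapp" is an illustrative application name. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *base = getenv("XDG_CONFIG_HOME");
        const char *home = getenv("HOME");
        char path[4096];

        if (base && *base)
            snprintf(path, sizeof path, "%s/myapp", base);
        else
            snprintf(path, sizeof path, "%s/.config/myapp", home ? home : "");

        printf("per-user config dir: %s\n", path);
        return 0;
    }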
There are certain conventions.
System-wide, readable/editable (text-based) configuration files go in /etc/appname/.
System-wide, per-machine binary data files that change (e.g. binary databases) go in /var/*/appname/; /var/cache/appname/, /var/spool/appname/ and /var/lib/appname/ are the most common.
System-wide binary data files that could notionally be shared between machines (e.g. graphics and sound files) go in /usr/share/appname/.
The full paths that Unix/Linux/GNU applications use to store config files and other data are usually set when an application is configured prior to compilation. These paths then get hard-coded into the compiled binary (you can see examples of this by running strings(1) over some existing executables).
That is, these types of paths are build-time configurable, not run-time configurable by default. Many apps will support command line options to specify where a configuration file is, and that configuration file will usually contain paths for other application resources. This allows an application to run with minimal configuration (built-in paths) but also allows a site to customise the paths completely.
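As an illustration of build-time path configuration, the path can be baked in as a preprocessor macro and overridden on the compiler command line; SYSCONFDIR and myapp below are hypothetical names:

    /* The config path is fixed at build time and can be overridden when
     * compiling, e.g.:  gcc -DSYSCONFDIR='"/usr/local/etc"' myapp.c
     * SYSCONFDIR and myapp are hypothetical names. */
    #include <stdio.h>

    #ifndef SYSCONFDIR
    #define SYSCONFDIR "/etc"        /* default baked into the binary */
    #endif

    int main(void)
    {
        printf("would read configuration from %s\n",
               SYSCONFDIR "/myapp/myapp.conf");
        return 0;
    }

Running strings(1) over the resulting binary reveals the baked-in path, as the answer notes.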
Under Linux, only the basic services (opening a file, networking, interprocess communication, etc.) are provided as system calls. The rest is done using libraries.
If you are coding a GUI application, you should look into your toolkit's documentation to see if it provides a mechanism for managing defaults. Both KDE and GNOME have one, for instance.

User-dependent file content

For some unfortunate reasons, I have to convert a proprietary, binary library from a one-user-per-workstation to a multi-user-per-workstation setup.
Current setup: a user runs a program linked against a library. This library reads a system-wide configuration file (using a hard-coded path, i.e. /usr/local/thelib/main.conf) which itself contains several paths to several working directories. The working directories themselves contain a bunch of user data files.
Desired outcome: being able to manage several users on the same workstation. Of course, a user shall not be able to read or alter any other user's data through the library, which should be taken care of by Unix permissions if I manage to feed the library a different working directory for each user.
The library might be used by several users at the same time, so symlinking the configuration file in /usr/local at runtime is not an option.
I was thinking of using FUSE to provide different content for the file /usr/local/thelib/main.conf depending on an environment variable or the current Unix user. The environment variable would then be used as a switch inside the code producing the configuration file.
I'm comfortable using Python, Perl or C.
The workstation is running an up-to-date GNU/Linux Debian or Ubuntu distribution with a pretty recent kernel.
So, what do you think:
would you use FUSE?
would you produce another kind of wrapper (using chroot(2) was suggested below by janneb)?
use something else allowed by Linux?
I kinda know that I could produce something functional, but I'd like the community's advice since I don't want to reinvent the wheel right now.
Thanks.
Florian
You could use LD_PRELOAD to load a small stub that intercepts open() calls and opens ~/.main.conf instead (assuming the library is a shared object). Then, in your application startup routine, check that LD_PRELOAD is set to the correct value and, if not, restart the app with the correct environment.
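A minimal sketch of such a stub, assuming the library opens the file via open(2) (glibc's fopen() may go through open64()/openat() internally, which would need the same hooks). Build it with gcc -shared -fPIC -o stub.so stub.c -ldl and launch the app with LD_PRELOAD=$PWD/stub.so:

    /* LD_PRELOAD stub: redirect opens of the hard-coded config path to a
     * per-user file.  Assumes the library calls open(2); glibc's fopen()
     * may use open64()/openat() instead, which would need the same hook.
     * The static buffer keeps the sketch simple; it is not thread-safe. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TARGET "/usr/local/thelib/main.conf"

    int open(const char *path, int flags, ...)
    {
        static int (*real_open)(const char *, int, ...);
        if (!real_open)
            real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

        int mode = 0;
        if (flags & O_CREAT) {       /* mode argument only present with O_CREAT */
            va_list ap;
            va_start(ap, flags);
            mode = va_arg(ap, int);
            va_end(ap);
        }

        const char *home = getenv("HOME");
        static char redirect[4096];
        if (home && strcmp(path, TARGET) == 0) {
            snprintf(redirect, sizeof redirect, "%s/.main.conf", home);
            path = redirect;         /* substitute the per-user config */
        }
        return real_open(path, flags, mode);
    }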
A simple way would be for the app to call chroot() before calling the library init function(s). E.g. if you chroot into $HOME/theapp, then each user can have their own private config file in $HOME/theapp/usr/local/thelib/main.conf as well as private working dirs somewhere under $HOME/theapp.
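A sketch of that approach; note that chroot(2) requires root (CAP_SYS_CHROOT), so such a wrapper would typically run setuid-root and drop privileges immediately afterwards:

    /* Wrapper idea: enter a per-user root before the library initializes,
     * so its hard-coded /usr/local/thelib/main.conf resolves to a private
     * copy under $HOME/theapp.  chroot(2) needs CAP_SYS_CHROOT, so this
     * would typically run setuid-root and drop privileges right away. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *home = getenv("HOME");
        char root[4096];
        snprintf(root, sizeof root, "%s/theapp", home ? home : "");

        if (chroot(root) != 0 || chdir("/") != 0) {
            perror("chroot");               /* requires elevated privileges */
            return 1;
        }
        if (setuid(getuid()) != 0) {        /* drop root before continuing */
            perror("setuid");
            return 1;
        }
        /* ... call the library init here; it now sees the private tree ... */
        return 0;
    }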

Find the files related to a software package

I have a question. I am doing a project related to the system-restore concept in Linux. There I am planning to perform application-wise rollback in case of failure. Is there any way to figure out all the files used by an application on the system?
OK, I will make it a little clearer. For instance, consider the Firefox application. When it is installed, many files are written from the .deb file to folders like /etc, /usr, /opt, etc. In Windows, all the files are installed in one folder under Program Files, while in Linux they are not. So is there any way to figure out the files that belong to a piece of software?
Thanks.
Well, this can cover several things.
If you mean: which files are provided by the installation of your application? Then the answer is: use decent package management, provide your software as an rpm/deb/whatever package, and uninstallation will take care of the rest.
If you mean: which libraries are referenced by your application? Then you can use ldd; this will tell you which dynamic libraries are used when executing this application.
If you mean: which files is my application actively using? Then take a look at the output of lsof (lsof = list open files), or alternatively ls /proc/<pid>/fd/; this will show all file descriptors opened by your application (files, sockets, pipes, ttys, ...).
Or you could use all of the above.
One thing you can't track (unless you log this yourself) is which files have been created by your application during its lifetime.
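The ls /proc/<pid>/fd/ variant mentioned above is also easy to do programmatically; a minimal sketch that resolves the fd symlinks, much like a stripped-down lsof:

    /* List the files a running process has open by resolving the symlinks
     * in /proc/<pid>/fd, which is essentially what lsof reports.
     * Usage: ./openfiles <pid> */
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        char dir[64];
        snprintf(dir, sizeof dir, "/proc/%s/fd", argv[1]);

        DIR *d = opendir(dir);
        if (!d) { perror(dir); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char link[PATH_MAX], target[PATH_MAX];
            snprintf(link, sizeof link, "%s/%s", dir, e->d_name);
            ssize_t n = readlink(link, target, sizeof target - 1);
            if (n >= 0) {
                target[n] = '\0';
                printf("fd %s -> %s\n", e->d_name, target);
            }
        }
        closedir(d);
        return 0;
    }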
Determining all the files installed along with the app depends on the package manager. All the ones I've dealt with (apt, pacman) have this capability.
To determine all the files currently open by an application, use lsof.
Well, that depends...
Most Linux systems have some kind of package management software, like aptitude in Debian and Ubuntu. There you have information about what belongs to a package. You might be able to use that information. That does not cover files created while the apps run, though.
If you are using an RPM-based distro,
# rpm -Uvh --repackage pkg-1-1.i386.rpm
will repackage the old files and upgrade in a transaction, so you can roll back later if something goes wrong. For example, to roll back to yesterday's state:
# rpm -Uvh --rollback yesterday
See this article for other examples.
