How to make /proc-like files - linux

I want to make a service that runs like a daemon. It runs in the background and downloads things here and there.
There isn't much I want to present to the user, just some statistics: documents downloaded per day/week, how many errors occurred, etc.
So I wanted to make a "virtual file" where this information is displayed, like the files in the /proc directory on GNU/Linux.
My question is: what is the convention for doing this? Where can I put this file? (/proc is just for Linux kernel stats.) Can I somehow create a "virtual file" (like a socket), or is it better to write a real file? Or is this idea stupid?
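For what it's worth, the simplest of the options above (writing a real file) can look roughly like the sketch below. The path /run/mydownloader/stats, the daemon name, and the field names are all made up for illustration; a user-level daemon could write under $XDG_RUNTIME_DIR or /tmp instead.

    import json
    import os
    import tempfile
    import time

    # Hypothetical location; a system daemon would typically get a directory
    # such as /run/mydownloader/ created for it at install or start time.
    STATS_DIR = os.environ.get("STATS_DIR", "/run/mydownloader")
    STATS_FILE = os.path.join(STATS_DIR, "stats")

    def write_stats(downloaded_today, downloaded_week, errors):
        """Atomically replace the stats file so readers never see a partial write."""
        os.makedirs(STATS_DIR, exist_ok=True)
        payload = {
            "updated": int(time.time()),
            "downloaded_today": downloaded_today,
            "downloaded_week": downloaded_week,
            "errors": errors,
        }
        fd, tmp_path = tempfile.mkstemp(dir=STATS_DIR)
        try:
            with os.fdopen(fd, "w") as tmp:
                json.dump(payload, tmp, indent=2)
            os.replace(tmp_path, STATS_FILE)  # atomic rename on POSIX
        except BaseException:
            os.unlink(tmp_path)
            raise

    if __name__ == "__main__":
        write_stats(downloaded_today=42, downloaded_week=310, errors=3)

The atomic rename means anyone reading the file never sees a half-written update.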

Related

How to share files between two (desktop) applications in a secure way

The problem is that I need to share files between two programs, but I don't want those files to be accessible to the user of the computer or to any programs other than these two. The flow of the files is like this: Program A (which I will code myself) receives a file from the internet and puts it somewhere on the computer. Then Program A calls Program B (which I didn't code and can't change). Program B reads the downloaded file, does some things with it, and produces another file, which it also puts somewhere on the computer. Then Program A reads that file and uploads it to the internet.
What I have found
I thought that Windows Sandbox might be interesting, but the problem is that it's only available on Windows 10 Pro and Windows 11, and that it is virtualised, while performance is quite important for Program B... So any virtualised solution is not very usable unless it is close to native performance.
Looking beyond Windows, I found FreeBSD jails. But these seem more focused on preventing applications inside the jail from accessing files outside the jail than on preventing programs outside the jail from reading and writing files inside it. So I actually need the opposite...
Another interesting concept was to keep the files in RAM (like mmap on Linux), but since I can't change Program B, I don't know how to implement that. Is there some kind of container application that encapsulates Program B's I/O and redirects it to a file in RAM?
Does anyone have some suggestions? Thanks!
You can't really prevent the user/owner of the computer from reading the file if you are storing it on their disk. You can try to make it more difficult to access the content (which is what DRM does), but ultimately the user can always bypass your controls given sufficient motivation and resources. Even if you store the files purely in RAM, a user with administrative permissions can dump your program's memory and extract the files from there.
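If the keep-it-in-RAM idea from the question is still worth pursuing as a partial measure, here is a minimal sketch assuming Program B can at least be told which paths to read and write; "program_b" and the file names are placeholders. As noted above, this only keeps the data off persistent storage and away from other unprivileged users, not from an administrator.

    import os
    import stat
    import subprocess
    import tempfile

    # /dev/shm is a RAM-backed tmpfs on most Linux systems, so files placed here
    # never need to touch the disk. This does NOT hide them from root or the
    # machine's owner; it only keeps them off disk and away from other
    # unprivileged users.
    workdir = tempfile.mkdtemp(prefix="exchange-", dir="/dev/shm")
    os.chmod(workdir, stat.S_IRWXU)  # 0700: only our own user can enter it

    input_path = os.path.join(workdir, "input.dat")
    output_path = os.path.join(workdir, "output.dat")

    with open(input_path, "wb") as f:
        f.write(b"...downloaded payload...")

    # "program_b" and its arguments stand in for the third-party tool.
    subprocess.run(["program_b", input_path, output_path], check=True)

    with open(output_path, "rb") as f:
        result = f.read()  # upload this, then clean up the directory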

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (with a Linux guest) on my main physical server (which also runs Linux) using VirtualBox (or similar), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you.
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail", a user cannot see "outside" the jail. You've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is that YOU decide what the developer can and can't see).
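For clarity, the call itself is tiny; all the work is in populating the jail. A minimal illustration of the chroot() call described above (not the sshd configuration), with /srv/jail as an example path:

    import os

    # Must be run as root, and /srv/jail must already contain whatever
    # binaries and libraries the jailed process needs.
    os.chroot("/srv/jail")
    os.chdir("/")  # important: otherwise the old working directory leaks out

    # From here on, "/" refers to /srv/jail for this process and its children.
    print(os.listdir("/"))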
You can use a directory for the chroot, or my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system into a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file, for example with dd if=/dev/zero of=jailFile bs=1M count=5120, then run mkfs.ext4 jailFile. Finally, mount it and include any files you wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search, there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a plain directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSHServer.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You could add the individual as a non-root user, and then work towards providing specific access to the files you would like them to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.

Hooking into Windows File System and Insert Virtual File System

I'm working on an application similar to a program called Mod Organizer. Essentially, the program lets people download and install mods for the game Skyrim. However, Mod Organizer does something interesting: rather than installing the mods directly into the game's data directory (like other mod managers), MO installs each mod into its own directory in some other arbitrary location and then loads all the mods together once the game launches. This is important because it makes mod management much less of a hassle.
My question is: how might I create this on-the-fly file system, or make Windows "pretend" that a directory full of mod files is somewhere else?
At first I thought of creating symlinks from my code, but this guide put me onto the trail of "hooking", and specifically recommended trying EasyHook. While I think I can understand the underlying concept of hooking (essentially intercepting signals from the OS and redirecting them for whatever purpose), I don't really know how to make the hook actually redirect files.
If anyone knows a good resource for this kind of hooking or has a better approach to my problem, I'd appreciate the help.
What you have described is done with a filesystem filter driver. This driver intercepts requests to the filesystem and inserts additional information, such as telling the system about files and directories which don't really exist on the disk. If the files are present somewhere on the disk, the request can simply be redirected to the existing file or directory.
A filesystem filter driver is a kernel-mode driver and is not easy to implement. You can use a pre-created driver that lets you perform these tasks through a user-mode API, e.g. our CBFS Filter.
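For completeness, the simpler symlink idea mentioned in the question (not the filter-driver approach described in this answer) can be sketched as follows. All paths are examples, and on Windows creating symlinks requires administrator rights or Developer Mode; this also does not give the per-profile isolation that Mod Organizer provides.

    import os

    # Link each mod's files into the game's data directory instead of
    # virtualising the filesystem. Paths are placeholders.
    MODS_ROOT = r"C:\MyMods"
    GAME_DATA = r"C:\Games\Skyrim\Data"

    for mod in os.listdir(MODS_ROOT):
        mod_dir = os.path.join(MODS_ROOT, mod)
        for name in os.listdir(mod_dir):
            src = os.path.join(mod_dir, name)
            dst = os.path.join(GAME_DATA, name)
            if not os.path.exists(dst):  # skip files the game already has
                os.symlink(src, dst, target_is_directory=os.path.isdir(src))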

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could probably work just fine. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've seen a few other people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place more suited for application-specific storage? Or would /tmp be the place to keep these (since they are, after all, temporary)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say you are going to run your process as a system daemon. SaltStack creates /var/run/salt at installation time and changes the owner to salt, so that later on it can be used without root privileges.
I also checked the Filesystem Hierarchy Standard, and even though it isn't strictly binding, even it says:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for opening sockets and putting .pid files in, instead of /var/run where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can also be used. For more info, check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation with proper permissions and put their sockets/pid files in there, while daemons run by the user (such as pulseaudio) should use /run/user/<userid>/. Other options are /tmp and /var/tmp.
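A minimal sketch of that convention, assuming a hypothetical application name "myapp" and falling back to /tmp when XDG_RUNTIME_DIR is not set:

    import errno
    import os

    # Per-user runtime directory: systemd-based distros export XDG_RUNTIME_DIR,
    # which normally points at /run/user/<uid>.
    runtime_dir = os.environ.get("XDG_RUNTIME_DIR", "/tmp")
    pipe_dir = os.path.join(runtime_dir, "myapp")      # "myapp" is a placeholder
    pipe_path = os.path.join(pipe_dir, "requests.fifo")

    os.makedirs(pipe_dir, mode=0o700, exist_ok=True)
    try:
        os.mkfifo(pipe_path, mode=0o600)
    except OSError as e:
        if e.errno != errno.EEXIST:  # fine if a previous run already created it
            raise

    # Every process can now agree on pipe_path instead of a name relative to
    # whatever its working directory happens to be.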

On Linux, file created by which program?

I have the same question as this post, only mine is about the Linux platform. I have a directory in my folder and I don't know which program created it. Is it possible to know?
Thanks
The same answer applies: unless the file itself has metadata containing that information (like some .doc files and such), you cannot know what created the file (unless you create a kernel module to intercept requests to create new files and check which application submitted the request, but that is probably not what you want to do).
The answer is the same as in the previous question -- generally, no.
However, you can look at the owner and group of this directory; if the program that creates it is a daemon (service) process, it might be running under its own user / group and thus the files / directories created might have those ownerships.
What does this say?
ls -l /path/to/the/directory
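The same check can be done programmatically; a small sketch using the placeholder path from the command above:

    import grp
    import os
    import pwd

    # Report the owner and group of a path, the same information "ls -l" shows.
    path = "/path/to/the/directory"
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    group = grp.getgrgid(st.st_gid).gr_name
    print(f"{path}: owned by {owner}:{group}")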
The answer is the same as the one for the question you linked. Linux doesn't record which program created a file. But, as the other answer said, you could create a monitor and record that information yourself.

Resources