How to share files between two (desktop) applications in a secure way on Linux

The problem is that I need to share files between two programs, but I don't want those files to be accessible to the user of the computer or to any programs other than these two. The flow of the files is like this: Program A (which I will code myself) receives a file from the internet and puts it somewhere on the computer. Then Program A calls Program B (which I didn't code and can't change). Program B reads the downloaded file, does some things with it, and produces another file, which it also puts somewhere on the computer. Then Program A reads that file and uploads it to the internet.
What I have found
I thought that maybe Windows Sandbox was interesting, but the problem with Windows Sandbox is that it's only available on Windows 10 Pro and Windows 11, and that it is virtualised, while performance is quite important for Program B... So any virtualised solution is not very usable unless it is close to native performance.
For Linux, I found FreeBSD jails (actually a FreeBSD feature rather than a Linux one). But jails seem more focused on preventing the applications inside the jail from accessing files outside it than on preventing programs outside the jail from reading and writing files inside it. So I actually need the opposite...
Another interesting concept was to keep the files stored in RAM, like mmap or a tmpfs on Linux, but since I can't change Program B, I don't know how to implement that. Is there some kind of container application that encapsulates the I/O of Program B and redirects it to a file in RAM?
Does anyone have any suggestions? Thanks!

You can't really prevent the user/owner of the computer from reading the file if you are storing it on their disk. You can try to make it more difficult to access the content (which is what DRM does), but ultimately the user can always bypass your controls given sufficient motivation and resources. Even if you store the files purely in RAM, a user with administrative permissions can dump your program's memory and extract the files from there.
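That said, you can raise the bar without resorting to virtualisation. Below is a minimal sketch in Node/TypeScript, assuming Linux with a tmpfs-backed /dev/shm and a hypothetical path for Program B: the files then live only in RAM, inside a directory that only the owning user can enter (though, as noted above, an administrator can still dump memory).

```
// A minimal sketch, not a hardened solution: the exchange directory lives
// on tmpfs (/dev/shm), so the files stay in RAM, and mode 0700 keeps other
// non-root users out. The path to Program B is hypothetical.
import { mkdtempSync, chmodSync, rmSync } from "node:fs";
import { spawnSync } from "node:child_process";
import { join } from "node:path";

const work = mkdtempSync("/dev/shm/exchange-"); // private RAM-backed scratch dir
chmodSync(work, 0o700);                         // owner-only, explicit for clarity

try {
  const input = join(work, "input.dat");   // written here by Program A's download step
  const output = join(work, "output.dat"); // to be produced by Program B

  // Run Program B (hypothetical binary) against the RAM-backed files.
  const result = spawnSync("/opt/programB/bin/programB", [input, output], {
    stdio: "inherit",
  });
  if (result.status !== 0) {
    throw new Error(`Program B exited with status ${result.status}`);
  }
  // ... Program A would now read `output` and upload it ...
} finally {
  rmSync(work, { recursive: true, force: true }); // leave nothing behind
}
```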

Related

RDWeb: Cannot run legacy VB6 DLL more than once at the same time

We have a VB6 program installed on all of our clients' local C drives, along with an associated VB6 DLL. The program was written back before my time, in the 90s. It was not designed to run off a server or to allow multiple users to access the same EXE at the same time, which is why it's on everyone's C drive. However, all running sessions of it refer to the same database on a separate SQL Server via ODBC. The database connectivity works fine.
OK, that's all history now that everyone is working remotely (COVID-19)!
Today, however, our clients all remote into a virtual server via RD Web; we want them to avoid using our VPN. We have TWO virtual servers allocated to RDWeb users, TS01 and TS02, licensed for up to 64 users. Every user is automatically allocated one of the two servers. If two people log in at the same time, one on TS01 and the other on TS02, everything is fine! It's when a third person logs in, is given either of the servers, and runs the program that it crashes, with this error:
The DLL is registered in both Computer\HKEY_CLASSES_ROOT\ and Computer\HKEY_LOCAL_MACHINE\SOFTWARE\, but not under HKEY_CURRENT_USER, which I think is necessary to make this a multi-user program in a server environment.
Converting the app is not an option, as we don't have a VB6 compiler. Do we need to wrap the DLL in "something"?
Any ideas on how to get this legacy program to run for multiple users are appreciated.
Thanks
Try installing/copying the VB program and related DLLs into each user's own folder (e.g. their home folder, with shortcuts pointing to those directories). If the program runs, it should update the database in the same way. Sometimes the simplest workaround is best: if each instance needs its own locked DLL working space, give it that (though it may cause memory issues later).
Please see this
https://stackoverflow.com/a/345154/12011019
and
https://learn.microsoft.com/en-us/archive/msdn-magazine/2005/april/simplify-app-deployment-with-clickonce-and-registration-free-com
Some DLLs are not designed to be shared, and this behaviour cannot be modified without reprogramming: a COM DLL can be in-process or out-of-process, and there can be many other issues. If it's not working, it's probably disallowed by design.
https://support.microsoft.com/en-au/help/911359/a-client-application-may-intermittently-receive-an-error-message-when
The shared DLLs that are used system-wide do not have this limitation, as they are designed to be used by many applications.
Please try it and comment on the behaviour.
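If the crashes come from conflicting COM registration, registration-free COM (the second link above) may be worth trying: you ship a manifest next to the EXE so each instance activates the DLL without consulting the registry. A minimal sketch of such a manifest follows; the file name, CLSID, and ProgID are placeholders you would replace with the DLL's real registered values.

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- LegacyApp.exe.manifest: placed next to the EXE. All identifiers below
     are hypothetical; copy the real CLSID/ProgID from the DLL's registration. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="LegacyApp" version="1.0.0.0"/>
  <file name="LegacyLib.dll">
    <comClass clsid="{00000000-0000-0000-0000-000000000000}"
              progid="LegacyLib.MainClass"
              threadingModel="Apartment"/>
  </file>
</assembly>
```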

Hooking into Windows File System and Insert Virtual File System

I'm working on an application similar to a program called Mod Organizer. Essentially what the program does is let people download and install mods to the game, Skyrim. However, Mod Organizer does something interesting; rather than install the mods directly to the game's data directory (like other mod managers), MO installs each mod to its own directory in some other arbitrary location and then loads all the mods together once the game launches. This is important because it makes mod managing much less of a hassle.
My question is: how might I create this on-the-fly file system, or make Windows "pretend" that a directory full of mod files is somewhere else?
At first I thought of creating symlinks with my code, but this guide put me onto the trail of "hooking," and specifically recommended trying EasyHook. While I think I can understand the underlying concept of hooking (essentially intercepting calls to the OS and redirecting them for whatever purpose), I don't really know how to make the hook actually redirect files.
If anyone knows a good resource for this kind of hooking, or has a better approach to my problem, I'd appreciate the help.
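For comparison, the symlink idea mentioned above might look like the sketch below, assuming one sub-directory per mod under a hypothetical C:\Mods root; directory links are created as junctions, which don't need admin rights on Windows.

```
// A rough sketch of the symlink approach, not Mod Organizer's actual method.
// Paths are hypothetical; assumes C:\Mods holds one sub-directory per mod.
import { readdirSync, statSync, symlinkSync, existsSync } from "node:fs";
import { join } from "node:path";

const modsRoot = "C:\\Mods";               // hypothetical mod storage location
const dataDir = "C:\\Games\\Skyrim\\Data"; // the game's real data directory

for (const mod of readdirSync(modsRoot)) {
  const modDir = join(modsRoot, mod);
  for (const entry of readdirSync(modDir)) {
    const source = join(modDir, entry);
    const target = join(dataDir, entry);
    if (!existsSync(target)) {
      // Junctions work for directories without admin rights; plain file
      // symlinks need admin or Developer Mode on Windows.
      const type = statSync(source).isDirectory() ? "junction" : "file";
      symlinkSync(source, target, type);
    }
  }
}
```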
What you have described is done with a filesystem filter driver. Such a driver intercepts requests to the filesystem and inserts additional information, for example telling the system about files and directories that don't really exist on the disk. If the files are present somewhere else on the disk, the request can simply be redirected to the existing file or directory.
A filesystem filter driver is a kernel-mode driver and is not easy to implement. You can use a pre-created driver that lets you perform these tasks through a user-mode API, e.g. our CBFS Filter.

NodeJS reading/writing files to network drive

I have a script that writes files to disk using fs.createWriteStream.
What I am trying to achieve now is to write those files to a shared network drive, with a directory like so: //hostname/scratch/reece
I am running this script on Windows, but the application will sit on Ubuntu/RHEL when I deploy it.
This is a crucial part of this script so any suggestions on how I can write to a network drive would be great.
The same would go for reading from a network drive and sending that back via HTTP.
Keep in mind there will likely be hundreds of thousands of requests to write to this drive through my Node.js API, so I would like to avoid relying on background processes to handle the file transfer.
Any ideas on approach?
You will have to connect the drive to your server using a technology appropriate for that particular OS (it may be different on Ubuntu vs. Windows). You can then address the drive through whatever mount mechanism the OS uses.
In Windows, you can use either a drive letter or a UNC path. On Ubuntu, perhaps a mounted network file system volume.
This is one case where you aren't likely to make the exact same setup work on Windows vs. Ubuntu. If you put the appropriate root path name into a config file, the rest of your code can probably be identical. Beyond this, it isn't clear what you're asking.
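A minimal sketch of that config-driven approach; the SHARE_ROOT environment variable and the paths are hypothetical, and the share must already be mounted or reachable by the OS.

```
// Writes through whatever mount the OS provides; only the root path differs
// between platforms. SHARE_ROOT and the default path are hypothetical.
import { createWriteStream } from "node:fs";
import { join } from "node:path";

// Windows: a UNC path such as //hostname/scratch/reece
// Linux:   a mount point such as /mnt/scratch/reece (e.g. a CIFS or NFS mount)
const shareRoot = process.env.SHARE_ROOT ?? "//hostname/scratch/reece";

export function writeToShare(name: string, data: Buffer): Promise<void> {
  return new Promise((resolve, reject) => {
    const out = createWriteStream(join(shareRoot, name));
    out.on("error", reject);
    out.on("finish", () => resolve());
    out.end(data);
  });
}
```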

Win32API/Win drivers: How to detect if a file is accessed

I would like to create something like a "file honeypot" on Windows.
The problem I would like to solve is this:
I need to detect that a file has been accessed (malware wanting to read it and send it over the internet) so I can react, but I do not know exactly how to tackle this.
I could periodically test the file, but I don't like this solution: I'd prefer something event-driven that doesn't bother the processor every few milliseconds. It could work, though, if the file is huge enough that it cannot be read completely between checks.
I could open the file exclusively myself and somehow detect when something else tries to access it, but I have no idea how to do this.
Any ideas on how to resolve this effectively? Maybe creating a specialized driver could help, but I have little experience with that.
Thanks
Tracking (and possibly preventing) filesystem access on Windows is accomplished using filesystem filter drivers. But you must be aware that kernel-mode code (rootkits, etc.) can bypass the filter driver stack and send requests directly to the filesystem. In that case only the filesystem driver itself can log or intercept the access.
I'm going to assume that what you're writing is a relatively simple honeypot. The integrity of the system on which you're running has not been compromised, there is no rootkit or filter driver installation by malware and there is no process running that can implement avoidance or anti-avoidance measures.
The most likely scenario I can think of is that a server process running on the computer is subject to some kind of external control which would allow files containing sensitive data to be read remotely. It could be a web server, a mail server, an FTP server or something else but I assume nothing else on the computer has been compromised. And the task at hand is to watch particular files and see if anything is reading them.
With these assumptions, a file system watcher will not help. It can monitor parts of the system for the creation of new files or the modification or deletion of existing ones, but as far as I know it cannot monitor for read-only access.
The only event-driven mechanism I am aware of is a filter driver. This is a specialised piece of driver software that can be inserted into the driver chain and monitor access to files. With the constraints above, it is a reliable solution to the problem at the cost of being quite hard to write.
If a polling mechanism is sufficient then I can see two avenues. One is to try to lock the file exclusively, which will fail if it is open. This is easy, but slow.
The other is to monitor the open file handles. I know it can be done because I know programs that do it, but I can't tell you how without some research.
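There is also a third, cruder poll, not one of the two above: watch the file's last-access timestamp. It is unreliable (NTFS frequently has last-access updates disabled, and Linux defaults to relatime), but it is cheap to sketch:

```
// Best-effort read detection by polling atime. Not reliable: last-access
// updates are often disabled or deferred by the OS. The bait path is hypothetical.
import { statSync } from "node:fs";

const honeypot = "C:\\bait\\passwords.xlsx"; // hypothetical bait file
let lastAccess = statSync(honeypot).atimeMs;

setInterval(() => {
  const atime = statSync(honeypot).atimeMs;
  if (atime > lastAccess) {
    lastAccess = atime;
    console.warn(`Honeypot read detected at ${new Date(atime).toISOString()}`);
    // react here: raise an alert, snapshot open processes, etc.
  }
}, 500); // poll twice a second; reads between atime updates can still be missed
```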
If my assumptions are wrong, please edit your question and provide additional information.

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently I'm creating all my pipes locally and keeping the applications in the same working directory. At some point these programs can (and will) be run from different directories, so I need to create the pipes in a known location where all of the applications can find them.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could work nicely. I've read in a few places that it's cleared on system shutdown (and that's fine; I have no problem re-creating the pipes when I start back up), but I've also seen people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place better suited for application-specific storage? Or would /tmp be the place to keep these (since they are, after all, temporary)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say you are going to run your process as a system daemon. SaltStack creates /var/run/salt at installation time and changes the owner to the salt user, so that it can later be used without root privileges.
I also checked the Filesystem Hierarchy Standard and, even though it isn't binding, it says of /var/run:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for sockets and .pid files instead of /var/run, where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can be used as well. For more info, check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation, with proper permissions, and put their sockets/pid files there, while daemons run by a user (such as pulseaudio) should use /run/user/<userid>/. Other options are /tmp and /var/tmp.
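Putting that together, here is a minimal sketch for a per-user pipe on a systemd-based distro, using $XDG_RUNTIME_DIR (which pam_systemd points at /run/user/<uid>) with a /tmp fallback; the application directory name is hypothetical.

```
// Creates a private FIFO in the per-user runtime directory. Node has no
// native mkfifo, so this shells out to mkfifo(1). "myapp" is hypothetical.
import { execFileSync } from "node:child_process";
import { existsSync, mkdirSync } from "node:fs";
import { join } from "node:path";

const runtimeDir = process.env.XDG_RUNTIME_DIR ?? "/tmp"; // /run/user/<uid> under systemd
const appDir = join(runtimeDir, "myapp");
mkdirSync(appDir, { recursive: true, mode: 0o700 }); // owner-only directory

const pipePath = join(appDir, "control.pipe");
if (!existsSync(pipePath)) {
  execFileSync("mkfifo", ["-m", "600", pipePath]); // mkfifo(1) from coreutils
}
console.log(`pipe ready at ${pipePath}`);
```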
