Is there a way to determine the number of accesses to a specific file, and the process that accessed it, without having third-party software store the access info? I'm looking for something built into Linux-based operating systems. The date of the last change is easy to get, but I need at least how many times the file has been accessed since it was created.
Can anyone shed some light on this file access information? Is it stored somewhere?
No, it is not stored. That would be a very odd feature.
You can monitor access to a file and count what you need yourself.
You can write your own program to do this with inotify. Here is a rather nice introduction.
Another option is the Linux audit subsystem. You set up rules telling the kernel which files you are interested in, and later check the logs for whichever statistics you need. Here is a short tutorial.
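To make the inotify route concrete, here is a minimal sketch that calls libc directly via ctypes (a real tool would more likely use a binding library and loop over events as they arrive). Note the limitation: you can only count accesses while the watcher is running; there is no historical count since file creation.

```python
import ctypes, ctypes.util, os, struct

IN_OPEN = 0x20  # event mask bit from <sys/inotify.h>

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)

def count_opens(directory, trigger):
    """Watch `directory` for IN_OPEN events, run `trigger()`, and return
    how many opens the kernel queued while it ran."""
    fd = libc.inotify_init()
    if fd < 0:
        raise OSError(ctypes.get_errno(), "inotify_init failed")
    try:
        libc.inotify_add_watch(fd, directory.encode(), IN_OPEN)
        trigger()
        # events were queued synchronously as the opens happened
        buf = os.read(fd, 4096)
        opens, offset = 0, 0
        while offset < len(buf):
            # struct inotify_event: int wd; uint32 mask, cookie, len; char name[]
            wd, mask, cookie, length = struct.unpack_from("iIII", buf, offset)
            if mask & IN_OPEN:
                opens += 1
            offset += 16 + length
        return opens
    finally:
        os.close(fd)
```

A daemon built on this could persist the counts itself, which is exactly the "count what you need yourself" approach from the answer.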
I'm writing a Linux app that has a time-limited demo. There's not going to be a server the app can phone home to, so I need to store data on the system in order to figure out whether the demo has been started and how much time is remaining. The location of this data needs to be obfuscated so that a non-power user is unlikely to find it, even though I'm aware Linux users skew more towards power users than those of other operating systems.
I already know the logistics of how to implement a time limited demo as long as I can store data secretly somewhere on the system, but I'm not sure how to do that last part. The requirements here are:
The data needs to be globally readable and writable so that any user account can access it and modify it (as the demo applies system-wide and not on a per-user basis)
Wherever the data is stored or whichever method I use needs to be available in RHEL, Fedora and Debian based distributions. Even better if it's basically guaranteed to be available in all distros.
Is there any way to accomplish this?
Well, after doing quite a bit more research, I've concluded that the only place to put that kind of data is /var/tmp. Not exactly secret or obfuscated, but there's no other place in the filesystem that's globally writable and isn't cleared out after a reboot.
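As a sketch, the first-run logic could look like this. The path and file name here are invented for illustration, and a real demo would also want to obfuscate or sign the contents so it isn't trivially edited or deleted:

```python
import json, os, time

STATE_PATH = "/var/tmp/.sysid-cache"  # hypothetical obfuscated name

def demo_started_at(path=STATE_PATH):
    """Return the demo start time, recording it on first run."""
    try:
        with open(path) as f:
            return json.load(f)["started"]
    except (OSError, ValueError, KeyError):
        started = time.time()
        with open(path, "w") as f:
            json.dump({"started": started}, f)
        # world read/write so any account can use the demo system-wide
        os.chmod(path, 0o666)
        return started
```

Remaining time is then just the configured demo length minus `time.time() - demo_started_at()`.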
What resources do I need to go through to fully understand how this works? I've looked online, but all I found were software solutions rather than explanations of how the software actually detects malware.
I want to be able to detect a malware that's in my computer. Let's say there's a trojan horse in my Computer. How would I write a program to detect it?
I'm a beginner at Information Security.
Thanks in advance.
Well most endpoint security products have:
- an on-demand scanning component.
- a real-time scanning component.
- hooks into other areas of the OS to inspect data before it is "released", e.g. the network layer for network-borne threats.
- a detection engine, including file extractors.
- detection data that can be updated.
- run-time scanning elements.
There are many layers and components that all need to work together to increase protection.
Here are a few scenarios that would need to be covered by a security product.
Malicious file on disk - static scanning - on-demand. Maybe the example you have.
A command-line/on-demand scanner would enumerate each directory and file based on what was requested to be scanned. The process would read files and pass streams of data to a detection engine. Depending on the scanning settings configured and the exclusions, files would be checked. The engine can understand/unpack the files to check types. Most have a file type detection component. It could just look at the first few bytes of a file, as per https://linux.die.net/man/5/magic. Quite basic, but it gives you an idea of how you can classify a file type before carrying out further classification. It could be as simple as a few checksums of a file at different offsets.
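A toy version of that first-few-bytes check, using a handful of well-known magic numbers (a real engine keeps a much larger table and also checks at multiple offsets):

```python
# well-known magic numbers, as catalogued in magic(5)
MAGIC = {
    b"\x7fELF": "ELF executable",
    b"MZ": "PE/DOS executable",
    b"PK\x03\x04": "ZIP archive",
    b"%PDF": "PDF document",
}

def classify(path):
    """Guess a file's type from its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown"
```

Knowing the type up front lets the engine pick the right unpacker or extractor before running any further checks.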
In the case of your example of a trojan file: assuming you are your own virus lab, maybe you have seen the file before, analyzed it, and written detection for it because you know it is malicious. Maybe you can just checksum part of the file and publish this data to your product. So you have virusdata.dat, in which you might have a checksum and a name for it. E.g.
123456789, Troj-1
You then have a scanning process that loads your virus data file at startup and opens the file for scanning. Your scanner checksums the file as per the lab scenario and you get a match with the data file. You display the name as it was labelled. This is the most basic example really, and not that practical, but hopefully it serves some purpose. Of course you will see the problem with false positives.
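That virusdata.dat scenario fits in a few lines. CRC-32 here is just a stand-in for whatever checksum the lab would really publish:

```python
import zlib

def load_virus_data(lines):
    """Parse 'checksum, name' records like the virusdata.dat example above."""
    db = {}
    for line in lines:
        checksum, name = line.split(",", 1)
        db[int(checksum)] = name.strip()
    return db

def scan(path, db):
    """Return the detection name on a match, or None if the file is clean."""
    with open(path, "rb") as f:
        crc = zlib.crc32(f.read())
    return db.get(crc)
```

A whole-file checksum is trivially evaded by flipping one byte, which is why real engines checksum regions at chosen offsets and combine many signals, and why false positives have to be managed carefully.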
Other aspects of a product include:
A process writing a malicious file to disk - real-time.
In order to "see" file access in real time and get into that stack you would want a file system filter driver - a file system minifilter, for example, on Windows: https://msdn.microsoft.com/en-us/windows/hardware/drivers/ifs/file-system-minifilter-drivers. This guarantees that you get access to the file before it's read or written. You can then scan the file before it's written or read by the process, which gives you a chance to deny access and alert. Note that in this scenario you are blocking until you come to a decision whether to allow or deny the access. It is for this reason that on-access security products can slow down file I/O. They typically have a number of scanning threads that a driver can pass work to. If all threads are busy scanning then you have a bit of an issue. You need to handle things like zip bombs, etc., and bail out before tying up a scanning engine/CPU/memory.
A browser downloading a malicious file.
You could rely on the on-access scanner preventing the browser process from writing a file to disk, but browsers can run scripts before anything hits the file system. As a result you might want to create a component to intercept traffic before it reaches the web browser. There are a few possibilities here: do you target specific browsers with plugins, or do you go down a level and intercept the traffic with a local proxy process? Options include hooking the network layer with a Layered Service Provider (LSP) or WFP (https://msdn.microsoft.com/en-us/windows/hardware/drivers/network/windows-filtering-platform-callout-drivers2). Here you can redirect traffic to an in- or out-of-process proxy to examine it. SSL traffic poses an issue unless you're going to crack open the pipe - again, more work.
Then there is run-time protection, where you don't detect a file with a signature but instead apply rules to check behavior. For example, a process that creates a start-up registry location for itself might be treated as suspicious. Maybe that's not enough to block the file on its own, but what if the file also didn't have a valid signature, was in the user's temp location, was created by AutoIt, and has no file version? All of these properties can give weight to the decision of whether it should be run; these rules form the proprietary data of the security vendor and are constantly being refined. Maybe you start by detecting applications as suspicious and give the user a warning so they can authorize them.
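A crude sketch of that kind of weighting. The indicator names, weights, and threshold here are all invented, since the real tables are exactly the proprietary data mentioned above:

```python
WEIGHTS = {                        # all values hypothetical
    "startup_registry_entry": 40,  # creates a start-up registry location
    "invalid_signature": 25,       # no valid code signature
    "runs_from_temp_dir": 20,      # lives in the user's temp location
    "no_version_info": 15,         # missing file version resource
}
THRESHOLD = 70

def verdict(indicators):
    """Sum the weights of observed behaviors and compare to a threshold."""
    score = sum(WEIGHTS.get(i, 0) for i in indicators)
    return "suspicious" if score >= THRESHOLD else "allow"
```

No single property blocks the file; it's the combination that tips the decision, which is why vendors tune these weights constantly against false positives.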
This is a huge topic that touches many layers. Hopefully this is the sort of thing you had in mind.
Among the literature, "The Art of Computer Virus Research and Defense" by Peter Szor is definitely a must-read.
I am writing a small tracing mechanism for academic purposes. The program tracks another process using ptrace, and I need to compare different ways to access its memory to retrieve information such as system call arguments.
Can you tell me where I can find a comprehensive list, or just tell me which mechanisms exist?
Thank you.
I am working on a similar project. You can try Vmtrace or PageTrace; they provide the pages accessed by each process.
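Besides page-level tracing, the mechanisms usually compared for reading a tracee's memory on Linux are ptrace with PTRACE_PEEKDATA, the /proc/<pid>/mem file, and process_vm_readv(2). Here is a minimal sketch of the /proc route, reading this process's own memory so no attach step is needed; for a foreign pid you must first be its ptrace tracer:

```python
import ctypes, os

def read_process_memory(pid, address, size):
    """Read `size` bytes at `address` out of /proc/<pid>/mem.
    Reading another process requires being its tracer (ptrace attach);
    reading our own address space needs no attachment."""
    with open("/proc/%d/mem" % pid, "rb") as mem:
        mem.seek(address)
        return mem.read(size)

# demonstrate on our own address space with a buffer at a known address
payload = ctypes.create_string_buffer(b"syscall args live here")
data = read_process_memory(os.getpid(), ctypes.addressof(payload), 22)
```

For bulk reads, /proc/<pid>/mem and process_vm_readv are typically much faster than word-by-word PTRACE_PEEKDATA, which is one of the comparisons worth measuring in your project.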
I am searching for a TrueCrypt alternative that has an API to programmatically access the files. Does anyone know a solution?
The API should support the listing, creating, changing and deleting of files.
Diskcryptor does not have an API, but it is GPL.
If I may, I believe what you are asking for is an abstract file system library. I understand that you want to load a TrueCrypt or similar container and list its contents. When it is opened, such a container is just raw bytes representing sectors. On top of the encryption, such an API would see only raw sectors, and it would have to make sense of them with a corresponding sector-level API.
You can see the problem in another way: how would you write a program, such as zip, that can present such information for a zip file - a very common container, if you will?
So the API you are looking for would need to achieve three things:
Understand the container's encryption scheme (possibly multiple versions of it)
Understand the sector format of the embedded filesystem
Provide a user friendly API.
I asked myself the same questions a while ago, scoured the net for answers, and this answer is the sum of what I have found so far. I hope you find it a valid answer, even if it's not actionable.
Not yet, anyways ;)
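To make the shape of the third layer concrete, here is a sketch of the user-facing API only, with the question's listing/creating/changing/deleting operations. The class and method names are invented, and an in-memory dict stands in for the two lower layers (decryption and sector parsing):

```python
class ContainerFS:
    """User-facing layer of the stack described above. A real implementation
    would decrypt sectors and parse the embedded filesystem underneath
    instead of holding a dict."""

    def __init__(self):
        self._files = {}

    def list(self):
        return sorted(self._files)

    def create(self, name, data=b""):
        self._files[name] = data

    def write(self, name, data):
        self._files[name] = data

    def read(self, name):
        return self._files[name]

    def delete(self, name):
        del self._files[name]
```

The hard part is everything hidden behind that dict: the encryption scheme and the sector format of whichever filesystem the container embeds.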
Our SolFS OS Edition might be what you are looking for if you plan to create new software. It's available for Windows, MacOS X, Linux and FreeBSD.
Java Filesystem Provider with integrated encryption : https://github.com/cryptomator/cryptofs
If I want to develop a registry-like system for Linux, which Windows Registry design failures should I avoid?
Which features would be absolutely necessary?
What are the main concerns (security, ease-of-configuration, ...)?
I think the Windows Registry was not a bad idea; it's just that the implementation didn't fulfill the promises. A common place for configuration - including, for example, Apache, database, or mail server config - wouldn't be a bad idea and might improve maintainability, especially if it had options for (protected) remote access.
I once worked on a kernel-based solution but stopped because others said that registries are useless (because the Windows Registry is)... what do you think?
I once worked on a kernel-based solution but stopped because others said that registries are useless (because the Windows Registry is)... what do you think?
A kernel-based registry? Why? Why? A thousand times, why? Might as well ask for a kernel-based musical postcard or inetd, for all the point there is in putting it in there. If it doesn't need to be in the kernel, it shouldn't be in the kernel. There are many other ways to implement a privileged process that don't require deep hackery like that...
If I want to develop a registry-like system for Linux, which Windows Registry design failures should I avoid?
Make sure that applications can change many entries at once in an atomic fashion.
Make sure that there are simple command-line tools to manipulate it.
Make sure that no critical part of the system needs it, so that it's always possible to boot to a point where you can fix things.
Make sure that backup programs back it up correctly!
Don't let chunks of executable data be stored in your registry.
If you must have a single repository, at least use a proper database, so you have tools to back it up, restore it, recover it, etc., and you can interact with it without a new set of custom APIs.
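The atomicity point above deserves a sketch: on POSIX, rename within a filesystem is atomic, so the usual trick is to write the whole new file and swap it in, so readers see either the old configuration or the new one, never a half-applied change. JSON here is only a stand-in format:

```python
import json, os, tempfile

def update_entries(path, changes):
    """Apply several entries at once, atomically."""
    try:
        with open(path) as f:
            config = json.load(f)
    except FileNotFoundError:
        config = {}
    config.update(changes)
    # write the new version next to the old one, then swap atomically
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(config, f)
    os.replace(tmp, path)  # rename(2) is atomic on POSIX
```

The temp file must live on the same filesystem as the target, which is why it is created in the same directory.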
The first one that comes to my mind: you need to avoid orphaned registry entries. At the moment, when you delete a program you also delete the configuration files under some directory; with a registry system you need to make sure that when a program is deleted, its configuration in the registry is deleted as well.
IMHO, the main problems with the windows registry are:
Binary format. This loses you the availability of a huge variety of very useful tools. With a binary format, tools like diff, search, and version control have to be specially implemented, rather than using the best of breed, which are capable of operating on the common substrate of text. Text also offers the advantage of trivially embedded documentation/comments (also greppable), and easy programmatic creation and parsing by external tools. It's also more flexible - sometimes configuration is better expressed with a full Turing-complete language than by trying to shoehorn it into a structure of keys and subkeys.
Monolithic. It's a big advantage to have everything for application X contained in one place. Move to a new computer and want to keep your settings? Just copy the file. While this is theoretically possible with the registry, as long as everything is under a single key, in practice it's a non-starter. Settings tend to be diffused in various places, and it is generally difficult to find where. This is usually given as a strength of the registry, but "everything in one place" generally devolves into "everything put somewhere in one huge place".
Too broad. It's easy to think of it as just a place for user settings, but in fact the registry becomes a dumping ground for everything. 90% of what's there is not designed for users to read or modify; it is in fact a database of the serialized form of various structures used by programs that want to persist information. This includes things like the entire COM registration system, installed apps, etc. This is stuff that needs to be stored, but the fact that it's mixed in with user-configurable settings and things you might want to read dramatically lowers its value.