I'm studying security, and I would like to know: in a Windows or Unix-based OS environment, is there a way for anything (a program, or a user with some knowledge) to copy the entire contents of the computer's memory?
My worry is that a hacker could grab my decrypted data while it is loaded in memory, and I would like to know how to avoid that.
The hacker may be the user himself.
On Windows you can generate a crash dump that will contain nearly all memory (if not all memory) if you configure the system to generate a "Complete memory dump":
http://support.microsoft.com/kb/254649
Then you just need to cause a bugcheck.
The nice thing about dealing with a crash dump file is that the Debugging Tools for Windows (and other tools) know how to parse a lot of information out of the files.
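The KB article walks through the GUI, but the same switch lives in the registry under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl, so it can be flipped programmatically. As a rough sketch (my own illustration, not from the article): the C program below sets CrashDumpEnabled to 1 ("Complete memory dump"); it needs administrator rights, and the setting only takes effect after a reboot.

    /* Sketch: enable "Complete memory dump" by writing the CrashControl
     * registry value the KB article describes. Run as administrator;
     * reboot before the new setting applies. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD value = 1; /* 1 = Complete memory dump */
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                                0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
            return 1;
        }
        rc = RegSetValueExA(key, "CrashDumpEnabled", 0, REG_DWORD,
                            (const BYTE *)&value, sizeof(value));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }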
Generally, if you are a privileged user, you can access any memory you want.
If you have Linux, you can log in as root and dump kernel memory using
cat /proc/kcore.
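To make that concrete, here is a minimal C sketch of my own (Linux-specific, run as root): /proc/kcore presents kernel memory as an ELF core image, so dumping it is just an ordinary file read. The snippet only checks the ELF magic; copying the rest out is a plain read loop.

    /* Sketch: verify /proc/kcore is readable and is an ELF image of
     * kernel memory. Requires root; fails with EPERM/EACCES otherwise. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char hdr[4];
        int fd = open("/proc/kcore", O_RDONLY);
        if (fd < 0) { perror("open /proc/kcore"); return 1; }
        if (read(fd, hdr, sizeof(hdr)) == sizeof(hdr) &&
            hdr[0] == 0x7f && hdr[1] == 'E' && hdr[2] == 'L' && hdr[3] == 'F')
            puts("/proc/kcore looks like an ELF image of kernel memory");
        close(fd);
        return 0;
    }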
A device driver or the OS itself could copy all memory. A suitably privileged person could copy the portion of system memory that has been paged or swapped into page files (on many operating systems, anyway). A privileged person could also copy system memory dumps.
Is this what you're asking? If not, then you may want to give more detail. In particular, can you narrow down what kinds of operating systems you're asking about? In school, you could be asking about really old ones, and the answers will be different.
Related
I wondered why we need to switch to kernel space when we want to access a hardware device. I understand that for specific actions such as memory allocation we need to make system calls to switch from user space to kernel space, because the operating system needs to organize everything, keep processes separate, and control how they use memory, among other things. But why can't we directly access a hardware device?
There is no problem in writing your own driver to access the hardware from user space, and plenty of documentation is available. For example, this tutorial at xatlantis seems to be a recent and good source.
It has been designed like that mainly for security reasons. Most systems I know about specifically do not allow user programs to do I/O or to access kernel-space memory. Such things would lead to wildly insecure systems, because with access to the kernel a user program could change permissions and get access to any data anywhere in the system, and presumably change it.
References:
XATLANTIS
STACKEXCHANGE
A device driver may choose to provide access from user processes to device registers, device memory, or both. A common method is a device-specific service connected with an mmap() request. Consider a frame buffer's on-board memory, and the efficiency gained from a user process being able to read/write that space directly. For devices in general, there are notable security considerations, and drivers that provide direct access often restrict it to processes with sufficient credentials. Files within /dev are usually set with owner/group access permissions that are similarly limited.
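As a hedged sketch of that mmap() path (mine, not from the answer above), the Linux framebuffer device is a familiar example: the driver exposes its on-board memory, and a sufficiently privileged user process maps it and writes pixels directly.

    /* Sketch: map a framebuffer's device memory into user space and
     * write one byte straight into video memory. Access is gated by
     * the owner/group permissions on /dev/fb0, as described above. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/ioctl.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vinfo;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) { perror("ioctl"); return 1; }

        size_t len = (size_t)vinfo.yres_virtual * vinfo.xres_virtual
                   * vinfo.bits_per_pixel / 8;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        fb[0] = 0xff;  /* first byte of the first pixel */
        munmap(fb, len);
        close(fd);
        return 0;
    }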
I apologize in advance for the lack of precision in my phrasing/terminology...I'm not a system programmer by any means...
This is a security-related programming question... at work, I've been asked to assess the "risk" to a PCIe add-in card depending on the integrity of the host operating system (specifically, Windows Server 2012 x64 and Red Hat Enterprise Linux 6/7 x86-64).
So my question is this:
We have a PCIe peripheral (add-in board) that contains several embedded processors that will handle sensitive data. The preferred solution would be to encrypt the data before it enters the PCIe bus and decrypt it after it leaves the PCIe bus... but we can't do this for a variety of reasons (performance, cost, etc.). Instead, we'll be passing data in cleartext form over the PCIe bus.
Let's assume an attacker has network access to the machine, but not physical access. If a vendor's PCIe endpoint device is installed in a server, and the vendor's (signed) driver is up and running with the associated hardware, is it possible for a malicious process/thread to access (read/write) the PCI memory-mapped space(s) of the PCIe endpoint?
I know there are utilities that allow me to dump (read) the PCI config space of all endpoints in a PCIe hierarchy... but I have no idea whether that extends to reading and writing inside the memory-mapped windows of the installed endpoints (especially if the endpoint is already associated with a device driver).
Also, if this is possible, how difficult is it?
Are we talking about a user-space program being able to do this, or does it require the attacker to have root/admin access to the machine (to run a program of his own design, or to install a fake/proxy driver)?
Also, does virtualization make a difference?
Accessing device memory requires operating in a lower protection ring than userland software, also known as kernel mode. The only way to access it is by going through a driver or the kernel.
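On Linux you can see that gatekeeping concretely: the kernel exposes each endpoint's BARs as sysfs "resource" files, but only a privileged process can open and mmap() them. The sketch below is my own illustration; the device address is a placeholder to be replaced with real lspci -D output, and it assumes resource0 is a memory BAR rather than an I/O-port BAR.

    /* Sketch: map the first 4 KiB of a PCIe endpoint's BAR0 via sysfs.
     * open() fails with EACCES for unprivileged users, which is exactly
     * the kernel-mode gate described above. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* hypothetical device address -- substitute your own */
        const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource0";
        int fd = open(bar, O_RDWR);
        if (fd < 0) { perror("open (requires root)"); return 1; }
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }
        printf("first register: 0x%08x\n", regs[0]);
        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

So a malicious process can read and write the endpoint's memory-mapped space this way, but only with root/admin privileges; an unprivileged user-space program is stopped at the open().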
In Windows, for instance, and in all operating systems, file privileges exist that "prevent" a file from being written to if that rule is set.
This is hard to describe, but please bear with me. People coding in a language like C would obviously use some form of framework to easily modify a file. In the built-in .NET Framework, Microsoft would obviously put checks into their classes that verify file permissions before writing to a file. Since file permissions are enforced by software and not hardware, what really prevents a file from being tampered with?
Let's hop over to assembly. Suppose I create an assembly program that directly accesses hard-drive data and changes the bytes of a file. How could file permissions possibly prevent me from doing this? I guess what I am trying to ask is: how do file permissions really stay secure if the compiled program does not check for them before writing to a file?
Suppose I create an assembly program that directly accesses hard-drive data and changes the bytes of a file. How could file permissions possibly prevent me from doing this?
If you write in assembly, your assembly is still run in a CPU mode that prevents direct access to memory and devices.
CPU modes … place restrictions on the type and scope of operations that can be performed by certain processes being run by the CPU. This design allows the operating system to run with more privileges than application software.
Your code still needs to issue system calls to get the OS to interact with memory not owned by your process and devices.
a system call is how a program requests a service from an operating system's kernel. This may include hardware related services (e.g. accessing the hard disk), creating and executing new processes, and communicating with integral kernel services (like scheduling).
The OS maintains security by monopolizing the ability to switch CPU modes and by crafting system calls so that they are safe for user-land code to initiate.
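A small demonstration of that monopoly (my own sketch, Linux/x86-specific): direct port I/O from user space simply traps until the process first asks the kernel for permission via ioperm(), which in turn requires root.

    /* Sketch: without the ioperm() call, the outb() below would trap and
     * the process would be killed -- the CPU mode, not any library check,
     * blocks direct device access. Run as root on x86 Linux. */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
        if (ioperm(0x80, 1, 1) != 0) {   /* ask the kernel for port access */
            perror("ioperm (requires root)");
            return 1;
        }
        outb(0x42, 0x80);                /* 0x80 is the old POST-code port */
        puts("wrote to port 0x80");
        return 0;
    }

File permissions are enforced at the same choke point: the only way your assembly program can reach the disk is through a system call such as open() or write(), and the kernel checks permissions before servicing it.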
I'm working on one piece of a very high-performance piece of hardware that runs under Linux. We'd like to cache some data, but we're worried about memory consumption, so the idea is to create a user process to manage the cache. That way, the cache can live in virtual memory rather than in kernel space, et cetera.
The question is: what's the best way to do this? My first instinct is to have the kernel module create a character device file, and have a user program that opens that file and then sits in a select() loop waiting for commands to arrive on it. But I'm concerned that this might not be optimal. A friend mentioned he knew of a socket-based interface, but when pressed he couldn't provide any details...
Any suggestions?
I think you're looking for the netlink interface. See Why and How to Use Netlink Socket [sic] for more information. Be careful of security issues when talking between the kernel and user space; there was a recent vulnerability when udev neglected to check that messages were coming from the kernel rather than user space.
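For reference, here is a hedged sketch of the user-space side of such a netlink channel. The protocol number is an assumption that must match whatever your kernel module registers, and the nl_pid check noted in the comment is exactly the guard the udev vulnerability lacked.

    /* Sketch: open a netlink socket to a custom kernel module and bind it.
     * MY_NETLINK_PROTO is hypothetical; it must match the protocol number
     * the kernel module registers on its side. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    #define MY_NETLINK_PROTO 31  /* hypothetical protocol number */

    int main(void)
    {
        struct sockaddr_nl addr = { .nl_family = AF_NETLINK };
        addr.nl_pid = getpid();

        int fd = socket(AF_NETLINK, SOCK_RAW, MY_NETLINK_PROTO);
        if (fd < 0) { perror("socket"); return 1; }
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        /* sendmsg()/recvmsg() with nlmsghdr-framed buffers go here.
         * On receive, verify the sender's nl_pid is 0 (the kernel),
         * or a user process could impersonate your module. */
        close(fd);
        return 0;
    }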
We implemented a server application that is available on Windows only. Now we would like to port it to Linux, HP-UX, and AIX, too. This application provides internal statistics through performance counters in the Windows Performance Monitor.
To be more precise: the application is a database, and we would like to provide information such as the number of connected users or the number of requests executed to the administrator. So this is "new" information, proprietary to our application, but we would like to make it available in the same environment where the operating system delivers information such as CPU usage. The goal is to make it easily readable for the administrator.
What is the appropriate and commonly used performance monitor under Linux, HP-UX and AIX?
I would say: that depends on which performance you want to monitor. Used CPU time? Free RAM? Disk IO? Number of beers in your freezer...
But regardless of that, you can look at any files below /proc. I'm not sure about HP-UX, but at least Linux and AIX should have that tree (if it wasn't deactivated at kernel compile time).
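For example (a minimal sketch of my own, Linux-specific), a counter like free RAM is one line of /proc/meminfo away; on AIX and HP-UX you would need their native interfaces instead.

    /* Sketch: read a single counter (free RAM) from /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen /proc/meminfo"); return 1; }
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "MemFree:", 8) == 0) {
                fputs(line, stdout);  /* e.g. "MemFree:  123456 kB" */
                break;
            }
        }
        fclose(f);
        return 0;
    }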
Management is where most OSes depart from one another. For this reason, there are not many tools that are common to all of them.
Additionally, Unix tools follow the single-responsibility idiom, where one tool gets CPU info, another gets memory info, and so on.
The only tool I have seen in the Unix world that gathers all this info in one place is top. Almost all sysadmins are familiar with it, and it works on all the OS flavors you are interested in. It also has the additional advantage of being open source, so you could simply extend it to expose the counters you are interested in and ship it along with your application.
Another way to do this might be to expose your counters through SNMP and leave it to some third-party SNMP tool, like HP OpenView, that can collect and present a consistent view along with other management info. This might be a more enterprisey solution, which might appeal to the marketing folks.
I would also say it's a good idea to write a standalone console tool that admins can use from their custom home-grown scripts (there are many firms out there with superhuman admins / overpaid IT staff that do this).
All together, I think that would be a healthy solution for your requirement.
The most standard Unix tools for such data are the *stat tools (iostat, vmstat, netstat) and sar. On Linux you'll find all this information in /proc, but most Unixes don't have /proc nicely filled with what you are looking for. The mentioned tools are quite standardized and can be used to gather the data you need.