I'm working on a Java password manager. At the moment, all of the user's data, once decrypted from a file, sits in memory at all times, stored plainly as a String for display in the UI and so on.
Is this a security risk in any way? I'm particularly concerned with someone "dumping" or reading the computer's memory in some way and finding a user's naked data.
I've considered keeping all sensitive pieces of data (the passwords) encrypted, only decrypting each piece as needed and destroying it afterwards... but I'd rather not go through and change a lot of code on a superstition.
If your adversary has the ability to run arbitrary code on your target machine (with the debug privileges required to dump a process image), you are all sorts of screwed.
If your adversary has the ability to accurately read memory at a distance (i.e. TEMPEST), you are all sorts of screwed.
Protect the data in transit and in storage (on the wire and on the disk), but don't worry* about data in memory.
*Ok, there are classes of programs that DO need to worry. 99.99% of all applications don't, I'm betting yours doesn't.
It is worth noting that the OS might decide to swap memory to disk, where it might remain for quite a while. Of course, reading the swap file requires strong privileges, but who knows? The user's laptop might get stolen ...
Yes, it certainly is, especially since you can quite trivially attach a debugger to an application. Most code dealing with encryption and sensitive data uses char arrays instead of Strings. With char arrays you can overwrite the memory holding the sensitive details, limiting the lifetime of the sensitive data.
In theory, you cannot protect anything in memory completely. Some group out there managed to deep freeze the memory chips and read their contents 4 hours after the computer was turned off. Even without going to such lengths, a debugger and a breakpoint at just the right time will do the trick.
Practically though, just don't hold the plaintext in memory for longer than absolutely necessary. A determined enough attacker will get to it, but oh well.
I feel very frustrated in the battle with cheaters in my game. A lot of hackers have tampered with my game data to get around the anti-cheat system. I have tried some methods of verifying whether the game data has been tampered with, such as encrypting my asset package or checking the hash of the package header.
However, I'm stuck on the fact that my asset package is huge, roughly 1-3 GB. I know a digital signature does very well at verifying data, but I need this to happen in almost real time.
It seems I have to make a trade-off between verifying the whole file and performance. Is there any way to verify a huge file in a short time?
AES-NI based hashing such as Meow Hash can easily reach 16 bytes per cycle on a single thread; that is, for data already in cache, it processes tens of gigabytes of input per second. In practice, memory and disk I/O speed become the limiting factor, but those limits apply to any method, so you can treat them as the upper bound. Since it's not designed for security, though, it's also possible for cheaters to quickly find a viable collision.
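To illustrate the streaming structure, here is a minimal C sketch that hashes a huge package in 4 MiB chunks. The hash itself is only a stand-in (64-bit FNV-1a, so the sketch stays self-contained and compilable); you would plug in Meow Hash or whichever fast hash you settle on in its place.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in streaming hash (64-bit FNV-1a). Replace with the real
   Meow Hash / xxHash calls; FNV-1a is only here so the sketch compiles. */
static uint64_t hash_update(uint64_t h, const unsigned char *p, size_t n)
{
    while (n--) { h ^= *p++; h *= 0x100000001b3ULL; }
    return h;
}

/* Hash a huge file chunk by chunk, so the whole 1-3 GB never has to sit
   in memory and disk throughput stays the limiting factor. */
static int hash_file(const char *path, uint64_t *out)
{
    enum { CHUNK = 4 * 1024 * 1024 };
    unsigned char *buf = malloc(CHUNK);
    FILE *f = fopen(path, "rb");
    if (!buf || !f) { free(buf); if (f) fclose(f); return -1; }

    uint64_t h = 0xcbf29ce484222325ULL;    /* FNV offset basis */
    size_t n;
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        h = hash_update(h, buf, n);

    *out = h;
    fclose(f);
    free(buf);
    return 0;
}

int main(int argc, char **argv)
{
    uint64_t h;
    if (argc > 1 && hash_file(argv[1], &h) == 0)
        printf("%s: %016llx\n", argv[1], (unsigned long long)h);
    return 0;
}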
But even if you find a sweet spot between speed and security, you're still relying on cheaters not intercepting and redirecting your file/memory I/O, and it's still possible for them to simply NOP out any asset-verification call. Since you care about cheaters, I'll assume this is an online game. The more common practice is to rearchitect the game so that it resists cheating even with tampered assets: Valorant moved line-of-sight calculations to the server side, and League of Legends added a kernel-level anti-cheat driver.
The recent leak from WikiLeaks includes the following CIA guidance:
DO explicitly remove sensitive data (encryption keys, raw collection data, shellcode, uploaded modules, etc.) from memory as soon as the data is no longer needed in plain-text form.
DO NOT RELY ON THE OPERATING SYSTEM TO DO THIS UPON TERMINATION OF EXECUTION.
As a developer in the *nix world, I see this as merely changing the value of a variable (making sure I pass by reference rather than by value): if it's a string of 100 characters, I write zeros over all 101 bytes, terminator included. Is it really this simple? If not, why not, and what should be done instead?
Note: there are similar questions that ask this, but they are about the C# and Windows world, so I do not consider this question a duplicate.
It should be this simple. The devil is in the details.
memory allocation functions, such as realloc, are not guaranteed to leave the old memory alone (you should not rely on their doing it one way or the other - see also this question). If you allocate 1K of memory, then realloc it to 10K, your original 1K might still be there somewhere else, containing its sensitive payload. It might then be allocated to some other, non-sensitive variable or buffer, or not, and through that new variable it might be possible to access part or all of the old content, much as happened with slack space on some filesystems.
manually zeroing memory (and, with most compilers, bzero and memset count as manual loops) might be blithely optimized out, especially if you're zeroing a local variable ("bug" - actually a feature, with workaround).
some functions might leave "traces" in local buffers or in memory they allocate and deallocate.
in some languages and frameworks, whole portions of data could end up being moved around (e.g. during so-called "garbage collection", as noticed by #gene). You may be able to tell the GC not to process your sensitive area, or otherwise "pin" it in place; if so, you must do so. Otherwise, the data might end up in multiple, partial copies.
information might have come through and left traces you're not aware of (trivial example: a password sent through the network might linger in the network library read buffer).
live memory might be swapped out to disk.
Example of realloc doing its thing. Memory gets partly rewritten, and with some libraries this will only "work" if "a" is not the only allocated area (so you need to also declare c and allocate something immediately after a, so that a is not the last object and left free to grow):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    char *a;
    char *b;
    a = malloc(1024);
    strcpy(a, "Hello");
    strcpy(a + 200, "world");
    printf("a at %08ld is %s...%s\n", (long)(uintptr_t)a, a, a + 200);
    b = realloc(a, 10240);    /* may move the block, leaving the old bytes behind */
    strcpy(b, "Hey!");
    /* reading through a after realloc is undefined behaviour - which is exactly the point */
    printf("a at %08ld is %s...%s, b at %08ld is %s\n",
           (long)(uintptr_t)a, a, a + 200, (long)(uintptr_t)b, b);
    return 0;
}
Output:
a at 19828752 is Hello...world
a at 19828752 is 8????...world, b at 19830832 is Hey!
So the memory at address a was partly rewritten - "Hello" is lost, "world" is still there (as well as at b + 200).
So you need to handle reallocations of sensitive areas yourself; better yet, pre-allocate it all at program startup. Then, tell the OS that a sensitive area of memory must never be swapped to disk. Then you need to zero it in such a way that the compiler can't interfere. And you need to use a low-level enough language that you're sure doesn't do things by itself: a simple string concatenation could spawn two or three copies of the data - I'm fairly certain it happened in PHP 5.2.
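On Linux, a minimal sketch of those steps - pre-allocate the sensitive buffer once, pin it with mlock() so it is never swapped out, and wipe it with explicit_bzero() (glibc >= 2.25; on other platforms memset_s or a volatile-pointer loop serves the same purpose) - might look like this:

#define _DEFAULT_SOURCE            /* for explicit_bzero() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define SECRET_LEN 256

int main(void)
{
    /* Pre-allocate the sensitive area once, at startup. */
    char *secret = malloc(SECRET_LEN);
    if (!secret)
        return 1;

    /* Ask the OS never to swap these pages out to disk. */
    if (mlock(secret, SECRET_LEN) != 0)
        perror("mlock");           /* can fail if RLIMIT_MEMLOCK is too low */

    /* ... read the password into 'secret' and use it ... */
    snprintf(secret, SECRET_LEN, "hunter2");

    /* Wipe it in a way the compiler is not allowed to optimize away. */
    explicit_bzero(secret, SECRET_LEN);

    munlock(secret, SECRET_LEN);
    free(secret);
    return 0;
}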
Ages ago, before valgrind existed, I wrote myself a small library inspired by Steve Maguire's Writing Solid Code; apart from overriding the various memory and string functions, I ended up overwriting memory and then calculating the checksum of the overwritten buffer. This was not for security - I used it to track buffer over/underflows, double frees, use of freed memory, that kind of thing.
And then you need to ensure your failsafes work - for example, what happens if the program aborts? Might it be possible to make it abort on purpose?
You need to implement defense in depth, and always look at ways to keep as little information around as possible - for example clearing the intermediate buffers during a calculation rather than waiting and freeing the whole lot in one fell swoop at the very end, or just when exiting the program; keeping hashes instead of passwords when at all possible; and so on.
Of course all this depends on how sensitive the information is and what the attack surface is likely to be (mandatory xkcd reference: here). Rebooting the PC with a memtest86 image could be a viable alternative. Think of a dual-boot computer with memtest86 set to test memory and power down the PC as default boot option. When you want to turn off the system... you reboot it instead. The PC will reboot, enter memtest86 by default, and before powering off for good, it'll start filling all available RAM with marching troops of zeros and ones. Good luck freeze-booting information from that.
Zeroing out secrets (passwords, keys, etc) immediately after you are done with them is fairly standard practice. The difficulty is in dealing with language and platform features that can get in your way.
For example, C++ compilers can optimize out calls to memset if it determines that the data is not read after the write. Or operating systems may have paged the memory out to disk, potentially leaving the data available that way.
I am trying to measure the effects of adding memory to a LAMP server.
How can I find which processes try to read from the Linux buffer cache, but miss and read from disk instead?
SystemTap is one of the best ways to do this, but fair warning: it's difficult to get a great answer. The kernel simply doesn't provide this data directly. You have to infer it from how many times the system requested a read and how many times a disk was read from. Usually they line up fairly well and you can attribute the difference to the VFS cache, but not always. One problem is LVM: LVM is a "block device", but so are the underlying disk(s), so if you're not careful it's easy to double-count the disk reads.
A while back I took a stab at it and wrote this:
https://sourceware.org/systemtap/wiki/WSCacheHitRate
I do not claim that it is perfect, but it works better than nothing, and usually generates reasonable output as long as the environment is fairly "normal". It does attempt to account for LVM in a fairly crude way.
I'm creating a web application running on a Linux server. The application is constantly accessing a 250K file - it loads it in memory, reads it and sends back some info to the user. Since this file is read all the time, my client is suggesting to use something like memcache to cache it to memory, presumably because it will make read operations faster.
However, I'm thinking that the Linux filesystem is probably already caching the file in memory since it's accessed frequently. Is that right? In your opinion, would memcache provide a real improvement? Or is it going to do the same thing that Linux is already doing?
I'm not really familiar with either Linux or memcache, so I would really appreciate it if someone could clarify this.
Yes, if you do not modify the file each time you open it.
Linux will hold the file's information in copy-on-write pages in memory, and "loading" the file into memory should be very fast (page table swap at worst).
Edit: Though, as cdhowie points out, there is no 'linux filesystem'. However, I believe the relevant code is in linux's memory management, and is therefore independent of the filesystem in question. If you're curious, you can read in the linux source about handling vm_area_struct objects in linux/mm/mmap.c, mainly.
As people have mentioned, mmap is a good solution here.
But one 250K file is very small. You might want to read it in at startup and put it into some sort of in-memory structure that matches what you want to send back to the user - e.g., if it is a text file, an array of lines might be a good choice.
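For completeness, the mmap approach mentioned above can be as small as this sketch (the file name is hypothetical); the mapped pages are served straight from the kernel's page cache, so nothing is copied into a user-space buffer:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "data.txt";           /* hypothetical file name */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Map the file read-only; pages come straight from the page cache. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... parse the st.st_size bytes at 'data' into your structure ... */
    printf("first byte: %c\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}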
The file should be cached, but make sure the noatime option is set on the mount; otherwise every read will try to write the access time back to the file, invalidating the cache.
Yes, definitely. It will keep accessed files in memory indefinitely, unless something else needs the memory.
You can control this behaviour (to some extent) with the fadvise system call. See its "man" page for more details.
A read/write system call will still normally need to copy the data, so if you see a real bottleneck doing this, consider using mmap() which can avoid the copy, by mapping the cache pages directly into the process.
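A minimal sketch of the fadvise hint: POSIX_FADV_WILLNEED tells the kernel the file will be needed soon, so it can start pulling it into the page cache ahead of the actual reads (the file name is hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "data.txt";            /* hypothetical file name */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Hint: the whole file (offset 0, length 0 = to EOF) will be needed soon. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
    if (err != 0)
        fprintf(stderr, "posix_fadvise failed: %d\n", err);

    /* ... subsequent reads of the file should now hit the page cache ... */

    close(fd);
    return 0;
}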
I guess putting that file on a ramdisk (tmpfs) might give enough of an advantage without big modifications, unless you are really serious about microsecond-level response times.
I have a few ideas I would like to try out in the Disk Defragmentation Arena. I came to the conclusion that, as a precursor to the implementation, it would be useful to be able to put a disk into a fragmented state. This seems to me to be a state that is more difficult to achieve than a defragmented one. I would assume that the commercial defragmenter companies have probably solved this issue.
So my question.....
How might one go about implementing a fragmenter? What makes sense in the context in which it would be used, namely testing a defragmenter?
Maybe instead of fragmenting the actual disk, you should really test your defragmentation algorithm on a simulation/mock disk? Only once you're satisfied the algorithm itself works as specified, you could do the testing on actual disks using the actual disk API.
You could even take snapshots of actual fragmented disks (yours or of someone you know) and use this data as a mock model for testing.
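As a rough illustration of the mock-disk idea, here is a hypothetical sketch: the "disk" is just an array of blocks, each tagged with the id of the file that owns it (or -1 for free), and the metric counts how many separate contiguous runs each file is split into. A defragmentation algorithm can then be exercised entirely against this model before it ever touches real hardware.

#include <stdio.h>

#define BLOCKS 16
#define FREE_BLOCK (-1)

/* A toy "disk": each entry is the id of the file that owns the block. */
static int disk[BLOCKS] = {
    0, 1, 0, 2, FREE_BLOCK, 1, 0, 2, 1, FREE_BLOCK, 2, 0, FREE_BLOCK, 1, 2, 0
};

/* A fragment is a maximal run of consecutive blocks belonging to the same
   file; a fully defragmented file has exactly one fragment. */
static int count_fragments(int file_id)
{
    int fragments = 0;
    for (int i = 0; i < BLOCKS; i++)
        if (disk[i] == file_id && (i == 0 || disk[i - 1] != file_id))
            fragments++;
    return fragments;
}

int main(void)
{
    for (int f = 0; f < 3; f++)
        printf("file %d: %d fragments\n", f, count_fragments(f));
    return 0;
}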
How best to fragment depends on the file system.
In general, concurrently open a large number of files. Opening a file creates a new directory entry but doesn't cause a data block to be written for that file. Then go through each file in turn, writing one block to each. This typically consumes the next free block each time, which leaves all your files fragmented with respect to each other.
Fragmenting existing files is another matter. Basically, do the same, but operate on copies of the existing files, then delete the originals and rename the copies into place.
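A rough sketch of that interleaved-write approach (file names, counts, and block size are arbitrary; on filesystems with delayed allocation you may need to fsync each block, or write larger amounts, before the interleaving actually shows up on disk):

#include <stdio.h>
#include <string.h>

#define NFILES    64        /* how many files to interleave */
#define NBLOCKS   256       /* blocks written to each file */
#define BLOCKSIZE 4096      /* one typical filesystem block */

int main(void)
{
    FILE *files[NFILES];
    char block[BLOCKSIZE];
    memset(block, 'x', sizeof block);

    /* Create all files up front; no data blocks are allocated yet. */
    for (int i = 0; i < NFILES; i++) {
        char name[64];
        snprintf(name, sizeof name, "frag_%03d.dat", i);
        files[i] = fopen(name, "wb");
        if (!files[i]) { perror(name); return 1; }
    }

    /* Round-robin: one block to each file per pass, so consecutive blocks
       of any single file end up separated by blocks of the other files. */
    for (int b = 0; b < NBLOCKS; b++)
        for (int i = 0; i < NFILES; i++) {
            fwrite(block, 1, BLOCKSIZE, files[i]);
            fflush(files[i]);   /* push this block to the kernel before the next file writes */
        }

    for (int i = 0; i < NFILES; i++)
        fclose(files[i]);
    return 0;
}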
I may be oversimplifying here, but if you artificially fragment the disk, won't any tests you run only be valid for the fragmentation created by your fragmenter rather than for real-world fragmentation? You may end up optimising for assumptions in the fragmenter tool that don't represent real-world occurrences.
Wouldn't it be easier and more accurate to take some disk images of fragmented disks? Do you have any friends or colleagues who trust you not to do anything anti-social with their data?
Fragmentation is essentially a mathematical problem: you are trying to maximize the distance the hard drive's head travels while performing a specific operation. So in order to fragment something effectively, you first need to define that specific operation.