I'm developing an application that needs to generate an RSA key in the TPM to be used for signing operations. The public part is registered with a backend once, and the private part is then used repeatedly for authentication. This use spans machine shutdown/restart cycles. At the root of my hierarchy I have an SRK (i.e. a key based on the SRK template) generated using CreatePrimary(). I know I can regenerate this key for subsequent sessions (by saving the random input in the program's data). The program's actual signing key can also be saved in encrypted form and then loaded with the Load function once the SRK has been regenerated.
So functionally everything works, but I would like to avoid the key regeneration overhead. What are my options here?
1) As far as I can tell (but please correct me if I'm wrong), a ContextSave will not survive a power cycle (the documentation says saved contexts do not persist across a power reset). Please confirm whether ContextSave can be used here or not.
2) There's a function called EvictControl which does the job. But it involves NV memory, which is limited (so if an application "forgets" such a returned handle, that memory can never be released without clearing the whole TPM), and it wears the TPM's NV storage (although only once, at initial enrollment). Is this suitable?
3) Perhaps there's a third option I haven't found? Is there, for instance, a "default" SRK that can be used so I don't have to (re-)generate one?
If nothing else works I plan to use EvictControl, but I was wondering if there's a different option. I don't understand why there isn't an ExportPrimary()/ImportPrimary() or similar that could export a primary key to be imported later, without touching NV storage. The export would be done under a key stored inside the TPM's NV storage (regenerated only on TPM clear, etc.) and would thereby offer the same security level as EvictControl, but without consuming NV storage for each generated key.
From my reading of the question, it seems like you are re-generating the SRK on each power-up. You are really not supposed to do that; the SRK is meant to be a persistent key. See https://trustedcomputinggroup.org/wp-content/uploads/TCG-TPM-v2.0-Provisioning-Guidance-Published-v1r1.pdf.
As for how to make it persistent, it seems like you already know how to do that: just use EvictControl.
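For reference, here is a minimal sketch of what that step could look like with the tpm2-tss ESAPI; the owner-range persistent handle 0x81000001 and the empty owner password are assumptions you would adapt to your own provisioning scheme:

```c
/* Hedged sketch: persist an already-loaded transient key (e.g. the SRK from
 * Esys_CreatePrimary) at a persistent handle so it survives power cycles.
 * Assumes the tpm2-tss ESAPI; error handling is reduced to the bare minimum. */
#include <stdio.h>
#include <tss2/tss2_esys.h>

/* Hypothetical helper: 'transient' is the ESYS_TR returned by
 * Esys_CreatePrimary()/Esys_Load(); 0x81000001 is an example owner-range
 * persistent handle, pick one your application "owns". */
static TSS2_RC persist_key(ESYS_CONTEXT *ectx, ESYS_TR transient,
                           ESYS_TR *persistent_out)
{
    TSS2_RC rc = Esys_EvictControl(ectx,
                                   ESYS_TR_RH_OWNER,   /* auth: owner hierarchy */
                                   transient,          /* object to persist */
                                   ESYS_TR_PASSWORD,   /* empty owner auth assumed */
                                   ESYS_TR_NONE, ESYS_TR_NONE,
                                   0x81000001,         /* persistent handle */
                                   persistent_out);
    if (rc != TSS2_RC_SUCCESS)
        fprintf(stderr, "Esys_EvictControl failed: 0x%x\n", rc);
    return rc;
}
```

On later boots, the persisted key can be referenced again via Esys_TR_FromTPMPublic() with the same handle, so neither CreatePrimary nor Load is needed on the hot path.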
I came across uuidgen while watching a video to study for the Red Hat 8 exam, but I had a question about its usefulness and did not find any other thread about it, nor did the manpage address it. I understand that each device has a UUID and that the UUID can be used for multiple purposes; in the video the purpose was creating new mount points, and the UUID was used to associate a new partition (/dev/sdb3) with a new mount point. They pointed out that you can generate a new UUID for that device using uuidgen.
My question is: why/when would you use uuidgen in a production environment with live servers to relabel a partition? What would be the point of relabeling the UUID of a currently existing mount point? Is there some sort of attack that targets a system's UUIDs? Or is uuidgen's sole purpose just to create random UUIDs for other things, like web links?
Thanks
Say you have a system with several disks, one partition each, and you need to play "Musical Data" with some of them.
If you start by copying the entire GPT-partitioned disk at block level, e.g. via dd, then you will end up with a duplicate UUID. This is fine if one of the duplicates is going to be blown away before the next time the OS needs to mount one of them. If, for whatever reason, this cannot be ensured, then whichever copy you don't want the OS to pick up anymore needs a new UUID. Enter uuidgen.
I'm assuming you're talking about GPT partition UUIDs, which are stored all together in each GPT that contains the identified partitions; if, instead, you're talking about filesystem UUIDs, which are stored inside the metadata for that filesystem and thus are copied whenever dd'ing that filesystem, then the above scenario still holds and more scenarios become plausible.
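As a side note, uuidgen is essentially a thin front end for libuuid, so generating such a random UUID programmatically takes only a few lines of C (a sketch, assuming util-linux's libuuid is installed; link with -luuid). Writing the new UUID into the GPT entry or filesystem metadata is then a separate step (e.g. sgdisk for partition GUIDs or tune2fs -U for ext filesystems).

```c
/* Minimal sketch of generating a random (version 4) UUID with libuuid,
 * which is essentially what `uuidgen --random` does. Build: cc gen.c -luuid */
#include <stdio.h>
#include <uuid/uuid.h>

int main(void)
{
    uuid_t uu;                     /* 16 raw bytes */
    char text[37];                 /* 36 characters + terminating NUL */

    uuid_generate_random(uu);      /* random-based UUID */
    uuid_unparse_lower(uu, text);  /* format as 8-4-4-4-12 hex string */

    printf("%s\n", text);
    return 0;
}
```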
I have a QSPI flash on my embedded board.
I have a driver plus a process "Q" to handle reading from and writing to it.
I want to store variables like SW revisions, IP, operation time, etc.
I would like to ask for suggestions on how to handle the different access rights for reading and writing these values from user space and from other processes.
I was thinking of having a file for each variable. Then I can assign access rights to those files, and process Q can update the value in a file whenever it changes. So process Q would be the only writer, and other processes or users could only read.
But I don't know how to handle writing. I was thinking about using a message queue or ZeroMQ and building the software around it, but I'm not sure whether that would be overkill. And I'm not sure how to manage access rights with that approach anyway.
What would be the best approach? I would really appreciate it if you could propose even a totally different approach.
Thanks!
This question will probably be downvoted / flagged due to the "Please suggest an X" nature.
That said, if a file per variable is what you're after, you might want to look at implementing a FUSE filesystem that wraps your SPI driver/utility "Q" (or build it into "Q" if you control and can compile "Q"'s source). I'm doing this to store settings in an EEPROM on a current work project and it's turned out nicely. For example, I have a file that, when read, retrieves 6 bytes from the EEPROM (or a cached copy) and presents them as a MAC address in standard hex/colon-separated notation.
The biggest advantage here is that it becomes trivial to access all your configuration/settings data from shell scripts (e.g. your init process) or other scripting languages.
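To make this concrete, here is a stripped-down, read-only sketch with libfuse 3 that exposes a single mac_address file; the names and the fake flash read are illustrative, not the actual "Q" interface:

```c
/* Hedged sketch of the "settings as files" idea with libfuse 3: a read-only
 * filesystem exposing one file, /mac_address, whose contents would come from
 * the flash/EEPROM (here faked by read_mac_from_flash()). Build with:
 *   cc settingsfs.c `pkg-config fuse3 --cflags --libs` -o settingsfs */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Stand-in for the real QSPI/EEPROM read done by process "Q". */
static void read_mac_from_flash(char *buf, size_t len)
{
    snprintf(buf, len, "02:00:00:12:34:56\n");
}

static int fs_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0555;
        st->st_nlink = 2;
    } else if (strcmp(path, "/mac_address") == 0) {
        st->st_mode = S_IFREG | 0444;   /* world-readable, nobody writes */
        st->st_nlink = 1;
        st->st_size = 18;               /* "xx:xx:xx:xx:xx:xx\n" */
    } else {
        return -ENOENT;
    }
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                      off_t off, struct fuse_file_info *fi,
                      enum fuse_readdir_flags flags)
{
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, "mac_address", NULL, 0, 0);
    return 0;
}

static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    char mac[32];
    size_t len;
    (void)fi;
    if (strcmp(path, "/mac_address") != 0)
        return -ENOENT;
    read_mac_from_flash(mac, sizeof(mac));
    len = strlen(mac);
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, mac + off, size);
    return size;
}

static const struct fuse_operations ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &ops, NULL);
}
```

Mounted at, say, /var/run/settings, a script can then simply run cat /var/run/settings/mac_address.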
Another neat feature of doing it this way is that you can use inotify (which comes "free", no extra code in the fusefs) to create applications that efficiently detect when settings are changed.
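A corresponding consumer could look roughly like this (a sketch; the path matches the illustrative mount point above):

```c
/* Hedged sketch of watching one exported settings file with inotify so a
 * process wakes up only when the value changes. The path is illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    char buf[4096];
    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    /* Watch for writes/close-after-write on the exported settings file. */
    if (inotify_add_watch(fd, "/var/run/settings/mac_address",
                          IN_CLOSE_WRITE | IN_MODIFY) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));   /* blocks until an event */
        if (n <= 0)
            break;
        printf("setting changed, re-reading it (%zd bytes of events)\n", n);
        /* ... re-read the file here ... */
    }
    close(fd);
    return 0;
}
```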
A disadvantage of this approach is that it's non-trivial to do atomic transactions on multiple settings and still maintain normal file semantics.
The scenario is as follows: an encryption key should be stored temporarily, so that multiple instances of an application can access it (sequentially). After use, the key should of course be removed from the system. However, this poses a problem. I accept that the system is vulnerable as long as the key is stored, but I want the system to be secure before and after the key is stored.
Simply writing the key to a file and overwriting it afterwards will not work in all cases: some filesystems write changes to different parts of the disk instead of to the same location. In that case, the key can still be retrieved afterwards. I cannot rely on the user having full-disk encryption.
The most logical option then seems to be to use another process that keeps the key in memory, but the operating system might write that memory to disk at certain points, resulting in the same problem as described above.
Encrypting the key is possible, but this is not more secure. The whole point of storing the key temporarily, is that the user need not type it in for every run of the program. This means that the key used to encrypt the key must also be stored somewhere, or it must be based on known data. If the key is stored, then of course we now have the problem of securely storing this key. If it is based on known data, that means the key can be generated again when necessary, so encryption has little use.
I know some operating systems offer APIs to protect data, but this usually relies on generating an encryption key based on the user account information. Even if this is session-specific, the data will not be secure until the session ends (which can be long after the key should be erased).
Is there any solution to this problem at all (that does not rely on special hardware, does not require full-disk encryption, etc.)? If there is not, what is the best I can do in this case?
Edit for clarification:
The key need not be secure when it is stored in memory; at this point, the user should guarantee that no physical access is possible, and that the system is free of viruses. After the key has been used, it should be erased from the system, such that afterwards anyone with physical access, or any program, can inspect all memory and disks, and not find a single (usable) trace of the key any more.
You could use ramdisks (not tmpfs). See http://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt:
"Another similar thing is the RAM disk (/dev/ram*), which simulates a fixed size hard disk in physical RAM, where you have to create an ordinary filesystem on top. Ramdisks cannot swap and you do not have the possibility to resize them."
So basically, you need to create a filesystem on /dev/ram* and then mount it somewhere. However, this mount point can then be accessed by all processes, not only specific ones.
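As a rough sketch of that setup from a privileged helper process (it assumes mkfs.ext2 has already been run on /dev/ram0 and that /mnt/keystore exists; both names are illustrative), you can at least narrow down who can reach the mount point with ordinary directory permissions:

```c
/* Hedged sketch: mount a ramdisk-backed filesystem and restrict access.
 * Assumes `mkfs.ext2 /dev/ram0` has already been run and /mnt/keystore
 * exists; must run as root. */
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void)
{
    /* noexec/nosuid/nodev are defensive; the data never touches a real disk,
     * so overwriting files and unmounting later really does destroy it. */
    if (mount("/dev/ram0", "/mnt/keystore", "ext2",
              MS_NOEXEC | MS_NOSUID | MS_NODEV, NULL) != 0) {
        perror("mount");
        return 1;
    }
    /* Only root (or a dedicated key-owner user via chown) can enter it. */
    if (chmod("/mnt/keystore", 0700) != 0) {
        perror("chmod");
        return 1;
    }
    return 0;
}
```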
Hello, I am a newbie to kernel programming. I am writing a small kernel module based on the wrapfs template to implement a backup mechanism. This is purely for learning purposes.
I am extending wrapfs so that when a write call is made, wrapfs transparently makes a copy of that file in a separate directory and then the write is performed on the file. But I don't want to create a copy for every write call.
A naive approach would be to check for the existence of the file in that directory. But I think doing this check for each call could be a severe penalty.
I could also check for the first write call and then store a value for that specific file using the private_data attribute. But that would not be stored on disk, so I would need to check again.
I was also thinking of making use of the modification time. I could save a modification time and create a copy only if the file's existing modification time is before that saved time; otherwise I wouldn't do anything. I tried to use inode.i_mtime for this, but it already held a modified time even before write was called, and applications can also modify that time.
So I was thinking of storing some value in the on-disk inode that indicates whether its backup has been created. Is that possible? Any other suggestions or approaches are welcome.
You are essentially saying you want to do a Copy-On-Write virtual filesystem layer.
IMO, some of these have been done before, and it would be easier to implement them in userland (using libfuse and the fuse module, for example). That way, you can be king of your castle and add your metadata in whichever way you feel is appropriate:
just add (hidden) metadata files to each directory
use extended POSIX attributes (setfattr and friends; see the sketch after this list)
heck, you could even use a sqlite database
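For the extended-attributes option, here is a small userland sketch of the idea (the attribute name user.backup_done is made up for illustration, and the underlying filesystem must support user xattrs, as ext4 does):

```c
/* Hedged sketch: mark a file once its backup copy has been made, and test
 * that mark later, using a user-namespace extended attribute. */
#include <stdio.h>
#include <sys/xattr.h>

#define BACKUP_XATTR "user.backup_done"

/* Returns 1 if the file is already marked as backed up, 0 otherwise. */
static int backup_done(const char *path)
{
    char val[2];
    return getxattr(path, BACKUP_XATTR, val, sizeof(val)) > 0;
}

static int mark_backup_done(const char *path)
{
    return setxattr(path, BACKUP_XATTR, "1", 1, 0);  /* 0 = create or replace */
}

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (backup_done(argv[1])) {
        printf("%s: backup already made, skipping\n", argv[1]);
    } else {
        /* ... copy the file to the backup directory here ... */
        if (mark_backup_done(argv[1]) != 0)
            perror("setxattr");
        else
            printf("%s: marked as backed up\n", argv[1]);
    }
    return 0;
}
```

The shell equivalents would be setfattr -n user.backup_done -v 1 file and getfattr -n user.backup_done file.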
If you really insist on doing these things in-kernel, you'll have a lot more work, since accessing the metadata from kernel mode is going to take a lot more effort (you'd most likely want to emulate your own database using memory-mapped files so as to minimize the amount of 'userland-style' work required, and to make it relatively easy to get atomicity and reliability right [1]).
[1] On how everybody gets file I/O wrong: see also here
You can use atime instead of mtime. In that case, setting the S_NOATIME flag on the inode prevents it from being updated (see the touch_atime() function in fs/inode.c). The only thing you'll need is to mount your filesystem with the noatime option.
I'm working on a Java password manager and I currently have all of the user's data, after being decrypted from a file, sitting around in memory at all times and stored plainly as a String for displaying in the UI etc.
Is this a security risk in any way? I'm particularly concerned with someone "dumping" or reading the computer's memory in some way and finding a user's naked data.
I've considered keeping all sensitive pieces of data (the passwords) encrypted and only decrypting each piece as needed and destroying thereafter... but I'd rather not go through and change a lot of code on a superstition.
If your adversary has the ability to run arbitrary code on your target machine (with the debug privileges required to dump a process image), you are all sorts of screwed.
If your adversary has the ability to read memory at a distance accurately (ie. TEMPEST), you are all sorts of screwed.
Protect the data in transit and in storage (on the wire and on the disk), but don't worry* about data in memory.
*OK, there are classes of programs that DO need to worry. 99.99% of all applications don't; I'm betting yours doesn't.
It is worth noting that the OS might decide to swap memory to disk, where it might remain for quite a while. Of course, reading the swap file requires strong privileges, but who knows? The user's laptop might get stolen ...
Yes, it certainly is, especially since you can quite trivially debug an application. Most code dealing with encryption and sensitive data uses char arrays instead of strings. By using char arrays, you can overwrite the memory holding the sensitive details, limiting the lifetime of the sensitive data.
In theory, you cannot protect anything in memory completely. Some group out there managed to deep freeze the memory chips and read their contents 4 hours after the computer was turned off. Even without going to such lengths, a debugger and a breakpoint at just the right time will do the trick.
Practically though, just don't hold the plaintext in memory for longer than absolutely necessary. A determined enough attacker will get to it, but oh well.