How can I configure the MPU (Memory Protection Unit) for an OS_Task in the Vector AUTOSAR Configuration Tool?

I want to protect an OS_Task using the MPU in the Vector AUTOSAR Configuration Tool. How can I do this?

For a configured task, set the parameter OsTaskMemoryProtectionIdentifier to specify a memory protection identifier (the protection ID) for the task.
If this parameter is not set but the owning OS-Application has a memory protection identifier specified, that value is used for the task.
Memory protection identifiers are configured under Os/OsPublishedInformation/OsDerivativeInformation/OsMemoryRegionSpecifics.
Depending on the platform, protection identifiers are also referred to as PIDs (MPC), ASIDs (RH850), or protection sets (TriCore).
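For illustration, this is roughly how the resulting task container could look in the generated ECUC description. This is only a sketch: the /MICROSAR/Os definition paths and the container name are assumptions that depend on your Vector package and derivative.

    <ECUC-CONTAINER-VALUE>
      <SHORT-NAME>MyProtectedTask</SHORT-NAME>
      <!-- OsTask container; the definition path below is an assumption -->
      <DEFINITION-REF DEST="ECUC-PARAM-CONF-CONTAINER-DEF">/MICROSAR/Os/OsTask</DEFINITION-REF>
      <PARAMETER-VALUES>
        <ECUC-NUMERICAL-PARAM-VALUE>
          <DEFINITION-REF DEST="ECUC-INTEGER-PARAM-DEF">/MICROSAR/Os/OsTask/OsTaskMemoryProtectionIdentifier</DEFINITION-REF>
          <!-- the protection ID (PID / ASID / protection set, depending on the platform) -->
          <VALUE>2</VALUE>
        </ECUC-NUMERICAL-PARAM-VALUE>
      </PARAMETER-VALUES>
    </ECUC-CONTAINER-VALUE>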

Related

Windows equivalent of application-scoped Linux Wallet

In Linux, there's the KDE Wallet (and GNOME Keyring) application, which stores passwords and other sensitive data. By default, these wallets prevent accidental access to data by applications other than the one that stored it.
E.g. if a piece of data was stored by /bin/app1, then /bin/app2 won't have full access to that data, and the wallet will first ask the user whether they really want to allow /bin/app2 to access the data stored by /bin/app1.
I find this feature important for some aspects of local data security for an application I participate in.
On Windows, a somewhat analogous UX is provided by wincred.h, but, as I currently understand it, there are no per-application restrictions of any kind in it. It will give any application started by the current user access to the data, and thus provides less security than the application-scoped defaults of the Linux wallets.
Is there any way to achieve a similar application- (or vendor-) scoped security in Windows using only standard APIs?
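To make the contrast concrete, here is a minimal C sketch against the wincred.h generic-credential API (the target name and secret are placeholders). Any process running as the same user can perform the read below; nothing records or checks which executable stored the credential:

    #include <windows.h>
    #include <wincred.h>
    #include <stdio.h>
    /* link with Advapi32.lib */

    int main(void)
    {
        /* Store a secret in the current user's credential set. */
        BYTE secret[] = "hunter2";
        CREDENTIALW cred = {0};
        cred.Type = CRED_TYPE_GENERIC;
        cred.TargetName = L"MyApp/DbPassword";     /* placeholder name */
        cred.CredentialBlob = secret;
        cred.CredentialBlobSize = sizeof(secret);
        cred.Persist = CRED_PERSIST_LOCAL_MACHINE;
        if (!CredWriteW(&cred, 0)) return 1;

        /* ANY process running as this user can read it back; there is
           no per-application scoping like the Linux wallets provide. */
        PCREDENTIALW out = NULL;
        if (CredReadW(L"MyApp/DbPassword", CRED_TYPE_GENERIC, 0, &out)) {
            printf("read %lu secret bytes\n", out->CredentialBlobSize);
            CredFree(out);
        }
        return 0;
    }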

Aren't private keys vulnerable in memory?

I'm trying to understand what happens when I use a password-protected private key to generate a message digest.
I read here that password-protected private keys are just encrypted using a password-based symmetric key.
Once I enter the correct password, how is a digest generated without exposing the unprotected private key?
At some point the cryptographic primitives in your code will need to access and use the actual value of the key. There's simply no way around that. In a simple analogy, you cannot compute a + b if you don't know a.
The big question concerning secure software design thus boils down to how long sensitive information will persist in an unprotected state. Any sort of password caching is your enemy here, but even if neither the password nor the decrypted key is explicitly cached, they're still in memory at some point. Freezing a computer with liquid nitrogen can keep the memory content intact for a considerable amount of time, and forcing a swap-to-disk is another problem.
Good cryptographic programs should take care to overwrite the memory content as promptly as feasible and minimize the amount of time that sensitive information is retained in readable form. This requires careful analysis of which information is critical (e.g. the user's password input), and platform-specific knowledge of memory management (e.g. can you request non-pageable memory?).
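As a sketch of those two precautions on Windows (the key buffer and its size are placeholders): SecureZeroMemory is guaranteed not to be optimized away, unlike a plain memset, and VirtualLock asks the OS to keep the pages out of the pagefile:

    #include <windows.h>

    void use_key_material(void)
    {
        unsigned char key[32];          /* placeholder key buffer */

        /* Request non-pageable memory for the sensitive bytes. */
        VirtualLock(key, sizeof(key));

        /* ... derive the key from the password and use it here ... */

        /* Wipe promptly; a plain memset here could be elided by the
           compiler, SecureZeroMemory cannot. */
        SecureZeroMemory(key, sizeof(key));
        VirtualUnlock(key, sizeof(key));
    }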
It all depends on your threat model - which sort of attack do you need to protect against? If a rootkit monitors all your memory, you might be in trouble, though that rootkit would probably just read the user's password entry from the keyboard anyway.
This is a complicated issue, and there's extensive research into secure hardware design. In general, the more access an attacker has to your machine, the more likely it is that she'll be able to read sensitive data. Good design can only strive to minimize the surface of attack.
At some point the key has to be available in memory for use by the crypto algorithm.
There have been interesting attacks that try to grab valuable information from memory. One I read about involved plugging a device into a FireWire controller and using direct memory access to poke around for interesting things.
http://www.hermann-uwe.de/blog/physical-memory-attacks-via-firewire-dma-part-1-overview-and-mitigation
It's entirely possible that either a program with the necessary privileges to read the memory location holding the key, or hardware using DMA, can grab a private key from RAM.
Generally yes: once decrypted, the key will be stored in system memory as cleartext until the application or operating system marks its address space as reusable. With PGP Desktop, it's possible to manually clear the cached private key, a nice feature I wish more applications offered.
Yes, it is exposed in RAM, and unless the operating system supports protection of memory against paging, and the application uses that feature, the private key can be paged to disk "in the clear." Development tools and active attacks can look for it in memory.
This is one reason specialized hardware cryptographic modules exist. These perform operations with the private key inside their tamper-resistant memory space; the application can never access the private key itself, and instead delegates cryptographic operations to the device.
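As an illustration of that delegation model, here is a hedged C sketch against the standard PKCS#11 interface (session setup, login, and the key lookup are elided; hSession and hKey are assumed to have been obtained via C_OpenSession/C_Login and a C_FindObjects lookup). The private key never leaves the token; the application only ever sees the resulting signature:

    /* Assumes the usual PKCS#11 platform macros (CK_PTR etc.) are
       defined before including the header. */
    #include <pkcs11.h>

    CK_RV sign_digest(CK_SESSION_HANDLE hSession, CK_OBJECT_HANDLE hKey,
                      CK_BYTE *digest, CK_ULONG digestLen,
                      CK_BYTE *sig, CK_ULONG *sigLen)
    {
        CK_MECHANISM mech = { CKM_RSA_PKCS, NULL, 0 };
        CK_RV rv = C_SignInit(hSession, &mech, hKey);
        if (rv != CKR_OK) return rv;
        /* The token performs the private-key operation internally. */
        return C_Sign(hSession, digest, digestLen, sig, sigLen);
    }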

Hide entry ID to increase security of a web app?

Over here in symfony's tutorial, it says:
Hide all ids
Another good practice in symfony actions is to avoid as much as possible to pass primary keys as request parameters. This is because our primary keys are mainly auto-incremental, and this gives hackers too much information about the records of the database.
Why is that so? And does it apply to all web apps?
Exposing keys is a potential insecure direct object reference risk. In essence, if keys conform to a predictable pattern, they may be used against you in an attack. Of course other layers of security should prevent this (proper authorisation, in particular), but hiding keys is one more small layer of defence. The concept is relevant to all apps which expose internal keys.
See OWASP Top 10 for .NET developers part 4: Insecure direct object reference for more info.
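One common mitigation is an indirect reference map: the client is handed a random, per-session token instead of the raw primary key, and the server translates it back on each request. A minimal C sketch (the fixed-size table and token format are illustrative; real code would use a cryptographically secure RNG rather than rand()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Per-session indirect reference map: the client only ever sees
       an opaque token, never the auto-increment primary key. */
    struct ref { char token[9]; long pk; };
    static struct ref map[64];
    static int nrefs = 0;

    /* pk -> opaque token to embed in the page */
    const char *expose(long pk)
    {
        struct ref *r = &map[nrefs++];
        snprintf(r->token, sizeof r->token, "%08x", (unsigned)rand());
        r->pk = pk;
        return r->token;
    }

    /* token from the request -> pk, or -1 for a token we never issued */
    long resolve(const char *token)
    {
        for (int i = 0; i < nrefs; i++)
            if (strcmp(map[i].token, token) == 0) return map[i].pk;
        return -1;
    }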

How can I protect a key against other applications?

Setup
I have a SQLite database which has confidential user information.
This database may be replicated on other machines
I trust the user, but not other applications
The user has occasional access to a global server
Security Goals
No program other than the authorized one (mine) can access the SQLite database.
Breaking the security on one machine will NOT break the security on other machines
The system must be updatable (meaning that if some algorithm such as a specific key generation algorithm is shown to be flawed, it can be changed)
Proposed Design
Use an encrypted SQLite database storing the key within OS secure storage.
Problems
Any Windows hack will allow the attacker to access the key on all machines, which violates goal #2
Notes
Similar to this method, if I store the key in the executable, breaking the security will compromise all systems.
Also, I have referenced Windows secure storage. While I will go with an OS-specific solution if I have to, I would prefer a non-OS-specific solution.
Any idea on how to meet the design goals?
I think you will need to use TPM hardware, e.g. via TBS or something similar, to actually make a secure version of this. My understanding is that the TPM lets the application check that it is not being debugged or traced at the software level, and the operating system should prevent any other application from pretending to the TPM module that it is your application. I may be wrong, though.
You can use some kind of security-through-obscurity kludge, but it will be crackable with a debugger unless you use TPM.
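For the Windows-specific route, DPAPI is the usual building block: CryptProtectData encrypts under a per-user, per-machine secret, so ciphertext lifted from one machine is useless on another (which helps with goal #2), though anything running as the user can still call CryptUnprotectData, and the extra entropy blob is exactly the obscurity kludge described above. A hedged sketch:

    #include <windows.h>
    #include <wincrypt.h>
    /* link with Crypt32.lib */

    /* Protect a database key with DPAPI. The ciphertext is bound to
       this user and machine. The entropy is an application-chosen
       constant -- a debugger can lift it from the binary, so treat it
       as obscurity, not security. */
    BOOL protect_key(const BYTE *key, DWORD keyLen, DATA_BLOB *outBlob)
    {
        BYTE appEntropy[] = { 0x5e, 0xc2, 0x11, 0x9a };   /* placeholder */
        DATA_BLOB in  = { keyLen, (BYTE *)key };
        DATA_BLOB ent = { sizeof(appEntropy), appEntropy };
        return CryptProtectData(&in, L"db key", &ent,
                                NULL, NULL, 0, outBlob);
    }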

What system calls to block/allow/inspect to create a program supervisor

As per Using ptrace to write a program supervisor in userspace, I'm attempting to create the program supervisor component of an online judge.
What system calls would I need to block totally, always allow or check the attributes of to:
Prevent forking or running other commands
Restrict to standard 'safe' C and C++ libs
Prevent net access
Restrict access to all but 2 files 'in.txt' and 'out.txt'
Prevent access to any system functions or details.
Prevent the application from escaping its supervisor
Prevent anything nasty.
Thanks; any help, advice, or links much appreciated.
From a security perspective, the best approach is to figure out what you need to permit rather than what you need to deny. I would recommend starting with a supervisor that just logs everything that a known-benign set of programs does, and then whitelist those syscalls and file accesses. As new programs run afoul of this very restrictive sandbox, you can then evaluate loosening restrictions on a case-by-case basis until you find the right profile.
This is essentially how application sandbox profiles are developed on Mac OS X.
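As a concrete sketch of the whitelist approach on Linux, here is what it might look like with libseccomp (note this is a different mechanism than ptrace, and the allowed set below is purely illustrative):

    #include <seccomp.h>
    #include <stdlib.h>
    /* link with -lseccomp */

    /* Kill the process on any syscall not explicitly whitelisted. */
    void enter_sandbox(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
        if (!ctx) exit(1);

        /* Illustrative whitelist: stdio on already-open fds plus exit.
           Notably absent: fork, execve, socket, open -- the judged
           program cannot spawn processes, reach the network, or open
           files beyond the descriptors handed to it. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) != 0) exit(1);
        seccomp_release(ctx);
    }

In the judge scenario, the supervisor would open in.txt and out.txt, dup2() them onto stdin/stdout, and install the filter just before handing control to the untrusted code.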
Perhaps you can configure AppArmor to do what you want. From the FAQ:
AppArmor is the most effective and easy-to-use Linux application security system available on the market today. AppArmor is a security framework that proactively protects the operating system and applications from external or internal threats, even zero-day attacks, by enforcing good program behavior and preventing even unknown software flaws from being exploited. AppArmor security profiles completely define what system resources individual programs can access, and with what privileges. A number of default policies are included with AppArmor, and using a combination of advanced static analysis and learning-based tools, AppArmor policies for even very complex applications can be deployed successfully in a matter of hours.
If you only want system calls to inspect another process, you can use ptrace(), but you will have no guarantees, as noted in Using ptrace to write a program supervisor in userspace.
You can use Valgrind to inspect and hook function calls and libraries, but it will be tedious, and blacklisting may not be the right way to do it.
You can also use systrace (http://en.wikipedia.org/wiki/Systrace) to write rules that authorize or block various things, such as opening only certain files. It is simple to use for sandboxing a process.
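For the ptrace() route, the core of a supervisor is a stop-on-every-syscall loop. A minimal x86-64 Linux sketch (error handling is elided, the loop stops at both syscall entry and exit, and the caveats from the linked question still apply):

    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) return 2;
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execvp(argv[1], &argv[1]);   /* program under supervision */
            _exit(127);
        }
        int status;
        waitpid(child, &status, 0);      /* initial stop after execvp */
        while (!WIFEXITED(status)) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
            waitpid(child, &status, 0);
            if (WIFSTOPPED(status)) {
                struct user_regs_struct regs;
                ptrace(PTRACE_GETREGS, child, NULL, &regs);
                /* orig_rax holds the syscall number on x86-64; a real
                   judge would check it against a whitelist here and
                   kill the child on a violation. */
                fprintf(stderr, "syscall %lld\n", (long long)regs.orig_rax);
            }
        }
        return 0;
    }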