AUTOSAR memory protection

I'm confused about the AUTOSAR memory protection mechanism.
I have two OS-Applications, one trusted and one non-trusted.
I configured a memory protection region from 0x70000000 to 0x71000000 for the trusted application, and I configured an init task for it.
In the init task, if I write directly to a memory address inside the configured range, it works fine.
If, however, I write outside the configured range (still a valid memory address), I get an exception.
I could understand this for a non-trusted application, but this is a trusted one.
I thought a trusted application can write to the whole memory? What am I missing here?

AUTOSAR_SWS_OS (R19-11) has a configuration parameter called OsTrustedApplicationWithProtection:
"Parameter to specify if a trusted OS-Application is executed with memory protection or not. true: OS-Application runs within a protected environment. This means that write access is limited. false: OS-Application has full write access (default)."
It sounds as if your trusted OS-Application is configured with true here instead of false, and is therefore also write-restricted.
On the other hand, ch. 14 "Outlook on Memory Protection Configuration" states:
"As stated before, memory protection configuration is not standardized yet. Nevertheless it seems helpful to contribute a recommendation in this chapter, how the configuration might work."
Ch. 14.1 also gives hints on how the MPU configuration should be handled (the SWCD/BSWMD specify the (CODE/VAR/CONST/...) memory sections and linker input sections), so you should not just use arbitrary memory addresses and access them directly, but go through the AUTOSAR memory mapping mechanism, as sketched below.
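For illustration, a minimal sketch of that memory-mapping pattern in C (the MyApp prefix, the section suffix, and MyApp_MemMap.h are placeholder names; the real section names and the generated MemMap header come from your SWCD/BSWMD and the MemMap module):

/* The variable goes into a named linker input section via the generated
   MemMap header; the MPU region of the owning OS-Application is then
   derived from that section instead of from hard-coded addresses. */
#include "Std_Types.h"

#define MyApp_START_SEC_VAR_INIT_32
#include "MyApp_MemMap.h"            /* opens e.g. section ".MyApp.VAR_INIT_32" */

static uint32 MyApp_SharedCounter = 0u;

#define MyApp_STOP_SEC_VAR_INIT_32
#include "MyApp_MemMap.h"            /* closes the section again */

The linker script (or the OS generator) then assigns that section to the trusted application's MPU region, so no address literals like 0x70000000 appear in the code.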
And what I do not understand in your case: why do you actually restrict the trusted application by giving the MPU configuration just this range, instead of restricting your non-trusted application's access?

Related

Enabling NUMA on IIS when migrating to Azure VMs

So I'm trying to migrate a legacy website from an AWS VM to an Azure VM, and we're trying to get the same level of performance. The problem is I'm pretty new to setting up sites on IIS.
The authors of the application are long gone and we struggle with the application for many reasons. One of the problems with the site is that when it's "warming up" it pulls back a ton of data to store in memory for the entire day. This involves executing long-running stored procedures and in-memory processing, which means the first load of certain pages takes up to 7 minutes. It then uses a combination of in-memory data and output caching to deliver the pages.
Sessions do seem to be in use, although the site can recover session data from the database via some relatively long-running database operations, so it's better to stick with sessions where possible, which is why I'm avoiding a web garden.
That's a little bit of background, but my question is really about upping the performance on IIS. When I went through their settings on the AWS box, they had something called NUMA enabled with what appears to be the default settings, and the maximum worker processes set to 0, which seems to enable NUMA. I don't know why they enabled NUMA or if it was necessary, but I am trying to get as close to a like-for-like transition as possible, and if it gives extra performance in this application we'll probably need it!
On the Azure box I can see options to set the maximum worker processes to 0, but no NUMA options. My question is whether NUMA is enabled with those default options, or whether there is something further I need to do to enable it.
Both are production-sized VMs, but the one on Azure I'm working with is a Standard D16s_v3 with 16 vCores and 64 GB of RAM. We are load balancing across a few of them.
If you don't see the option in the Azure VM, it's because the server is using symmetric processing and isn't NUMA-aware.
Now to optimize your loading a bit:
HUGE CAVEAT - if you have memory-leak-type issues, don't do this! To make sure you don't, put a private bytes limit of roughly 70% of the server's memory on the app pool. If you see that limit get hit and trigger an IIS recycle (that event is logged by default), then you may want to ignore the further steps. Either that, or mess around with perfmon (or, more easily, iteratively check peak private bytes in Task Manager, where you'll have to add that column in the Details pane).
Change your app pool startup mode to: AlwaysRunning
Change your web app to preloadEnabled="true"
Set an initialization page in your web.config (so that preloading knows what to load).
Edit: forgot some steps. Make sure your idle timeout is cleared, or set it to midnight.
Make sure you don't have the default recycle time enabled; clear that out.
If you want to get fancy you can add a loading page and set an HTTP refresh, or do further customizations as described here:
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization

User Defined Functions Security Risk

I read that if I set enable_user_defined_functions to true in cassandra.yaml, then user-defined functions (UDFs) present a security risk, since they are executed on the server side. In Cassandra 3.0 and later, UDFs are executed in a sandbox to contain the execution of malicious code. They are disabled by default.
My question is: are they executed in the sandbox after I set enable_user_defined_functions to true?
Unless you explicitly set enable_user_defined_functions_threads to false (which you really shouldn't do), the UDFs will be run asynchronously in a thread pool that is locked down with a limited security manager and a special class loader.
You should still only allow trusted sources for your UDF code, though, in case there are security bugs.

Prevent truncating of a shared mapped file?

I have two processes that communicate through shared memory. One is privileged and trusted, the other is an LXC process and untrusted.
The trusted process creates a file in a directory that the LXC process can access. It sets it to a fixed size with ftruncate.
Now it shares that file with the untrusted process by both of them mapping it read+write.
I want the untrusted process to be able to read and write to the mapping, which is safe, because the trusted process makes no assumptions about what has been written and carefully validates it.
However, with write access the untrusted process can ftruncate the file to zero (it can't increase its size due to mount restrictions), and this causes a SIGBUS in the privileged process (I confirmed this).
Since there are many untrusted processes which communicate with the trusted one, this is basically a denial of service attack on the entire system, and Linux permits it. Is there any way to prevent this?
I could deny access to ftruncate, but there may be other system calls that do the same thing. Surely there is a way to allow a process to write to a file but not to resize it, rename it, or make any other metadata changes?
The best I can think of is to fall back to the archaic System V shared memory, because that cannot be resized at all on Linux (not even by the privileged process).
Since Linux 3.17 you can use file seals for that purpose. They are supported only on tmpfs, so they will work with POSIX shared memory and with shared files created with memfd_create(). Before handing the file descriptor to the untrusted process, call fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK), and your trusted process is safe from SIGBUS.
For details see manual pages for memfd_create() and fcntl().
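A minimal sketch of the trusted side (assuming Linux >= 3.17 and a glibc that exposes memfd_create(); the name, the size, and the way the fd is passed on are illustrative):

/* Create a sealed shared-memory file, size it, and forbid shrinking before
   the descriptor is handed to the untrusted process. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_SIZE (1024 * 1024)

int main(void)
{
    int fd = memfd_create("ipc-region", MFD_ALLOW_SEALING);
    if (fd < 0) { perror("memfd_create"); return 1; }

    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* From now on, any ftruncate() that would shrink the file fails with
       EPERM, so mapped pages cannot disappear and raise SIGBUS.
       F_SEAL_SEAL stops the untrusted side from adding further seals. */
    if (fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_SEAL) < 0) {
        perror("fcntl(F_ADD_SEALS)");
        return 1;
    }

    char *map = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }
    map[0] = 0;   /* the region is usable as before */

    /* fd can now be passed to the untrusted process, e.g. over a UNIX socket
       with SCM_RIGHTS; it can read and write the mapping but not shrink it. */
    return 0;
}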

Architecture to Sandbox the Compilation and Execution Of Untrusted Source Code

The SPOJ is a website that lists programming puzzles, then allows users to write code to solve those puzzles and upload their source code to the server. The server then compiles that source code (or interprets it if it's an interpreted language), runs a battery of unit tests against the code, and verifies that it correctly solves the problem.
What's the best way to implement something like this - how do you sandbox the user input so that it can not compromise the server? Should you use SELinux, chroot, or virtualization? All three plus something else I haven't thought of?
How does the application reliably communicate results outside of the jail while also ensuring that the results are not compromised? How would you prevent, for instance, an application from writing huge chunks of nonsense data to disk, or other malicious activities?
I'm genuinely curious, as this just seems like a very risky sort of application to run.
A chroot jail executed from a limited user account sounds like the best starting point (i.e. NOT root or the same user that runs your web server).
To prevent huge chunks of nonsense data being written to disk, you could use disk quotas or a separate volume that you don't mind filling up (assuming you're not testing in parallel under the same user - or you'll end up dealing with annoying race conditions)
If you wanted to do something more scalable and secure, you could use dynamic virtualized hosts with your own server/client solution for communication: you have a pool of 'agents' that receive instructions to copy and compile from X repository or share, then execute a battery of tests and log the output back via the same server/client protocol. Your host process can watch for excessive disk usage and report warnings if required, the agents may or may not execute the code under a chroot jail, and if you're super paranoid you would destroy the agent after each run and spin up a new VM when the next sample is ready for testing. If you're doing this at large scale in the cloud (e.g. 100+ agents running on EC2), you only ever have enough spun up to accommodate demand and therefore reduce your costs. Again, if you're going for scale you can use something like Amazon SQS to buffer requests, or if you're doing an experimental sample project you could do something much simpler (just think distributed parallel processing systems, e.g. SETI@home).
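A rough C sketch of the "chroot jail under a limited user" starting point (the jail path, the UID/GID, and the resource limits are illustrative placeholders; the parent must start as root for chroot() to work, and a real grader would add seccomp/namespaces on top):

/* Fork, confine the child in a chroot jail as an unprivileged user with
   resource limits, exec the untrusted program, and collect its exit status. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void confine_and_run(const char *jail, const char *prog)
{
    struct rlimit cpu = { 5, 5 };                      /* 5 seconds of CPU time */
    struct rlimit mem = { 256u << 20, 256u << 20 };    /* 256 MiB address space */
    struct rlimit fsz = { 8u << 20, 8u << 20 };        /* 8 MiB per written file */

    if (chroot(jail) != 0 || chdir("/") != 0) { perror("chroot"); _exit(126); }

    /* Drop root only after chroot(); the order matters. 65534 = nobody. */
    if (setgid(65534) != 0 || setuid(65534) != 0) { perror("drop privileges"); _exit(126); }

    setrlimit(RLIMIT_CPU, &cpu);     /* kills runaway computations */
    setrlimit(RLIMIT_AS, &mem);      /* caps memory usage */
    setrlimit(RLIMIT_FSIZE, &fsz);   /* caps how much it can write to any file */

    execl(prog, prog, (char *)NULL); /* prog is a path inside the jail */
    perror("execl");
    _exit(127);
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        confine_and_run("/srv/jail", "/solution");

    int status = 0;
    waitpid(pid, &status, 0);
    printf("exit status: %d\n", WEXITSTATUS(status));
    return 0;
}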

User-Contributed Code Security

I've seen some websites that can run code from the browser, and the code is evaluated on the server.
What are the security best practices for applications that run user-contributed code, beyond preventing it from accessing and changing the server's sensitive information?
(for example, using a Python with a stripped-down version of the standard library)
How do you prevent DoS, such as non-halting and/or CPU-intensive programs (we can't use static code analysis here)? What about DoSing the type-check system?
Python, Prolog and Haskell are suggested examples to talk about.
The "best practice" (am I really the only one who hates that phrase?) is probably just not to do it at all.
If you really must do it, set it up to run in a virtual machine (and I don't mean something like a JVM; I mean something that hosts an OS) so it's easy to restore the VM from a snapshot (or whatever the VM in question happens to call it).
In most cases, though, you'll need to go a bit beyond just that. Without some extra work to lock it down, even a VM can use enough resources to reduce responsiveness that it becomes difficult to kill and restart it (you usually can eventually, but "eventually" is rarely what you want). You also generally want to set some quotas to limit its total CPU usage, probably limit it to using a single CPU (and run it on a machine with at least two), limit its total memory usage, etc. On Windows, for example, you can do at least most of that by starting the VM in a job object and limiting the resources available to the job object.
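The job-object part can look roughly like this (a Windows-only sketch; the memory limit, the affinity mask, and the guest.exe child are illustrative placeholders, and in practice you would launch the VM process here):

/* Create a job object with hard resource limits, then start the untrusted
   process suspended, assign it to the job, and only then let it run. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE job = CreateJobObjectW(NULL, NULL);
    if (!job) { printf("CreateJobObject failed: %lu\n", GetLastError()); return 1; }

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION lim = {0};
    lim.BasicLimitInformation.LimitFlags =
        JOB_OBJECT_LIMIT_PROCESS_MEMORY |      /* cap per-process memory      */
        JOB_OBJECT_LIMIT_AFFINITY |            /* pin it to one CPU           */
        JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;    /* kill children if we die     */
    lim.ProcessMemoryLimit = 512 * 1024 * 1024;  /* 512 MiB */
    lim.BasicLimitInformation.Affinity = 0x1;    /* CPU 0 only */
    SetInformationJobObject(job, JobObjectExtendedLimitInformation, &lim, sizeof(lim));

    STARTUPINFOW si = {0};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi = {0};
    WCHAR cmd[] = L"guest.exe";                  /* placeholder child process */
    if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                       NULL, NULL, &si, &pi)) {
        AssignProcessToJobObject(job, pi.hProcess);
        ResumeThread(pi.hThread);                /* runs only inside the job  */
        WaitForSingleObject(pi.hProcess, INFINITE);
    }
    return 0;
}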
