Basic Software configuration in AUTOSAR development

At power-on, which component of the basic software stack is initialized first with respect to AUTOSAR? In other words, how is the BSW stack initialized?
Can anyone offer suggestions?

Briefly, EcuM is initialized first and enters its STARTUP state. EcuM then initializes the basic software, including drivers, in several groups called init blocks. EcuM then starts the OS and, after that, also starts and initializes the BSW Mode Manager (BswM) and starts the scheduler. From that point on, mode machines are operational, meaning that the next steps depend on the BswM configuration for the particular ECU.
For details, refer to the 'Specification of ECU State Manager' in AUTOSAR, in particular section 7.3 (in AUTOSAR version 4.3.1).
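For orientation, here is a heavily simplified C sketch of that sequence as it typically looks in an AUTOSAR 4.x stack (EcuM flexible). The callout and module names follow the AUTOSAR naming scheme, but the real code is generated from the ECU configuration and is vendor-specific, so treat the grouping of the init blocks and the exact signatures as assumptions, not as the normative interface.

```c
/*
 * Illustrative sketch only: roughly the startup sequence described above
 * for an AUTOSAR 4.x stack (EcuM flexible). The actual code is generated
 * from the ECU configuration; init block contents and signatures vary by
 * AUTOSAR release and vendor.
 */

void EcuM_Init(void)                  /* called from the startup code / main() */
{
    /* enter the STARTUP state, then run the pre-OS init blocks */
    EcuM_AL_DriverInitZero();         /* init block 0: e.g. Det, Dem pre-init   */
    EcuM_AL_DriverInitOne();          /* init block 1: e.g. Mcu, Port, Gpt, Wdg */
    StartOS(OSDEFAULTAPPMODE);        /* start the OS; does not return          */
}

void EcuM_StartupTwo(void)            /* runs in an OS task after StartOS()     */
{
    SchM_Init(NULL_PTR);              /* BSW scheduler                          */
    BswM_Init(NULL_PTR);              /* BSW Mode Manager: from here on, its    */
                                      /* mode machines drive the remaining      */
                                      /* initialization (ComM, NvM, Rte, ...)   */
}
```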

EcuM
BswM
SchM
ComM
Further elements depend on the configuration.

Related

How does RunKit make their virtual servers?

There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers; every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies to make sandboxed environment. For example, RunKit seems to have a light solution:
It runs a completely standard copy of Node.js on a virtual server
created just for you. Every one of npm's 300,000+ packages are
pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker.
Part of that setup is being open-sourced, as mentioned in this HackerNews thread.
It uses Linux containers, currently powered by Docker.
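For a concrete picture of the checkpoint/restore capability described above, here is a minimal sketch using CRIU's C API (libcriu): it dumps a whole process tree into a directory of image files that can later be restored on the same or another machine. The directory name, the option choices, and the trimmed error handling are illustrative assumptions; check the CRIU documentation for the exact return-value semantics.

```c
/*
 * Minimal libcriu sketch: checkpoint a process tree into ./checkpoint.
 * Assumes libcriu is installed and the CRIU service is reachable.
 * Build (roughly): gcc checkpoint.c -lcriu
 */

#include <criu/criu.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid of process tree to checkpoint>\n", argv[0]);
        return 1;
    }

    int dir_fd = open("checkpoint", O_DIRECTORY);  /* where image files go */

    criu_init_opts();
    criu_set_pid(atoi(argv[1]));        /* root of the process tree to dump */
    criu_set_images_dir_fd(dir_fd);
    criu_set_shell_job(true);           /* the tree was started from a shell */
    criu_set_leave_running(true);       /* keep it running after the dump    */

    if (criu_dump() < 0) {              /* negative return means failure     */
        fprintf(stderr, "checkpoint failed\n");
        return 1;
    }

    /* Restoring later is the mirror image:
       criu_init_opts(); criu_set_images_dir_fd(dir_fd); criu_restore();    */
    return 0;
}
```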

How to compute the minimal capabilities' set for a process?

What's the best way to compute a minimal set of Linux capabilities for any process?
Suppose you're hardening an operating system: some of your tools may require CAP_NET_ADMIN and related network privileges, while other tools may require CAP_SYS_NICE. There should be a way to tell, for each executable, which capabilities are really required.
Two possible approaches to determine required capabilities at runtime:
Repeatedly run your program under strace without root privileges. Determine which system calls failed with EPERM and add the corresponding capabilities to your program. Repeat this until all required capabilities are gathered.
Use SystemTap, DTrace, or kprobes to log or intercept the capability checks the kernel makes for your program (e.g. use capable from the BCC tools suite, as described here).
Unit tests with good coverage will help a lot, I guess. Also note that capabilities(7) manual page lists system calls that may require each capability (although it is not a complete list).
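To complement the strace/EPERM approach, one way to verify a candidate minimal set empirically is to drop every other capability from the bounding set and then exec the program under test: if it still works, the candidate set was sufficient. Below is a rough C sketch of that idea; the candidate capabilities and the target program are assumptions for illustration, and it needs to run as root.

```c
/*
 * Verify a candidate capability set: drop everything else from the bounding
 * set, then exec the target program. After exec, the (root) child holds only
 * the kept capabilities; if it then fails with EPERM, the set was too small.
 * Candidate caps below are an example, not a recommendation. Run as root.
 */

#define _GNU_SOURCE
#include <linux/capability.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    int keep[] = { CAP_NET_ADMIN, CAP_NET_RAW };   /* candidate minimal set */
    int nkeep = sizeof(keep) / sizeof(keep[0]);

    /* walk all capabilities known to this kernel */
    for (int cap = 0; prctl(PR_CAPBSET_READ, cap, 0, 0, 0) >= 0; cap++) {
        int wanted = 0;
        for (int i = 0; i < nkeep; i++)
            if (cap == keep[i])
                wanted = 1;
        if (!wanted && prctl(PR_CAPBSET_DROP, cap, 0, 0, 0) < 0)
            perror("PR_CAPBSET_DROP");
    }

    execvp(argv[1], &argv[1]);          /* run the program under test */
    perror("execvp");
    return 1;
}
```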
Update:
The article referenced by @RodrigoBelem mentions the capable_probe module, which is based on kprobes.
The original article introducing this module was "POSIX file capabilities: Parceling the power of root", and it is no longer available (it was hosted here). But you can still find the source code and some docs on the Internet.

How does Docker share resources

I've been looking into Docker and I understand from this post that running multiple Docker containers is meant to be fast because they share kernel-level resources through the "LXC host". However, I haven't found any documentation specific to the Docker configuration that explains how this relationship works, and at what level resources are shared.
What's the involvement of the Docker image and the Docker container with shared resources and how are resources shared?
Edit:
When talking about "the kernel" where resources are shared, which kernel is this? Does it refer to the host OS (the level at which the Docker binary lives), or does it refer to the kernel of the image the container is based on? Won't containers based on different Linux distributions need to run on different kinds of kernels?
Edit 2:
One final edit to make my question a little clearer: I'm curious as to whether Docker really does not run the full OS of the image, as they suggest on this page under "How is Docker different than a VM".
The following statement seems to contradict the diagram above, taken from here:
A container consists of an operating system, user-added files, and
meta-data. As we've seen, each container is built from an image.
Strictly speaking, Docker no longer has to use LXC (the userspace tools). It does still use the same underlying kernel technologies, through its in-house container library, libcontainer. In fact, Docker can use various system tools for the abstraction between process and kernel.
The kernel need not be different for different distributions, but you cannot run a non-Linux OS. The kernel of the host and of the containers is the same, but it supports a sort of context awareness to separate them from one another.
Each container does contain a separate OS in every way beyond the kernel. It has its own user-space applications / libraries and for all intents and purposes it behaves as though it has its own kernel.
It's not so much a question of which resources are shared as which resources aren't shared. LXC works by setting up namespaces with restricted visibility -- into the process table, into the mount table, into network resources, etc -- but anything that isn't explicitly restricted and namespaced is shared.
This means, of course, that the backends for all these components are also shared -- you don't need to pretend to have a different set of page tables per guest, because you aren't pretending to run more than one kernel; it's all the same kernel, the same memory allocation pools, the same hardware devices doing the bit-twiddling (versus all the overhead of emulating hardware for a VM and having each guest separately twiddle its virtual devices), the same block caches, and so on.
Frankly, the question is almost too broad to be answered, as the only real answer as to what is shared is "almost everything", and to how it's shared is "by not doing duplicate work in the first place" (as conventional VMs do by emulating hardware rather than sharing just one kernel interacting with the real hardware). This is also why kernel exploits are so dangerous in LXC-based systems -- it's all one kernel, so there's no nontrivial distinction between ring 0 in one container and ring 0 in another.
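A small C program makes the "shared kernel, namespaced view" point concrete: the child below gets its own UTS and PID namespaces (its own hostname, its own PID 1), yet uname() reports exactly the same kernel as on the host. This is a minimal sketch, not how Docker itself is implemented (Docker sets up many more namespaces plus cgroups and a filesystem), and it needs root or CAP_SYS_ADMIN to run.

```c
/* Demonstrates namespacing on a single shared kernel: the child has its own
 * hostname and is PID 1 in its own PID namespace, but the kernel release it
 * sees is identical to the host's. Requires root (or CAP_SYS_ADMIN). */

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    struct utsname u;
    sethostname("container", strlen("container"));  /* visible only in here */
    uname(&u);
    printf("inside : pid=%d hostname=%s kernel=%s\n",
           getpid(), u.nodename, u.release);         /* pid 1, same kernel   */
    return 0;
}

int main(void)
{
    struct utsname u;
    uname(&u);
    printf("outside: pid=%d hostname=%s kernel=%s\n",
           getpid(), u.nodename, u.release);

    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}
```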

What is the difference between lmctfy and lxc

Recently Google open-sourced lmctfy, its container stack. I don't understand it very well and have a few questions.
What are the differences between lmctfy and lxc and docker?
What problem does Google solve with lmctfy?
Thanks
One of the lmctfy developers here. I'll try to start with one-liners and put in some more details later.
The Linux kernel supports cgroups for resource isolation (CPU, memory, block I/O, network, etc.), which doesn't require starting virtual machines. It also provides namespaces to completely isolate an application's view of the operating environment (process trees, network, user IDs, mounts).
LXC combines cgroup and namespace support to provide an isolated environment for apps. Docker builds on top of LXC to add image management and deployment services.
lmctfy works at the same level as LXC. The current release builds on cgroups, and the next release will add namespace support.
Given that Docker works at a higher level, I'll just focus on differences between lmctfy and lxc.
Resource management API: The LXC API is built around namespace support and exports cgroup support almost transparently. The Linux cgroup API is unstable and hard to deal with. With lmctfy, we tried to provide intent-based resource configuration without users having to understand the details of cgroups.
Priority - Overcommitment and sharing: lmctfy is built to provide support for resource sharing and for overcommitting machines with batch workloads that can run when the machine is relatively idle. All applications specify a priority and latency requirements. lmctfy manages all cgroup details to honor the priority and latency requirements for each task.
Programmatic interface: lmctfy is the lowest block of app management for Google's cloud. It's built to work with other tools and programs. We feel it's much better specified and stable for building more complicated toolchains above it.
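To illustrate why an intent-based API is attractive, here is roughly what the raw cgroup (v1) file interface looks like when you express "give this container 256 MB and half a CPU" by hand. The mount points and controller layout are assumptions about a typical cgroup v1 setup of that era; this is the plumbing that a library like lmctfy hides behind a higher-level specification.

```c
/* Raw cgroup v1 plumbing: create a group, set limits, move a task into it.
 * Assumes cgroup v1 controllers mounted under /sys/fs/cgroup (memory, cpu);
 * run as root. Illustrative only -- modern systems typically use cgroup v2. */

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fputs(value, f);
    fclose(f);
}

int main(void)
{
    char pid[32];

    mkdir("/sys/fs/cgroup/memory/demo", 0755);
    mkdir("/sys/fs/cgroup/cpu/demo", 0755);

    /* "at most 256 MB and half a CPU", spelled out in controller knobs */
    write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "268435456");
    write_file("/sys/fs/cgroup/cpu/demo/cpu.cfs_period_us", "100000");
    write_file("/sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us",  "50000");

    /* move the current process into the group */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_file("/sys/fs/cgroup/memory/demo/tasks", pid);
    write_file("/sys/fs/cgroup/cpu/demo/tasks", pid);
    return 0;
}
```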
lmctfy has been managing all of Google's resource isolation needs since 2007. Until now it was entangled with other pieces of Google infrastructure. During a redesign, we were able to separate this layer out cleanly and thought it would be fun to put it out and give back.
I gave a Linux Plumbers talk in September about lmctfy. You can check some of the details there:
http://www.linuxplumbersconf.org/2013/ocw/events/LPC2013/tracks/153
slides: http://www.linuxplumbersconf.org/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf

Name of a specific DRM technique

Let's say I make a program. I only want this program to run on the computers on my internal network. If I move the program to a computer that is not on my network, then my program will not run. Basically, I want to be able to control which computers can run my program by having the client validate itself with a server. I would guess this would be a subset of DRM, but what is the name of what I am trying to do?
Maybe this?
http://en.wikipedia.org/wiki/Key_server_(software_licensing)
What you described is widely supported by systems such as the Orion software license management system. A single license server running on the company's global WAN/VPN/intranet manages the set number of licenses. Depending on how the licensing is configured, the application can automatically checkout a license on startup and return it on exit, or do a longer term checkout (or activation) which means that specific system has that license and will retain it through system or application shutdown/startup cycles. The application is also automatically locked to that system on checkout so it can't just be copied to another system. The license server ensures that not more than the licensed number of instances are active at any one time.
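For a feel of the mechanism, here is a very rough sketch of the client side of such a checkout scheme: at startup the program asks a license server on the internal network for a license and refuses to run if it cannot get one. The host name, port, and line protocol are entirely made up for illustration; a real product (Orion, a key server, etc.) ships its own SDK and handles node-locking, encryption, and license return properly.

```c
/* Hypothetical license-checkout client: connects to an internal license
 * server, sends a checkout request with its hostname, and only continues
 * if the server grants a license. Protocol, host and port are made up. */

#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int checkout_license(const char *server, const char *port)
{
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo(server, port, &hints, &res) != 0)
        return 0;                                   /* server unreachable */

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return 0;
    }
    freeaddrinfo(res);

    char host[64] = {0}, reply[64] = {0};
    gethostname(host, sizeof(host) - 1);

    /* hypothetical protocol: "CHECKOUT <hostname>\n" -> "GRANT\n" or "DENY\n" */
    dprintf(fd, "CHECKOUT %s\n", host);
    read(fd, reply, sizeof(reply) - 1);
    close(fd);

    return strncmp(reply, "GRANT", 5) == 0;
}

int main(void)
{
    if (!checkout_license("license.internal.example", "7070")) {
        fprintf(stderr, "no license available on this network; exiting\n");
        return 1;
    }
    /* ... rest of the application ... */
    return 0;
}
```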
There are a number of issues you need to think about with such a system, such as:
What if a user wants to obtain a license on a system that lacks a network connection to the server?
What happens if the user's system crashes: how can they release the license so it can be used elsewhere?
Can a user return the license so someone else can use it, and do you want to control how frequently this can occur?
Do you want to control other limits on your product, such as a time limit, or configure product features?
Dominic
