What is the main motivation for introducing Adaptive AUTOSAR?
The information provided by the AUTOSAR consortium is: "AP provides mainly high-performance computing and communication mechanisms and offers flexible software configuration."
High-performance computing will be achieved through many-/multi-core processors,
Ethernet will be used for communication,
and applications will be programmed in C++, with POSIX as the OS interface.
My doubts are:
Multi-core is already used in the Classic Platform.
Since AUTOSAR is purely software, how will the use of many-core processors, FPGAs, etc. be considered within the AUTOSAR scope?
Ethernet is also available for the Classic Platform.
How does C++ fulfill the goals of flexibility, security, and high computation?
What is the contribution of POSIX to Adaptive AUTOSAR?
Classic AUTOSAR (especially AUTOSAR OS) is based on static configuration of OS objects such as tasks, mainly because of and through the largely OSEK-like OS (simply put, AUTOSAR OS is OSEK++).
The main point of Adaptive AUTOSAR is to change that concept by introducing dynamically creatable OS objects. Imagine that an Adaptive AUTOSAR system could load executables which were unknown at build time.
(Not discussing here whether that is a safe/secure design.)
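To make the contrast concrete, here is a minimal sketch in plain C++ (the names are illustrative, not real OSEK or Adaptive APIs): a Classic-style system fixes its task set in a build-time table, while an Adaptive-style system can create OS objects for work discovered at run time.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Classic AUTOSAR style: every task is known at build time and lives in a
// static configuration table (illustrative only, not real OSEK syntax).
struct StaticTaskConfig { const char* name; int priority; };
constexpr StaticTaskConfig kTaskTable[] = {
    {"Task10ms", 3},
    {"Task100ms", 1},
};

// Adaptive style: OS objects (here, plain threads) can be created at run
// time for work that was unknown when the image was built.
int run_dynamic_jobs(const std::vector<std::function<int()>>& jobs) {
    std::vector<std::thread> workers;
    std::vector<int> results(jobs.size());
    for (size_t i = 0; i < jobs.size(); ++i)
        workers.emplace_back([&, i] { results[i] = jobs[i](); });
    for (auto& w : workers) w.join();
    int sum = 0;
    for (int r : results) sum += r;
    return sum;
}
```

The dynamic variant accepts an arbitrary list of jobs at run time, which is exactly what a statically configured OSEK-style task table cannot do.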
See my answers:
Multi-core is already used in the Classic Platform.
Yes, but those are microcontroller (uC) cores, whose performance capability is completely different from microprocessor (uP) cores, i.e. state-of-the-art cores such as the Cortex-A53 or A57.
Why? Because uP cores are designed for high-performance applications:
a uC can hardly render HD video, but a uP can.
Since AUTOSAR is purely software, how will the use of many-core processors, FPGAs, etc. be considered within the AUTOSAR scope?
AUTOSAR does not refer only to software; it also implies hardware requirements, as follows.
E.g. you could not port a POSIX-compliant OS to a typical uC.
An FPGA can be configured as an SoC, so you can even have a uC and a uP running on the same board; you are then free to run AUTOSAR Classic on the uC and AUTOSAR Adaptive on the uP.
Ethernet is also available for the Classic Platform.
Adaptive AUTOSAR does not even define what the communication protocol is.
It just specifies ara::com, along with many specifications and requirements, so that a vendor or AUTOSAR provider can implement COM in various ways, in line with the service-oriented motivation.
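To illustrate the service-oriented idea behind this, here is a heavily simplified, hypothetical sketch (not the real ara::com API): consumers find and call services through an abstraction, and the vendor is free to choose the transport (SOME/IP, shared memory, ...) underneath.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of a service-oriented middleware: a registry
// decouples service consumers from the transport chosen by the vendor.
class ServiceRegistry {
public:
    using Handler = std::function<int(int)>;

    // A provider offers a named service (think of a skeleton in ara::com).
    void Offer(const std::string& service, Handler h) {
        services_[service] = std::move(h);
    }

    // A consumer looks the service up by name and calls it (think of a
    // proxy); it neither knows nor cares how the request is transported.
    int Call(const std::string& service, int arg) {
        return services_.at(service)(arg);
    }

private:
    std::map<std::string, Handler> services_;
};
```

The point is that only the names and signatures of services are fixed; how `Call` reaches the provider is an implementation detail of the platform.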
How does C++ fulfill the goals of flexibility, security, and high computation?
It is hard to explain everything here...
But to fulfill them, we need completely new platform support (called the Foundation in Adaptive AUTOSAR).
E.g. to handle safety we will not start an application via systemd (Linux) or init (Android), but need a completely new function to do it: the Execution Management of Adaptive AUTOSAR.
What is the contribution of POSIX to Adaptive AUTOSAR?
It relates only to the OS requirements: at least a certain set of system APIs must be supported by the OS. The list of required system APIs is defined by the POSIX PSE51 profile.
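As a small illustration, an application written against the PSE51 subset would restrict itself to calls like `pthread_create` and `clock_gettime`, while avoiding e.g. `fork()` or a full file system (a sketch; the exact API list is in the PSE51 profile itself):

```cpp
// PSE51 (the minimal realtime profile) includes threads, mutexes and
// clocks, but excludes e.g. fork() and a full file system. A portable
// application sticks to calls from that subset:
#include <pthread.h>
#include <time.h>

static void* worker(void* arg) {
    int* counter = static_cast<int*>(arg);
    ++*counter;                            // pthread_create/join: in PSE51
    return nullptr;
}

int count_with_pse51_thread() {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);  // clock_gettime: in PSE51
    int counter = 0;
    pthread_t t;
    pthread_create(&t, nullptr, worker, &counter);
    pthread_join(t, nullptr);
    return counter;
}
```

An OS that provides this subset (QNX, PikeOS, a trimmed Linux, ...) can in principle host an Adaptive stack; an OSEK-style uC OS cannot.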
I'm currently learning about low-level computing, like bootloaders and kernels, and stumbled on the VESA BIOS Extensions (VBE), the standard for graphics display controllers.
But after reading some documents about it, I'm not sure how the BIOS, developed by the motherboard manufacturer, can configure/utilize a graphics card which is completely independent from them.
I'm aware of VGA, which also turned out to be a graphics standard with BIOS functions, but it has specific I/O ports dedicated to certain functions that every VGA-compatible graphics card has. I'm not really sure about this, but I think the BIOS functions for VGA actually use these ports to provide functions like switching modes, etc.
However, Super VGA, which is kind of the reason VBE was created, as far as I know does not have any standardized port or MMIO for the extended features, and neither does VBE (at least I couldn't find any documentation about I/O ports or MMIO).
Since video cards nowadays come with even more proprietary ways to communicate with the CPU, and usually offer graphics driver binaries hiding the implementation details, I can't imagine how a BIOS extension can offer a unified way of utilizing a video card.
Thanks for reading.
The VBE standard isn't really agnostic to graphics cards. It is a BIOS interface, so it doesn't give many details about how the firmware implementing the BIOS needs to access the graphics card. You can find a link to the VBE 3.0 standard on the Wikipedia page for VBE. In the standard, it states:
The VBE standard defines a set of extensions to the VGA ROM BIOS services. These functions can be accessed under DOS through interrupt 10h, or be called directly by high performance 32-bit applications and operating systems other than DOS.
These extensions also provide a hardware-independent mechanism to obtain vendor information, and serve as an extensible foundation for OEMs and VESA to facilitate rapid software support of emerging hardware technology without sacrificing backwards compatibility.
What I understand here is that it isn't the way software interacts with the card that is standardized. Instead, VBE standardizes how to find the information needed to interact with the card. The motherboard manufacturer then writes code that uses this information to implement the standardized BIOS interface that your OS uses to show graphics on your screen.
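As a concrete illustration, software asks the video BIOS to fill in an information block via INT 10h, AX=4F00h, and then reads everything it needs (supported modes, memory size, OEM strings) from that block. The layout below reflects my reading of the VBE 3.0 spec; verify against the document itself before relying on it.

```cpp
#include <cstddef>
#include <cstdint>

// Layout of the VbeInfoBlock that INT 10h, AX=4F00h fills in (per my
// reading of VBE 3.0). The OS queries this block instead of poking
// vendor-specific registers directly.
#pragma pack(push, 1)
struct VbeInfoBlock {
    char     VbeSignature[4];   // "VESA" (caller writes "VBE2" to ask for 3.0 data)
    uint16_t VbeVersion;        // e.g. 0x0300
    uint32_t OemStringPtr;      // real-mode far pointer (segment:offset)
    uint32_t Capabilities;
    uint32_t VideoModePtr;      // far pointer to a 0xFFFF-terminated mode list
    uint16_t TotalMemory;       // in 64 KiB units
    uint16_t OemSoftwareRev;
    uint32_t OemVendorNamePtr;
    uint32_t OemProductNamePtr;
    uint32_t OemProductRevPtr;
    uint8_t  Reserved[222];
    uint8_t  OemData[256];
};
#pragma pack(pop)

static_assert(sizeof(VbeInfoBlock) == 512,
              "VBE 3.0 defines a 512-byte information block");
```

Everything behind those far pointers is produced by the card's own option ROM, which is how the interface stays uniform while the hardware underneath is proprietary.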
Most of the time, your OS doesn't use the discrete GPU for graphics at first. Instead, it uses the GPU integrated in the CPU to kickstart the OS until it can load the proprietary driver for the more powerful discrete GPU.
During installation, it can use the integrated GPU and automatically query vendor web APIs to find the proper driver for your discrete GPU, installing it in the background while you are using the computer. It might ask you to reboot after the installation if that is needed.
More recent discrete GPUs are PCI Express devices, so the mechanism to detect the type of card and the vendor is standardized, and a list of vendor and device IDs is maintained by the PCI-SIG, the non-profit organization that maintains PCI. This mechanism is MMIO, as you mentioned: you read some standard registers mapped into main memory and they return IDs that you can compare against the public device/vendor lists from the PCI-SIG.
After this, it comes down to the driver model of the OS and its mechanisms for supporting generic drivers (drivers you can use even though you don't know how the device works internally). The most common type on Linux for graphics is the character device, but AFAIK there are others. The graphics integrated in the CPU are often very open because their interface documentation is available for download, free of charge, from the corresponding vendors like Intel or AMD.
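For a flavor of how standardized the detection is, here is a sketch of the legacy PCI "configuration mechanism #1" address encoding (PCIe systems typically use ECAM/MMIO instead, but the bus/device/function/register addressing idea is the same; the actual port I/O requires kernel privileges):

```cpp
#include <cstdint>

// Legacy PCI configuration mechanism #1: firmware/OS writes this address to
// I/O port 0xCF8, then reads the dword (e.g. the vendor/device ID pair at
// register 0x00) from port 0xCFC. This sketch shows only the address
// encoding, which is the standardized part.
uint32_t pci_config_address(uint8_t bus, uint8_t device,
                            uint8_t function, uint8_t reg) {
    return 0x80000000u                      // enable bit
         | (uint32_t(bus) << 16)            // 8-bit bus number
         | (uint32_t(device & 0x1F) << 11)  // 5-bit device number
         | (uint32_t(function & 0x07) << 8) // 3-bit function number
         | (reg & 0xFCu);                   // dword-aligned register offset
}
```

Reading register 0x00 of bus 0, device 0, function 0 this way returns the vendor ID in the low 16 bits, which is what gets compared against the PCI-SIG lists.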
You can read my answer at https://cs.stackexchange.com/questions/149203/how-bits-translated-into-text-on-the-screen/149215#149215 for some more information, and some of my other answers as well. The information I give is probably not 100% accurate, but it is a good starting point for gathering more precise/accurate information from the actual standards and documentation (some of them might not be free, or may even be quite costly, like PCI, which can cost thousands). Anyway, don't be scared to dig into the actual standards. They are not as hard to read as people think, especially if you are only looking for general knowledge about how things work. You can probably just skim the documents and get a proper high-level overview.
I'm confusing myself at the moment about how the CPU relates to the TPM.
When I tried learning about Apple's Enclave (TPM), the video I watched made it seem like the TPM is a separate processing unit connected to the CPU. As in the TPM itself is a microprocessor connected to the main processing unit.
However, when I tried to learn about ARM TrustZone TPM (found in Android based devices), the article I am reading made it seem like the TPM is within the CPU, not separate. The article specifically states "ARM TrustZone Technology is a hardware-based solution embedded in the ARM processor cores that allows the cores to run two execution environments".
I am having a hard time finding the answer online. I just want to understand the data flow so I can better understand mobile based security options for applications.
Think of the TPM as a specification that describes the inputs and outputs necessary for its operation. Theoretically you could implement this specification purely in software and remain compliant with it. You could also implement it as firmware running on another chip. However, the more removed from the host OS and other hardware the implementation is, the more secure it's considered -- as it makes it harder to compromise the secrets it holds -- so the so-called "discrete implementation" is the preferred one, if it can be afforded.
While reading through Embedded Linux System Design and Development, I came across the following text:
So when we talk about the MIPS HAL it means the support for the MIPS processors and the boards built with MIPS processors. When we talk about a BSP we refer to the software that does not have the processor support software but just the additional software for supporting the board. The HAL can be understood as a superset of all supported BSPs and it additionally includes the processor-specific software.
What exactly is the hardware abstraction layer (talking in terms of Linux)? Is it in some way related to the BSP? From my understanding, the BSP is the board-specific code, such as the bootloader, the kernel core, specific drivers for the peripherals, etc. How does the HAL come to be a superset of the BSP?
I don't think the HAL is a Linux-specific concept, i.e. it's not a subsystem or a proper logical grouping of code. It's possible that the authors introduced it in order to help explain other concepts. In a way, operating system kernels can be described as HALs, since they abstract away the hardware, providing a uniform interface to user space. So the exact answer will only be found in the context of the book.
In bare-metal/RTOS-based embedded systems, the HAL layer, if present, sits on top of the drivers in order to provide the same API to higher layers even when the underlying drivers or physical components (like a peripheral, or the bus connecting the microcontroller to the peripheral) change. It is different from the board bring-up code or the bootloader, which run before the HAL becomes useful.
Hope that addresses your query.
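A tiny C++ sketch of the idea (the names are made up for illustration): higher layers depend on one interface, and the board-specific driver behind it can be swapped without touching them.

```cpp
#include <cstdint>

// Hypothetical HAL interface: application code depends only on this.
class TemperatureSensorHal {
public:
    virtual ~TemperatureSensorHal() = default;
    virtual int32_t read_millicelsius() = 0;
};

// One BSP might wire the sensor to I2C, another to SPI; application code
// that takes a TemperatureSensorHal& is unaffected by the swap. A stub
// stands in for the real driver here.
class FakeI2cSensor : public TemperatureSensorHal {
public:
    int32_t read_millicelsius() override { return 21500; }  // canned reading
};

// "Higher layer" code: knows nothing about the board or the bus.
int32_t to_whole_degrees(TemperatureSensorHal& hal) {
    return hal.read_millicelsius() / 1000;
}
```

In this picture the BSP supplies the concrete driver classes for one board, while the HAL is the interface plus the per-processor support that all those BSPs share, which is the "superset" relation the book describes.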
Can anybody let me know:
What is ISCP? Is it a hardware device or software used in telecom?
How is it related to IBM? Does IBM support ISCP software?
It would be helpful if a link were provided as well.
The Integrated Service Control Point (ISCP) was developed in the late 80s by two brilliant engineers, John O'Brien and Dave Babson, at Bellcore in NJ. Initial development was funded by Bell Atlantic. It was revolutionary in its use of "microcomputers" to implement flexible services using the SS7 network that sits alongside the regular telco networks. Beyond the services themselves, the infrastructure of the ISCP was designed for very high performance and reliability: the maximum downtime allowed under any circumstance was 1 minute/year. The ISCP provides services as software programs rather than hard-coding them in switches. IBM was chosen as the vendor to provide the hardware (RS/6000s and AIX) to implement the network interfaces and signalling services. The ISCP was meant to replace the SCPs, which weren't as flexible and were based on VAX/VMS.
ISCP (Integrated Services Control Point) means:
A software system that integrates service control point (SCP) features that allow for the efficient deployment of customized intelligent network services.
It is system software used in telecom.
IBM SmartCloud Orchestrator 2.2.0 supports installing the ISCP.cfg file and installing the first box. For a better understanding, please see the following links:
http://www-01.ibm.com/common/ssi/rep_ca/8/897/ENUS100-288/ENUS100-288.PDF
http://www.prnewswire.co.uk/news-releases/iscp-software-takes-on-new-look-for-intelligent-networks-156550635.html
http://www-01.ibm.com/support/knowledgecenter/SS4KMC_2.2.0/com.ibm.sco.doc_2.2/t_install_modify.html
I was recently elected programming team lead for my community college's engineering club. We're going to put a solar panel on a roof. The programming part involves
Controlling servos to adjust the orientation of the panel
Sending data on the electricity collected by the panel to a server (we haven't decided whether we want this to be via a wired or wireless connection.)
Although I know a fair amount about programming in general, I know next to nothing about networking or microcontrollers.
Can you recommend any books I can read to familiarize myself with these topics? Is there an obvious choice of programming language and library for either domain? Any Linux man pages I should read? I'm actually not sure whether the computer we'll put on the roof will be running Linux or Windows, so I'd appreciate recommendations for both OSes.
Will Beej's Guide to Network Programming (http://beej.us/guide/bgnet/) be useful, or is it only for internet applications and not local networks? Is there software that operates at a higher level than sockets that I should use instead?
If nothing else, give me some non-obvious keywords I can use to search on Google.
I'd look at the Arduino platform; it's a very simple platform for building exactly this kind of thing on top of it: http://arduino.cc
And from Wikipedia:
Arduino is a physical computing platform based on a simple open hardware design for a single-board microcontroller, with embedded I/O support and a standard programming language.[1] The Arduino programming language is based on Wiring and is essentially C/C++ (several simple transformations are performed before passing to avr-gcc).[2] The goal of the Arduino project is to make tools available that are accessible, low-cost, low capital investment, flexible and easy to use for artists and hobbyists, particularly those who might not otherwise have access to more sophisticated controllers that require more complicated tools.[3]
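For the servo side, the control logic usually reduces to mapping a desired panel angle to a pulse width that a hobby servo understands. A hedged sketch in C++ (typical hobby-servo numbers are assumed here; check your servo's datasheet, and on an Arduino the Servo library does this mapping for you):

```cpp
#include <algorithm>

// Map a desired panel tilt (degrees from horizontal) to a hobby-servo pulse
// width. Many analog servos expect roughly 1000-2000 us pulses across a
// ~180 degree range; those endpoints are assumptions, not a standard.
int tilt_to_pulse_us(double tilt_deg) {
    double clamped = std::min(180.0, std::max(0.0, tilt_deg));  // stay in range
    return static_cast<int>(1000.0 + clamped * (1000.0 / 180.0) + 0.5);
}
```

A tracking loop would then periodically compute the sun's position, call a function like this for each axis, and hand the pulse width to the PWM output driving the servo.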
Because Ethernet is so popular, I suggest designing layer 2 as Ethernet.
For the physical layer, wired or wireless, there are many datasheets, specification samples, and design guides that you can find at http://developer.intel.com and http://software.intel.com, at both the chip level and the driver level. Enjoy.