I am developing software in Java that will run a TFTP client created as part of the software; it will connect to an external TFTP server.
My question is: should I show the TFTP client as an actor inside the system boundary using the Actor symbol, since this is something the system itself will do, or should I leave it outside the system boundary?
Actors are, by definition, the things outside your system with which your system interacts.
So an actor inside your system boundary would be a contradictio in terminis (a contradiction in terms).
It depends on your abstraction level. It is quite possible to define actors inside the system under consideration, which you represent as the main boundary. Usually, introducing such actors inside means that you have sub-systems with which those actors interact. In your case, the TFTP client is such a sub-system.
I am reading the AUTOSAR documents, and they say that the communication of software components with each other, and of SWCs with the BSW, is realized via the RTE layer. However, I did not find any information about how the interactions between basic software modules are implemented, for example the interaction between the ECU Abstraction Layer and the MCAL layer.
According to AUTOSAR, there are three types of interfaces: "AUTOSAR interface", "standardized AUTOSAR interface", and "standardized interface".
The "AUTOSAR interface" is used to define the ports of SWCs. The "standardized AUTOSAR interface" type is used to define service ports for SWCs. Both of these are used for SWCs and both model the communication mechanism using ports, but the "standardized interface" does NOT use the same technique as the "AUTOSAR interface".
In other words, the "standardized interface" contract probably does NOT use ports to define the communication between BSW modules. If there are no ports for BSW modules, then how do they communicate? Are the communication mechanisms modeled in the modules themselves?
1. Do the BSW Modules have ports?
2. Does the RTE define the communication between the MCAL layer and the ECU Abstraction Layer? If not, in which part of the code should it be implemented?
1.) Ports are the preferred way of defining interfaces at the software component level. The "standardized interfaces" are SwC ports that give access to underlying BSW module functionality. They are categorized as service ports, and their components are typically bound to an ECU.
Vendors exposing BSW module functionality to other SwCs through their own custom SwC breaks the layered architecture and is considered bad practice. (Thank you Uwe for pointing that out in the comments.)
2.) At the module level, interfaces are header files containing function declarations. Inter-BSW-module communication is not the RTE's task by design. Also, the boot sequence is bottom-up; the RTE may not even be started yet when the BSW modules need to communicate.
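To make this concrete, here is a rough sketch of what a direct inter-module call below the RTE looks like. CanIf_Transmit and Can_Write follow the AUTOSAR naming scheme, but the type definitions and the mapping are simplified placeholders (normally they come from Std_Types.h, ComStack_Types.h, and Can_GeneralTypes.h):

```cpp
// Simplified placeholder types -- real projects get these from the standard headers.
typedef unsigned char  Std_ReturnType;
typedef unsigned short PduIdType;
typedef unsigned short Can_HwHandleType;
struct PduInfoType { unsigned char* SduDataPtr; unsigned short SduLength; };
struct Can_PduType { unsigned int id; unsigned char length; unsigned char* sdu; };

// Can.h (MCAL): the driver exposes its API as a plain function declaration.
Std_ReturnType Can_Write(Can_HwHandleType Hth, const Can_PduType* PduInfo);

// CanIf.c (ECU Abstraction Layer): calls the driver directly -- no RTE involved.
Std_ReturnType CanIf_Transmit(PduIdType TxPduId, const PduInfoType* PduInfoPtr)
{
    Can_PduType canPdu;                        // PDU mapping simplified for illustration
    canPdu.id     = TxPduId;
    canPdu.length = (unsigned char)PduInfoPtr->SduLength;
    canPdu.sdu    = PduInfoPtr->SduDataPtr;
    return Can_Write(/*Hth=*/0u, &canPdu);     // ordinary C function call, no ports
}
```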
Think of ports as a logical feature. Depending on their type, you have some freedom in modelling them. When you generate the RTE, these ports are realized (in other words, boiled down) into a concrete solution: writing to a buffer, or something as simple as a C function call. This abstraction even enables you to re-allocate your SwCs to another AUTOSAR ECU and the ports will still work.
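As a hedged sketch of what that looks like from the SwC side: Rte_Write_<port>_<element> is the RTE API naming scheme, but the header, port, and element names below ("SwcSpeedSensor", "PpSpeed", "speed") are made up for illustration.

```cpp
#include "Rte_SwcSpeedSensor.h"   // generated application header (name assumed)

// Runnable writing to a sender-receiver port; whether this ends up as a buffer
// write or a plain function call is decided by the RTE generator, not this code.
void SpeedSensor_MainFunction(void)
{
    uint16 speed = 0u;                     // value acquisition omitted
    (void)Rte_Write_PpSpeed_speed(speed);  // logical port access via the RTE
}
```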
You do not have this much freedom with BSW modules; their communication is more concrete, using direct C function calls. BSW modules can optionally be modelled in AUTOSAR, both their interfaces and their internal behaviour, but unlike with SwCs, such models do not have a direct effect on the implementation. They are used instead for documentation, writing tests, or compliance checks.
I am building a cross-platform application consisting of several modules that exchange data with each other.
This means my question relates to both Windows and Linux.
Q: When using TCP/IP for inter-process communication, is there any special optimization performed by the OS when both endpoints are on localhost?
I've heard somewhere that in this case Windows can bypass the network drivers and just use shared memory. I have no idea about the source or proof of this statement, but the idea of switching off unused machinery sounds logical.
Is that true, and if so, where can I read the details?
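For reference, this is the kind of setup I mean: a plain TCP connection to 127.0.0.1 between two of my processes (POSIX sockets shown; the Winsock version is analogous after WSAStartup, and the port number is just an example):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5000);                      // example port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);    // loopback, never leaves the host
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
        const char msg[] = "ping";
        send(fd, msg, sizeof(msg), 0);                  // the other module listens locally
    }
    close(fd);
    return 0;
}
```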
I tried to generate code for a device after changing the implementation type to SharedLibrary and renaming the entry point to Device_Name.so. I was able to generate and build, but main.cpp kept a main function rather than a make_component function to be called by ComponentHost. The device constructors deal with arguments that ComponentHost doesn't handle, like the Device Manager IOR. I believe this functionality extension implies changing the source code of ComponentHost. Is it part of the REDHAWK roadmap? Any comments on how I can make it work?
So are you trying to use the shared process space within a node to communicate between devices and services? I don't believe there is tooling specifically for this yet, but I think there is a way to do it. Just to be clear, I haven't tried this, but based on the test used by the bulkio ports to determine local vs. remote transport usage, I think this will work.
If you look at the persona pattern, you'll see that there is a Programmable Device which is responsible for loading Persona Devices. Most of the details of this aren't necessary for what you're trying to do, but the pattern should be helpful. To accomplish communication between Devices using shared memory, you could generate a Programmable Device whose sole purpose is to forward parameters from the DeviceManager to the Personas. The Personas would then act as your Devices normally do, just launched in the same process space as one another.
The code generators for the Programmable and Persona Devices are not yet integrated into the IDE, so you'll have to create a new Device project in Eclipse for each Device you want (so that you'll have the spd files). Be sure to add the appropriate AggregateDevice interface to your Devices. This lets the framework know that multiple devices can technically be considered one entity, but you can also communicate with each individually. Also make sure that the Programmable is an Executable Device, since it needs to launch the Persona Devices. Then, from the command line, you can run redhawk-codegen --pgdevice </path/to/programmable/spd> to generate a Programmable Device, and redhawk-codegen --persona </path/to/persona/spd> to generate your Persona Device(s).
Once all of this is done, you'll notice that the main function for your Programmable launches the Device the way you described in your question. However, the main function for the Personas has code to launch the Device either as a standalone Device or simply as an object in its own thread.
This should allow the bulkio ports of the Programmable and Personas to communicate with each other via shared memory. Obviously this will break down if you attempt to push data out of the process, at least until someone adds interprocess shared memory via something like shm. Not sure if that's on the road map, but it would certainly be neat.
Update: It appears that interprocess shared memory was added in RH 2.1.2, so you should be able to communicate between collocated Devices, Services, and Components using that mechanism. This renders the above unnecessary, but I'm going to leave it for earlier versions of RH.
Let me know if you have any questions!
As of RH 2.1.2, the default behavior for Devices/Services/Components whose user code uses redhawk::buffer as the data memory allocator and the stream API for interaction with the bulkio port is to use a shared-memory transport between C++ entities running in different processes.
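Roughly, the user-code pattern looks like the sketch below (this fragment would live inside a component's serviceFunction; the port member name "dataFloat_out" and the stream ID are just examples based on typical generated code, not taken from your project):

```cpp
// Push data via redhawk::buffer + the bulkio stream API so the framework can
// choose the transport (shared memory for local process-to-process crossings).
bulkio::OutFloatStream stream = dataFloat_out->getStream("example_stream");
if (!stream) {
    stream = dataFloat_out->createStream("example_stream");
}

redhawk::buffer<float> data(1024);           // allocator the shared-memory transport can map
for (size_t i = 0; i < data.size(); ++i) {
    data[i] = 0.0f;                          // fill with real samples here
}
stream.write(data, bulkio::time::utils::now());
```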
It is said that applications in smartphone operating systems work inside a secure sandbox. What sort of security does it provide?
If it is secure, and applications do not have read/write permissions to another app's data, then how does inter-process communication come into the picture? If processes (applications) can communicate via inter-process communication methods, how does that get past the sandbox?
Your question is very broad, but since it is tagged with blackberry-10 I assume you are interested in this platform specifically.
At the lowest level, the QNX kernel is essentially an interprocess communications system. The kernel doesn't actually do anything other than pass messages from one process to another. So that is how IPC is managed at the low level.
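For a flavor of what that looks like at the native level, here's a rough sketch of the client side of QNX message passing (error handling and server discovery, e.g. via name_open, are omitted; the pid/chid values are assumed to come from the caller):

```cpp
#include <sys/neutrino.h>
#include <sys/types.h>

// The kernel copies the message directly between the two processes'
// address spaces and blocks the client until the server replies.
int send_hello(pid_t server_pid, int server_chid)
{
    int coid = ConnectAttach(0 /* local node */, server_pid, server_chid,
                             _NTO_SIDE_CHANNEL, 0);
    if (coid == -1) {
        return -1;
    }
    char msg[]   = "hello";
    char reply[16];
    int status = MsgSend(coid, msg, sizeof(msg), reply, sizeof(reply));
    ConnectDetach(coid);
    return status;
}
```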
At the highest level, and in the most mundane implementation, BB10 uses shared files. If the device owner grants an application permission, it may read from and write to a set of directories that are shared with other applications given the same permission.
For direct IPC, BB10 has what BlackBerry calls the invocation framework. This allows processes to share not only data, but also executable code and user interface elements. The user sees this as the sharing system, and cards.
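For example, from a Cascades C++ app an invocation looks roughly like this; the target and action strings below are the commonly documented browser ones, written from memory, so double-check them against the invocation docs:

```cpp
#include <bb/system/InvokeManager>
#include <bb/system/InvokeRequest>
#include <QUrl>

// Ask the invocation framework to open a URL in another (sandboxed) app.
// The framework brokers the request, so neither app touches the other's files.
void openInBrowser()
{
    bb::system::InvokeManager manager;
    bb::system::InvokeRequest request;
    request.setTarget("sys.browser");           // target app identifier (assumed)
    request.setAction("bb.action.OPEN");
    request.setUri(QUrl("http://example.com"));
    manager.invoke(request);
}
```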
I am working on an SBC (single-board computer) running Red Hat Linux, which is being used to get information from many routers and process packets.
Can this Gateway be called an "Embedded Linux based" product?
I would call it embedded if its purpose has been shifted from a general-purpose computer to a device or appliance that has a specific task. Further, customization for that specific task should probably remove, disable, or mitigate some other general-purpose functionality (e.g. running it headless, or disabling/removing X and general-use tools/services, in order to further enable the device to do its job).
Basically, look at the device and discern whether it appears as "a computer running Linux" or "an appliance which completes a specific task USING Linux."
See this question regarding which systems can be described as embedded. In industry terms, I would say that a headless Linux device is said to be "embedded".
I don't agree that it needs to be headless to be considered embedded. For example, mobile phones are considered embedded, but they've got video, I/O, and whatnot. Personally, I think there is no clear line for "embedded". But generally, when you are working with minimal resources (e.g. minimal RAM) and performing very specific functions (i.e. not general purpose), then it's more embedded.
Short answer: Yes
From Wikipedia:
An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can do many different tasks depending on programming. Embedded systems control many of the common devices in use today.
While I think your device isn't embedded in another device, I see that it has few functions and is not a general-purpose computer.
Also, as Shashikiran says, SBCs are usually called embedded systems.
PC/104 drove me crazy some years ago...