I'm working on designing a simulator for our control system. There are several functional areas within the control system that need to be simulated; for example, we need to simulate an electrical system and, let's say, an HVAC system. The simulations will be on the simple side.
In real life, an HVAC system can't run if there isn't any power, so for the simulation, the HVAC model needs to know if there is power.
One approach we are looking at is to execute one functional area model on one virtual machine. Basically one VM per model.
Going back to my example, the electrical model will need to send some status to the HVAC model. Is there a way to set up a shared memory area that all the VMs can access in order to share data?
I've read about virtual disks and internal networking between the VMs, but I'm not sure these will work for us because of the amount of data that may need to be shared and the latency involved with those mechanisms. Ideally, I think we need some type of global shared memory that each VM can read and write. Obviously, the shared memory will need some type of mutex mechanism.
I'm not even sure whether this approach is right, versus, say, using threading and kicking off one thread per model.
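If you end up on the single-machine route, the pattern is straightforward: one shared status area guarded by a mutex, one worker per model. A minimal sketch in Python (the model functions and the integer status flag are invented for illustration):

```python
import multiprocessing as mp

def electrical_model(power_ok, lock):
    # The electrical model publishes its status under the lock.
    with lock:
        power_ok.value = 1  # 1 = power available, 0 = no power

def hvac_model(power_ok, lock, result):
    # The HVAC model reads the shared status before deciding whether it may run.
    with lock:
        result.value = power_ok.value

if __name__ == "__main__":
    lock = mp.Lock()
    power_ok = mp.Value("i", 0)   # the shared "global memory" cell
    result = mp.Value("i", -1)
    for target, args in [(electrical_model, (power_ok, lock)),
                         (hvac_model, (power_ok, lock, result))]:
        p = mp.Process(target=target, args=args)
        p.start()
        p.join()
    print(result.value)  # prints 1: HVAC sees that power is up
```

True cross-VM shared memory, by contrast, needs hypervisor support (e.g. QEMU/KVM's ivshmem device), which is worth evaluating before committing to one VM per model.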
In the latest requirements document for Execution Management in Adaptive AUTOSAR,
I am confused about Function Groups and Applications.
The document describes them as follows
(from https://www.autosar.org/fileadmin/user_upload/standards/adaptive/19-11/AUTOSAR_SWS_ExecutionManagement.pdf)
Function Group
A Function Group is a set of coherent Processes, which
need to be controlled consistently. Depending on the state of
the Function Group, Processes are started or terminated.
Processes can belong to more than one Function Group
State (but at exactly one Function Group).
"MachineState" is a Function Group with a predefined
name, which is mainly used to control Machine lifecycle and
Processes of platform level Applications. Other Function
Groups are sort of general purpose tools used (for example) to
control Processes of user level Applications.
Application
An implementation that resolves a set of coherent functional requirements and is the result of functional development. An Application is the unit of delivery for Machine specific configuration and integration.
These concepts are really confusing to me.
Based on my understanding, I categorized applications this way:
system applications
user applications - multiple applications in function groups
user applications - other applications
but I'm not sure whether this is right. Please help me fully understand
Function Groups and Applications so that I can categorize applications correctly.
In the end, it is all about functions in the vehicle, which are distributed over multiple ECUs, including supporting functions and ECUs in between.
In order to save battery power, not all ECUs need to be running all the time. But it could be that some ECUs implement multiple functions.
e.g.:
SRR (Short Range Radar, well nowadays also more than 150m range!) ECUs in the rear do BSD (BlindSpotDetection), LCA (LaneChangeAssist), RCTA (RearCrossTrafficAssist Alert/Brake), Freespace Detection, Collision Avoidance, OccupantSafeExit, Object Detection Output for 360° Vision and Fusion e.g. for AutomatedDriving ...
A CentralECU, like a 360° Vision/Fusion ECU for Automated Driving, which has several sensor ECUs connected, like a front LRR (Long Range Radar), front cameras, and front and rear SRRs. If this ECU is also the gateway for the sensor ECUs, and the rear SRRs are used for OSE, then the front ECUs can be shut down, and the CentralECU can at least shut down several high-performance cores/processors, except the ones needed for routing between the vehicle and the rear SRRs. After the driver/passengers leave the car, those can then shut down shortly after too.
For the above scenarios, other gateways might also be involved. The SRRs and the CentralECU also need to be aware that other ECUs are off and no longer providing data like vehicle speed, yaw rate, steering angle etc., and that those messages are therefore no longer transmitted on the networks. Rx/Tx deadline monitoring should be disabled for functions that are turned off. Functions that are shut down in these SRRs or CentralECUs should also stop transmitting their functional messages.
That is why you can have, in one Application, multiple functions grouped into one or more Function Groups: the ECU can be involved in several of them.
AUTOSAR Adaptive applies at least to the CentralECU; the SRR ECUs are usually low-cost ECUs that just run AUTOSAR Classic. But there is similar handling there through Partial Networking, Virtual Function Clusters and Network Management.
I am currently working on a web-based MMORPG game and would like to set up an auto-scaling strategy based on Docker and DigitalOcean droplets.
However, I am wondering how I could manage to do so:
My game server would have to be split across different Docker containers, BUT every game-server instance should act as if it were one gigantic game server. That means that every modification happening in one (a character moving) should also be mirrored in every other game server.
I am trying to get this to work (at least conceptually) but can't find a way to synchronize all my instances properly. Should I use a master that only broadcasts events, or is there an alternative?
I was wondering the same thing about my MySQL database: since every game server would have to read from and write to the db, how would I make it scale properly as the game gets bigger and bigger? The best solution I could think of was to keep the database on a single, very powerful server.
I understand that this would be easy if the game servers didn't have to "share" their state, but the sharing is there primarily so that I can scale quickly in case of a sudden spike of activity.
(There will be different "global" game servers like A, B, C..., but each of those should be, behind the scenes, composed of 1-X Docker containers running the "real" game server, so that the "global" game server is only a concept.)
The problem you state is too generic and it's difficult to give a concrete response. However, let me be reckless and give you some general-purpose scaling advice:
Remove counters from databases. Instead of auto-incremented IDs as primary keys, try to assign random UUIDs.
Replace data that must be validated against a central point with data that is self-contained. For example, for authentication, instead of checking user credentials in a DB, use JSON Web Tokens that can be verified by any host.
Use techniques such as consistent hashing to spread the load without needing load balancers. Of course, use hash functions that distribute well, to avoid/minimize collisions.
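The consistent-hashing advice can be sketched in a few lines. This is a generic illustration (the node names and virtual-node count are arbitrary), not tied to any particular product:

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring: adding or removing a node only remaps ~1/N of the keys."""
    def __init__(self, nodes=(), vnodes=100):
        self._ring = []        # sorted list of (hash, node)
        self._vnodes = vnodes  # virtual nodes smooth the key distribution
        for n in nodes:
            self.add(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self._vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def get(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]
```

Every host that runs this code with the same node list routes a given key (say, a player ID) to the same server, with no central balancer involved.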
The above advice is basically about changing the design to migrate from stateful to stateless in as many aspects as you can. If you still need stateful parts, try to guess which entities have the best chance of sharing stateful data and allocate them on the same (or a nearby) server. For example, if there are cities in your game, try to allocate users who are in the same city to the same server, since they are more likely to interact with each other (and share stateful data) than users in different cities.
Of course, if a city is too big and very crowded, you will probably need to partition it across more servers to avoid overloading any single one.
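The self-contained-token idea above can be sketched with the standard library alone. This is an HS256-style signature in the JWT shape; the secret and claim names are made up, and a real deployment would use a vetted JWT library:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # hypothetical key held by every game server

def _b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload):
    # header.payload.signature, as in an HS256 JWT
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64(sig)}"

def verify(token):
    # Any host holding the key verifies locally: no call to a central auth DB.
    header, body, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        return None
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))
```

Any game server can now accept a player's token without a round trip to a central credentials store, which removes one shared bottleneck.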
Your question is too broad and a general scaling problem as others have mentioned. It'd have been helpful if you'd stated more clearly what your system requirements are.
If it has to be real-time, you can choose Redis as your main DB, but then you'd need replicas (for replication), and you would not be able to scale automatically as you go*, since Redis doesn't support that. I assume that's not a good option when you're working with games (sudden spikes are probable).
*There seem to be some managed solutions; you'd need to check them out.
If it can be near real-time, using Apache Kafka can prove to be useful.
There's also a highly scalable DB which has everything you need called CockroachDB (I'm a contributor, yay!) but you need to run tests to see if it meets your latency requirements.
Overall, going with one very powerful server is a bad choice, since there's a ceiling, and scaling vertically will cost you more.
There's a great benefit in scaling horizontally such an application. I'll try to write down some ideas.
Option 1 (stateful):
When planning stateful applications, you need to take care of synchronising the state (via pub/sub, network broadcasting or something else) and be aware that every synchronisation takes time to occur (unless you block on each operation). If this is OK for you, let's go ahead.
Let's say you have 80k operations per second on your whole cluster. That means every process needs to synchronise 80k state changes per second. This will be your bottleneck. Handling 80k changes per second is quite a big challenge for a Node.js application (because it's single-threaded and therefore blocking).
In the end you'll need to provision for precisely the maximum number of changes you want to be able to sync, and run some tests with different programming languages. The overhead of synchronising adds to the general workload of the application. It could be beneficial to use a multithreaded language like C, Java/Scala or Go.
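To make that bottleneck concrete, here is a toy in-process model of Option 1: a bus broadcasts every state change and every replica applies every change, which is why the cluster-wide write rate, not the per-server rate, is the limit. All names are invented for illustration:

```python
class Replica:
    """One game-server process: applies every broadcast change to its local copy."""
    def __init__(self):
        self.state = {}

    def apply(self, change):
        entity, field, value = change
        self.state.setdefault(entity, {})[field] = value

class Bus:
    """Stands in for the real sync channel (pub/sub, UDP broadcast, ...)."""
    def __init__(self, replicas):
        self.replicas = replicas

    def publish(self, change):
        # Every replica must process every change: at C changes/sec
        # cluster-wide, EACH process performs C applies/sec.
        for r in self.replicas:
            r.apply(change)
```

Adding more replicas adds read capacity but does nothing for this per-process apply rate, which is exactly the provisioning number the paragraph above says you must measure.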
Option 2 (stateful with routing):
In some cases it's feasible to implement a different kind of scaling.
If, for example, your application can be broken down into areas of a map, you could start with one app replica holding the full map; as it scales up, it splits the map proportionally.
You'll need to implement some routing between the application servers, for example: to change the state in city A of world B => call server xyz. This could be done automatically, but downscaling will be a challenge.
This solution requires more care and knowledge about the application and is not as fault-tolerant as option 1, but it could scale almost endlessly.
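The routing step of Option 2 ("city A of world B => call server xyz") amounts to a shared routing table. A minimal sketch, with invented names:

```python
class ZoneRouter:
    """Maps each area of the map to the server that owns its state."""
    def __init__(self):
        self.table = {}  # (world, city) -> server address

    def assign(self, world, city, server):
        self.table[(world, city)] = server

    def route(self, world, city):
        # All state changes for this area must go to its owning server.
        server = self.table.get((world, city))
        if server is None:
            raise KeyError(f"no server owns {city!r} in {world!r}")
        return server
```

Scaling up means reassigning some (world, city) entries to a new server and migrating their state; downscaling is the reverse, which is why the text above calls it the harder direction.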
Option 3 (stateless):
Move the state to some other system and solve the problem there (Redis, etcd, ...).
I have a working Java EE application that runs a multitude of threads. I want to move these threads off my application and simply have access to their data (Strings and ints).
How should I achieve this if I want to, say, call a method on my web application that accesses a thread's data on a different server/JVM?
Say you wanted to separate these layers (perhaps to put them on different machines, or for scalability): you would separate the data layer from your presentation layer, put them in different JVMs, and make the data layer provide a service to the presentation layer. How you do this depends on your preferred transport, e.g. a web service, RMI, JMS, raw TCP, or shared memory.
In any case, one JVM can only access the data of another process through services that process exposes. (Except in the case of shared memory, but it's not easy to get that working unless your data model is very simple.)
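To illustrate "exposing a service" with the simplest transport listed (raw TCP): the sketch below is in Python for brevity rather than Java, and the state and one-shot JSON protocol are invented. The data-layer process answers each connection with a snapshot of state it owns; the presentation layer, in another process, can only see that data through this service:

```python
import json
import socket
import threading

# State owned by the data-layer process (hypothetical example values).
DATA = {"requests_served": 42, "status": "OK"}

def serve(sock):
    # Data layer: answer each connection with a JSON snapshot of its state.
    while True:
        conn, _ = sock.accept()
        with conn:
            conn.sendall(json.dumps(DATA).encode())

def fetch(port):
    # Presentation layer (another process/JVM): read state via the service.
    with socket.create_connection(("127.0.0.1", port)) as c:
        return json.loads(c.recv(4096).decode())

def start_server():
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))  # OS picks a free port
    sock.listen()
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock.getsockname()[1]
```

In a Java shop the idiomatic equivalents would be RMI, JMS or a web service, but the shape is the same: the owning process publishes an interface, and everyone else calls it.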
accesses a thread's data
In Java, almost all data lives on the heap, which is shared by many threads. Very little of it is scoped entirely to an individual thread. So the very idea of moving a thread and only "its data" does not make much sense.
And it's not just Java, pretty much any language with shared mutable state will face the same issues.
Of course your application can have a concept of thread-owned data, but that would be application logic and not part of java itself or its Thread class.
In the API of Xenomai's POSIX skin, I find the following list of services:
Clocks and timers services.
Condition variables services.
Interruptions management services.
Message queues services.
Mutex services.
Semaphores services.
Shared memory services.
Signals services.
Threads management services.
Thread cancellation.
Threads scheduling services.
Thread creation attributes.
Thread-specific data.
I can't see anything regarding file handling or socket programming, so I am guessing that perhaps files and sockets are not to be dealt with in a real-time context? Is that guess wrong?
Please guide.
Xenomai and its ancestor, RTAI, both take control of your scheduler, handling the Linux kernel itself as a non-real-time thread.
They provide many services, most of which, as you can see, are related to threads and synchronization, and which do NOT call the Linux API (in kernel space) or system calls (in user space). As you know, real-time is all about "guaranteeing deadlines", and calling into Linux violates that (because Linux doesn't guarantee anything).
Since drivers are also important in real-time systems, they have implemented the Real-Time Driver Model, or RTDM, which helps with both implementing and using device drivers in a real-time context.
File handling in the kernel is strongly frowned upon. If you are talking about user-space real-time applications, then you can access any driver that is implemented in RTDM. If you don't find one for file handling or sockets, then no, you can't use them. Note that even a printf uses Linux system calls and is forbidden.
Note that if you do use them, nothing breaks; you just lose your real-time guarantees! I personally do use files for logging, but only call them in case of an error that means real-time is already ruined anyway.
I don't know about Xenomai, but at least in RTAI, if you call a Linux system call, then you get a warning like "RTAI: LXRT changed mode: syscall ..." in your kernel logs.
Real-time is a property of the ENTIRE SYSTEM. To achieve real-time behaviour, all of the system's components (including hardware, operating system, drivers, libraries, and applications) must be designed with the requirements of real-time systems in mind. Such components (like an RTOS) can be used to build a real-time system, but their use doesn't automatically make the final system real-time. In fact, if even one component of your system doesn't meet real-time requirements, your entire system won't be real-time!
Real-time systems usually have resources significantly exceeding the average requirements of their real-time tasks. Unconsumed resources can be used for useful but non-critical background tasks, such as logging, monitoring of the system state, statistics collection and analysis, etc. Applications that perform these tasks can be designed as non-real-time components running on top of the real-time components. This design is safe as long as you are sure that all components participating in real-time tasks meet real-time requirements. With this in mind, the direct answer to your question:
It depends entirely on the application. In general, all code that is not used in handling real-time tasks CAN BE written as non-real-time. All code that is used in handling real-time tasks MUST BE written as real-time.
What Xenomai does is isolate non-real-time Linux, and the activities used for handling non-real-time tasks, in a special container that runs on top of the RTOS kernel, in parallel with the RTOS-based real-time tasks. To build a real-time system on Xenomai, your application should rely only on the Xenomai API and on other libraries and APIs that are known and proven to be real-time. All background activities that are useful but completely uncritical can be written as ordinary Linux applications.
Systems and services such as storage and networking are usually not used in real-time tasks, because the commonly used hardware is very non-deterministic and thus doesn't fit the real-time concept well. It is hard to say a priori how long it will take to send five packets over the network or write a file to the HDD. Because of this, real-time interfaces to such systems are not commonplace. But again, the application dictates which real-time services it needs. I can imagine real-time tasks that involve storage and network actions. For such tasks, the designer is forced to find system components that provide real-time storage and network services. As you can see, Xenomai is not a candidate.
I'm a developer of an MMO game, and at my company we're currently facing some scalability issues which, I think, can be resolved with proper clustering of the game world.
I don't really want to reinvent the wheel that's why I think Linux Virtual Server could be a good choice especially with some Level 7 load balancing technique.
I'm currently looking at ktcpvs as a load balancing solution and wonder if it's a proper choice.
The main idea is to have a number of zones ("locations" in terms of my game) running on dedicated servers. When a player decides to go to a specific location, the load balancer decides which zone server will actually serve that player (that's why I need a Level 7 load balancer).
What do you folks think about all said above?
Update: I posted the same question to LVS users mailing list http://marc.info/?l=linux-virtual-server&m=124976265209769&w=2
Update: I also started the similar topic on the gamedev.net forum http://www.gamedev.net/community/forums/topic.asp?topic_id=544386
In order to address your question, we need to understand whether you need volume or response time; it is difficult to get both at the same time.
Layer 7 load balancing is data-based, application-level balancing: the data content of the network packet determines which end-point it is routed to. You can achieve volume (more users) by implementing routing at the application level, the service level, or the kernel level.
Scalability - I assume you are running out of memory, CPU resources and network bandwidth.
Application level - your application logic receives an application packet and routes accordingly.
Service level - your system framework (a front-end service of some kind) receives the packet and performs the routing through a module (think of a custom Apache module, or even network driver modules - like writing a network filter).
Kernel level - Performs routing at network packet level.
The closer you move to the metal, the better your response time will be. I suggest using a dedicated Linux server up front to perform the routing - go native, not virtual. Use multiple or teamed network adapters for the WAN and a dedicated adapter for each end-point (one or more for the WAN, one for each connected app server).
If response time is important, you need a kernel/supervisor-state solution; it will save you a few context switches. But be aware that you need to limit hops at all costs, that you could be better served by fewer, larger machines, and that your scalability will always be limited. There is a risk in using KTCPVS: it is quite old and not actively updated. If you judge that it works for you, great; otherwise consider writing something akin to a network filter, as long as it runs in system state.
If volume is important but response time is secondary, implement a custom-built high-speed socket switch in C++ running in problem/user state. It is the easiest to maintain and will offer the best scalability.
You will need to build some prototypes to figure out what suits your needs best.
Final thoughts -
Before doing any of the above first ensure that you have optimized your game design. You may know most of this, I list it here for the benefit of all.
(a) Messages should fit comfortably within one network packet - less than 1500 bytes for most home routers.
(b) Try to put the routing logic in your game client instead of your servers. A simple download to the client of a small table with zones and IP addresses would allow you to forego all of the above.
(c) Try to limit zone visibility for the clients: they should know about their own zone and adjacent zones only (if you implement point (b) above).
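Points (b) and (c) together amount to shipping the clients a small routing table. A sketch, with invented zone names and addresses:

```python
# Hypothetical zone table a client downloads once from a master server (point b).
ZONE_TABLE = {
    "forest": "10.0.0.11:7777",
    "castle": "10.0.0.12:7777",
    "harbor": "10.0.0.13:7777",
}

# Point (c): a client only needs its own zone and its neighbours.
ADJACENT = {
    "forest": ["castle"],
    "castle": ["forest", "harbor"],
    "harbor": ["castle"],
}

def servers_for(zone):
    """Endpoints the client should connect to: its zone plus adjacent zones."""
    zones = [zone] + ADJACENT.get(zone, [])
    return {z: ZONE_TABLE[z] for z in zones}
```

With this, the client picks its own zone server directly and no server-side Level 7 balancer sits on the hot path.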
Hope this helps, sorry I cannot be more specific regarding KTCPVS.
You haven't specified where the bottleneck is. Network Traffic? Disk IO? CPU Cycles?
Assuming you mean a layer 7 load balancer and don't have enough CPU power, I think LVS is not the optimal choice. I have done web-server load balancing with LVS, which works straightforwardly and isn't exactly complicated.
But I think load balancing an MMORPG this way would need a considerable amount of additional code in LVS; it might be easier to do the load balancing within a multithreaded application distributed over some multicore server. But that isn't fully scalable; it only gets you to around 16 cores without a prohibitive cost increase.
The biggest issue in something like this is what happens when players are near a boundary. Obviously they need to be able to see and interact with each other, but they're on separate servers. So you need some pretty fancy inter-server communication, sometimes just duplicating messages to both servers. It can get even more complicated when someone is near a "corner", and then you have to deal with 4 servers!
The book Massively Multiplayer Game Development has a chapter on "The Pitfalls of Shared Server Boundaries" which covers this issue in detail.
I hadn't heard of Linux Virtual Server before now, so I don't understand how it fits. I think your actual server application needs to support this game-specific load balancing, rather than trying to run a cluster and assuming it will automatically know how to split up your application (it won't). If I were you, I would write the server program to handle its own piece of land, have it connect to the pieces of land around it, and then design a server-to-server protocol for passing these messages ("here comes a player, I'm going to start telling you about him!", "make sure to tell me about messages near our boundary", "okay, the player is out of my territory and into yours, here's his detailed data", etc.). I think it's a bit more complicated than just running a different flavor of Linux and assuming you'll get automatic load balancing.
Why move the distribution logic to the load balancer? It's a component that isn't free and can break. Your clients seem quite aware of which zone they're in, so they could very well connect to zone<n>.example.com; you'd then handle load balancing at the DNS level.