I am not quite sure I completely understand the Entity Component System approach, and that is probably one of the reasons this question came up. I am still struggling with the OOP mindset!
I am trying to create a data structure similar to a network; for instance, something like a circuit:
So, for instance, entity 4 is connected to 1, 1 to 2, and so on. So far I have understood how to create the components, but I can't understand how the connectivity information should be stored. Should an entity point to another entity? I have also imagined it would be better practice to have a component holding the connectivity information, but in that case, once again, what should it store? Ideally the entities themselves, right? How do I do that?
You should always refer to other entities by their id, and don't store pointers to them between system runs.
In Bevy, an entity's id is stored in the struct Entity. Note its documentation: "Lightweight unique ID of an entity."
Also, if this network is something unique to your game (e.g. there is at most one instance in the whole game), then its connectivity data can be stored in a resource. You get the most benefit from the Entity-Component part of ECS when your entities and components are created and processed in large quantities.
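For example, a minimal sketch of that resource approach (assuming a recent Bevy where resources derive Resource; the CircuitGraph name and its layout are invented for illustration):

use std::collections::HashMap;
use bevy::prelude::*;

// Adjacency for the single, game-wide circuit, keyed by entity id.
#[derive(Resource, Default)]
struct CircuitGraph {
    edges: HashMap<Entity, Vec<Entity>>,
}

fn setup(mut commands: Commands) {
    // Insert the graph once; systems can then take Res<CircuitGraph>
    // or ResMut<CircuitGraph> to read or modify connectivity.
    commands.insert_resource(CircuitGraph::default());
}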
I cannot say anything more about your particular use case because I don't know how many networks there would be or how you plan to use and interact with them. ECS is a data-driven design pattern, so the architecture depends on how much data exists, which properties this data has, and how it is used.
Example of multiple networks
If you have multiple networks, you can store them like this:
#[derive(Component)]
struct NetworkTag; // Any entity with this tag is a network.

// Spawn a network into the world like this:
let network_id: Entity = world.spawn()
    .insert(NetworkTag)
    .id();
And you can store the parts of an electric network like this:
#[derive(Component)]
struct PlusConnections(Vec<Entity>);

#[derive(Component)]
struct NetworkId(Entity);

// Spawn a part into the world like this:
let part_id = world.spawn()
    .insert(NetworkId(network_id))
    .insert(PlusConnections(other_part_ids))
    .id();
After spawning the parts like that, you would only know the connections on the plus side, but you can fill in the negative side in a separate system, for example:
use std::collections::HashMap;

#[derive(Component)]
struct NegConnections(Vec<Entity>);

fn fill_negatives_system(
    query_el_parts: Query<(Entity, &PlusConnections)>,
    query_el_parts_wo_neg: Query<Entity, Without<NegConnections>>,
    mut commands: Commands,
) {
    // Invert the plus-side adjacency: for each positive connection,
    // remember which entity points at it from the negative side.
    let mut positive_to_negative: HashMap<Entity, Vec<Entity>> = HashMap::new();
    for (neg, pc) in query_el_parts.iter() {
        for &positive in pc.0.iter() {
            positive_to_negative.entry(positive).or_default().push(neg);
        }
    }
    for (pos, negatives) in positive_to_negative {
        // Only fill in parts that don't have NegConnections yet.
        if query_el_parts_wo_neg.get(pos).is_ok() {
            commands.entity(pos).insert(NegConnections(negatives));
        }
    }
    // At the next tick, all nodes in the electric grid will know their
    // negative and positive connections and which networks they belong to.
}
I am working on a multi-threaded module and need to implement a map of maps in Go - map[outer]map[inner]*some_struct. The outer key (map[outer]) will be accessed by multiple threads (goroutines) to add keys to the inner map. I am not sure whether multiple threads can concurrently add keys to the inner map for a common outer key - map[outer]. Is that thread safe, and is sync.Map a better option?
Also, the outer keys - map[outer] - and the total number of outer keys are only known at runtime, so I can't define locks beforehand.
To better understand the problem statement, take the example of adding information about different cities, grouped by state. Each thread represents a city. To add info about a city, a thread first needs to check the outer key - the state, map[state] - and then simply add the info: map[state][city] = &some_struct{x:y, y:z}.
I have read a few articles and found that sync.Map is suitable for concurrent map operations and that these operations are performed atomically. But the documentation mentions one of its use cases as: when multiple goroutines read, write, and overwrite entries for disjoint sets of keys.
It would be helpful if someone could suggest a thread-safe approach for this problem statement.
You must think in OO terms here:
What do you want to represent as a map of maps?
A map of state and city makes some sense. However, what kind of operations do you want to perform?
Concurrent writes and reads? Why?
Do you want to iterate over all cities? Do you need to delete cities/states?
Imagine the following interface:
type DB interface {
    Exists(state, city string) bool
    Get(state, city string) *some_struct
    Set(state, city string, data *some_struct)
    Delete(state, city string)
    DeleteState(state string)
    ForeachCitiesInState(state string, fn func(city string, data *some_struct) bool)
    Foreach(fn func(state, city…))
}
With this interface we can consider:
1. Use a struct with a Mutex and a map of maps to control access on each read/write/delete.
2. Same as 1, but with a read-write mutex (sync.RWMutex) if you have more reads than writes.
3. If you don't need to loop over the cities of a particular state, perhaps you can create a single map with a composite key like state:city to simplify.
4. If you load the data from another place at a constant time interval, perhaps you should use atomic.Value to store the big map; an update is then just a substitution with a more recent map.
5. Perhaps you can combine several RW locks, for instance one for the state level and another for the city level. You can split it like this:
type states struct {
    sync.RWMutex
    m map[string]*state // keyed by state name
}

type state struct {
    sync.RWMutex
    m map[byte]*cities // keyed by the first letter of the city name
}

type cities struct {
    sync.RWMutex
    m map[string]*some_struct // keyed by city name
}
Ideas:
Define the interface.
Define (or measure) the real usage scenario.
Write benchmarks.
Be careful about returning a pointer to data: the caller can change your internal state. Consider returning a copy or an interface.
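As a minimal sketch of options 1-2 above (a single RWMutex guarding the map of maps; the names are illustrative, not a production design):

package db

import "sync"

type some_struct struct{ X, Y string }

// DB guards a map of maps with one RWMutex.
type DB struct {
    mu sync.RWMutex
    m  map[string]map[string]*some_struct // state -> city -> data
}

func New() *DB {
    return &DB{m: make(map[string]map[string]*some_struct)}
}

func (db *DB) Set(state, city string, data *some_struct) {
    db.mu.Lock()
    defer db.mu.Unlock()
    inner, ok := db.m[state]
    if !ok {
        // Creating the inner map happens under the same lock,
        // so concurrent writers to one state cannot race.
        inner = make(map[string]*some_struct)
        db.m[state] = inner
    }
    inner[city] = data
}

func (db *DB) Get(state, city string) *some_struct {
    db.mu.RLock()
    defer db.mu.RUnlock()
    return db.m[state][city] // nil if the state or city is absent
}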
I have a struct which needs to be Send + Sync:
struct Core {
machines_by_id: DashMap<String, StateMachineManager>,
}
In my current implementation, StateMachineManager looks like this:
struct StateMachineManager {
protected: Arc<Mutex<StateMachines>>,
}
This works fine as long as StateMachines is Send. However, it doesn't need to be, and the Send requirement complicates the implementation where I'd like to use Rc.
Performance wise, there's no reason all the StateMachines can't live on one thread forever, so in theory there's no reason they need to be Send - they could be created on a thread dedicated to them and live there until no longer needed.
I know I can do this with channels, but that would seemingly mean recreating the API of StateMachines as messages sent back and forth over that channel. How might I avoid doing that, and tell Rust that all I need to do is serialize access to the thread they all live on?
Here is a minimal example (where I have added the Send + Sync bounds to Shepmaster's comment, which omitted them) -- DashMap is a thread-safe map.
What I ended up doing here was using channels, but I was able to find a way to avoid needing to recreate the API of StateMachines.
The technique is to use a channel to pass a closure to a dedicated thread that the StateMachines instances live on. The closure accepts a &mut StateMachines argument, and it sends its response back down a different channel that lives on the stack while the access is happening.
Here is a playground implementing the key part:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e07972b928d0f0b7680b1e5a988dae84
The details of instantiating the machines on their dedicated thread are elided.
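In outline, the technique looks like this (a simplified sketch with made-up state, not the playground code verbatim):

use std::sync::mpsc;
use std::thread;

// Stand-in for the real state; imagine it holds Rc-based data and is
// therefore !Send. It is created on, and never leaves, its own thread.
struct StateMachines {
    count: usize,
}

// A job is a closure that gets mutable access to the state.
type Job = Box<dyn FnOnce(&mut StateMachines) + Send>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();

    let worker = thread::spawn(move || {
        let mut machines = StateMachines { count: 0 };
        // Run each incoming closure against the thread-local state.
        for job in rx {
            job(&mut machines);
        }
    });

    // Callers send closures instead of re-modelling the whole API as
    // message enums; a per-call response channel lives on the stack.
    let (resp_tx, resp_rx) = mpsc::channel();
    tx.send(Box::new(move |m: &mut StateMachines| {
        m.count += 1;
        resp_tx.send(m.count).unwrap();
    }))
    .unwrap();
    println!("count is now {}", resp_rx.recv().unwrap());

    drop(tx); // closing the channel lets the worker thread exit
    worker.join().unwrap();
}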
I'm reading the Accounting Pattern and am quite curious about implementing it in CQRS.
I think AccountingTransaction is an aggregate root, as it protects the invariant:
No money leaks; the amount should be transferred from one account to another.
public class AccountingTransaction {
    private String sequence;
    private AccountId from;
    private AccountId to;
    private MonetaryAmount quantity;
    private DateTime whenCharged;

    public AccountingTransaction(...) {
        raise(new AccountingEntryBookedEvent(sequence, from, quantity.negate(), ...));
        raise(new AccountingEntryBookedEvent(sequence, to, quantity, ...));
    }
}
When the AccountingTransaction is added to its repository, it publishes several AccountingEntryBookedEvents, which are used to update the balances of the corresponding accounts on the query side.
One aggregate root updated per db transaction, eventual consistency; so far so good.
But what if some accounts apply transfer constraints, such as not being able to transfer more than the current balance? I can use the query side to get the account's balance, but I'm worried that data from the query side is stale.
public class TransferApplication {
public void transfer(...) {
AccountReadModel from = accountQuery.findBy(fromId);
AccountReadModel to = accountQuery.findBy(toId);
if (from.balance() > quantity) {
//create txn
}
}
}
Should I model the account on the command side? Then I have to update at least three aggregate roots per db transaction (the from/to accounts and the account txn).
public class TransferApplication {
    public void transfer(...) {
        Account from = accountRepository.findBy(fromId);
        Account to = accountRepository.findBy(toId);
        Transaction txn = new Transaction(from, to, quantity);
        // unit of work locks and updates all three aggregates
    }
}
public class AccountingTransaction {
    public AccountingTransaction(...) {
        if (from.permit(quantity)) {
            from.debit(quantity);
            to.credit(quantity);
            raise(new TransactionCreatedEvent(sequence, from, to, quantity, ...));
        }
    }
}
There are some use cases that will not allow for eventual consistency. CQRS is fine, but the data may need to be 100% consistent; CQRS does not imply/require eventual consistency.
However, the transactional/domain model store will be consistent, and the balance in that store will be consistent, as it represents the current state. In that case the transaction should fail anyway, irrespective of an inconsistent query side. This will be a somewhat weird user experience, though, so a 100% consistent approach may be better.
I remember bits of this; however, M. Fowler uses a different meaning of "event" compared to a domain event. He uses the "wrong" term, as we can recognize a command in his "event" definition. So basically he is speaking about commands, while a domain event is something that has happened and can never change.
It is possible that I didn't fully understand what Fowler was referring to, but I would model things differently, staying as close to the Domain as possible. We can't simply extract a pattern that can always be applied to any financial app; the minor details may change a concept's meaning.
In the OP's example, I'd say that we can have a non-explicit "transaction": we need one account debited with an amount and another credited with the same amount. The easiest way, methinks, is to implement it via a saga:
Debit_Account_A -> Account_A_Debited -> Credit_Account_B -> Account_B_Credited = transaction completed.
This should happen in a few ms, at most seconds, and that would be enough to update a read model. Humans and browsers are slower than a few seconds, and a user knows to hit F5 or to wait a few minutes/hours. I wouldn't worry much about the read model's accuracy.
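A rough sketch of that saga wiring (every type and handler name here is hypothetical, and a real saga would also need correlation and compensation logic; the initial debit command is assumed to have been sent already):

import java.math.BigDecimal;

interface CommandBus { void send(Object command); }

record CreditAccount(String accountId, BigDecimal amount, String transferId) {}
record MarkTransferCompleted(String transferId) {}
record AccountDebited(String toAccountId, BigDecimal amount, String transferId) {}
record AccountCredited(String transferId) {}

class TransferSaga {
    private final CommandBus bus;

    TransferSaga(CommandBus bus) { this.bus = bus; }

    // Account_A_Debited -> issue Credit_Account_B.
    void on(AccountDebited e) {
        bus.send(new CreditAccount(e.toAccountId(), e.amount(), e.transferId()));
    }

    // Account_B_Credited -> the transaction is completed; read models
    // can now safely show the new balances.
    void on(AccountCredited e) {
        bus.send(new MarkTransferCompleted(e.transferId()));
    }
}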
If the transaction is explicit, i.e. the Domain has a Transaction notion and the business really stores transactions, that's a whole different story. But even in that case, the Transaction would probably be defined by a number of account ids, some amounts, and maybe a completed flag. However, at this point it is pointless to continue, because it really depends on the Domain's definition and use cases.
In the end, my solution was to have Transaction as the domain model,
and to project transactions to AccountBalance. I implemented a special projection which makes sure all the data is consistent before publishing the actual event.
Just two words: "Event Sourcing" with the Reservation Pattern.
And maybe, but not always, you may also need the Saga pattern.
The description of ConcurrentBag on MSDN is not clear:
Bags are useful for storing objects when ordering doesn't matter, and unlike sets, bags support duplicates. ConcurrentBag is a thread-safe bag implementation, optimized for scenarios where the same thread will be both producing and consuming data stored in the bag.
My question is: is this thread safe, and is it good practice to use a ConcurrentBag with Parallel.ForEach?
For Instance:
private List<XYZ> MyMethod(List<MyData> myData)
{
var data = new ConcurrentBag<XYZ>();
Parallel.ForEach(myData, item =>
{
// Some data manipulation
data.Add(new XYZ(/* constructor parameters */));
});
return data.ToList();
}
This way I don't have to use synchronization locking inside Parallel.ForEach, as I would with a regular List.
Thanks a lot.
That looks fine to me. The way you're using it is thread-safe.
If you could return an IEnumerable<XYZ>, it could be made more efficient by not copying to a List<T> when you're done.
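For instance, a sketch of that variant (using the same hypothetical XYZ and MyData types as the question):

private IEnumerable<XYZ> MyMethod(List<MyData> myData)
{
    var data = new ConcurrentBag<XYZ>();
    Parallel.ForEach(myData, item =>
    {
        // Some data manipulation
        data.Add(new XYZ(/* constructor parameters */));
    });
    // ConcurrentBag<T> implements IEnumerable<T>, so no copy is needed.
    return data;
}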
ConcurrentBag and Parallel.ForEach look fine to me. But if you use these types in scenarios with a large volume of multi-user access, this implementation can drive CPU usage up to levels that can crash your web server. Furthermore, this implementation starts N tasks (threads) to execute the iterations, so be careful when choosing these classes and implementations. I was recently in this situation and had to extract a memory dump to analyze what was happening inside my web application's core. So be careful: ConcurrentBag is thread-safe, but in web scenarios it is not necessarily the best way.
What I need is a system on which I can define simple objects (say, a "Server" that can have "Operating System" and "Version" fields, alongside other metadata such as IP, MAC address, etc.).
I'd like to be able to request objects from the system in a safe way, such that if I define, for example, that a "Server" can be used by 3 clients concurrently, then when 4 clients ask for a Server at the same time, one will have to wait until a server is freed.
Furthermore, I need to be able to perform requests in some sort of query-style, for example allocate(type=System, os='Linux', version=2.6).
Language doesn't matter too much, but Python is an advantage.
I've been googling for something like this for the past few days and came up with nothing; maybe there's a better name for this kind of system that I'm not aware of.
Any recommendations?
Thanks!
Resource limitation in concurrent applications - like your "up to 3 clients" example - is typically implemented by using semaphores (or more precisely, counting semaphores).
You usually initialize a semaphore with some "count" - that's the maximum number of concurrent accesses to that resource - and you decrement this counter every time a client starts using that resource and increment it when a client finishes using it. The implementation of semaphores guarantees the "increment" and "decrement" operations will be atomic.
You can read more about semaphores on Wikipedia. I'm not too familiar with Python but I think these two links can help:
Python Threading Library
Semaphore Objects in Python.
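For instance, here is a minimal sketch using Python's threading module (the Server class and its fields are invented for illustration):

import threading

class Server:
    def __init__(self, os, version, max_clients=3):
        self.os = os
        self.version = version
        # Counting semaphore: at most max_clients concurrent users.
        self._slots = threading.Semaphore(max_clients)

    def acquire(self):
        # Blocks while max_clients clients already hold the server.
        self._slots.acquire()

    def release(self):
        self._slots.release()

server = Server(os="Linux", version="2.6")
server.acquire()   # use the server...
server.release()   # ...then free a slot for waiting clients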
For Java there is a very good standard library that has this functionality:
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html
Just create a class with a Semaphore field:

import java.util.concurrent.Semaphore;

class Server {
    private static final int MAX_AVAILABLE = 100;
    // Static, because the factory method below is static as well.
    private static final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    // ... put all other fields (OS, version) here ...

    private Server() {}

    // add a factory method
    public static Server getServer() throws InterruptedException {
        available.acquire();
        // ... do the rest here
    }
}
Edit:
If you want things to be more "configurable", look into using AOP techniques, i.e. create a semaphore-based synchronization aspect.
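For instance, a sketch of such an aspect in AspectJ annotation style (the @Limited annotation and the fixed count of 3 are made up; this only illustrates the shape):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.concurrent.Semaphore;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Limited {}

@Aspect
public class ThrottlingAspect {
    // At most 3 concurrent executions of any @Limited method.
    private final Semaphore slots = new Semaphore(3, true);

    @Around("@annotation(Limited)")
    public Object throttle(ProceedingJoinPoint pjp) throws Throwable {
        slots.acquire();
        try {
            return pjp.proceed();
        } finally {
            slots.release();
        }
    }
}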
Edit:
If you want a completely standalone system, I guess you can try to use any modern DB system that supports row-level locking (e.g. PostgreSQL) as the semaphore. For example, create 3 rows, each representing a server; select a free one with locking (e.g. "select * from server where is_used = 'N' for update"), mark the selected server as used, unmark it at the end, and commit the transaction.