What does the term "Leaky Abstraction" mean? (Please explain with examples. I often have a hard time grokking a mere theory.)
Here's a meatspace example:
Automobiles have abstractions for drivers. In its purest form, there's a steering wheel, accelerator and brake. This abstraction hides a lot of detail about what's under the hood: engine, cams, timing belt, spark plugs, radiator, etc.
The neat thing about this abstraction is that we can replace parts of the implementation with improved parts without retraining the user. Let's say we replace the distributor cap with electronic ignition, and we replace the fixed cam with a variable cam. These changes improve performance but the user still steers with the wheel and uses the pedals to start and stop.
It's actually quite remarkable... a 16 year old or an 80 year old can operate this complicated piece of machinery without really knowing much about how it works inside!
But there are leaks. The transmission is a small leak: in an automatic transmission you can feel the car lose power for a moment as it switches gears, whereas with a CVT you feel smooth torque all the way up.
There are bigger leaks, too. If you rev the engine too fast, you may do damage to it. If the engine block is too cold, the car may not start or it may have poor performance. And if you crank the radio, headlights, and AC all at the same time, you'll see your gas mileage go down.
It simply means that your abstraction exposes some of the implementation details, or that you need to be aware of the implementation details when using the abstraction. The term is attributed to Joel Spolsky, circa 2002. See the Wikipedia article for more information.
A classic example is network libraries that allow you to treat remote files as local. The developer using this abstraction must be aware that network problems may cause this to fail in ways that local files do not. You then need to develop code specifically to handle errors that fall outside the abstraction the network library provides.
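For instance, here is a minimal Java sketch of the kind of error handling that leaks through; the path and the wrapper class are illustrative assumptions, not any particular library's API:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class RemoteFiles {
    // The abstraction makes a file on a network share look local,
    // but the failure modes differ: timeouts and disconnects,
    // not just "file not found".
    static byte[] read(Path path) throws IOException {
        try {
            return Files.readAllBytes(path); // e.g. a path on a mounted network share
        } catch (IOException e) {
            // The leak: callers must plan for network-style failures
            // that plain local files would rarely produce.
            throw new IOException("network-backed read failed, consider retrying", e);
        }
    }
}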
Wikipedia has a pretty good definition for this
A leaky abstraction refers to any implemented abstraction, intended to reduce (or hide) complexity, where the underlying details are not completely hidden.
Or, in other words: in software, it's when you can observe implementation details of a feature via limitations or side effects in the program.
A quick example would be C# / VB.NET closures and their inability to capture ref / out parameters. The reason they cannot be captured is an implementation detail of how the variable-lifting process occurs. That's not to say there is a better way of doing it, though.
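Java has a closely related leak, if a non-C# illustration helps: lambdas may only capture (effectively) final locals, because capture is implemented by copying the value into the lambda object. A minimal sketch:

import java.util.function.Supplier;

class CaptureLeak {
    public static void main(String[] args) {
        int counter = 0;
        // Supplier<Integer> bad = () -> counter++; // won't compile:
        // "local variables referenced from a lambda expression must be
        // final or effectively final" -- the copy-based capture leaks through.
        final int snapshot = counter;
        Supplier<Integer> ok = () -> snapshot; // capturing a copy is fine
        System.out.println(ok.get()); // prints 0
    }
}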
Here's an example familiar to .NET developers: ASP.NET's Page class attempts to hide the details of HTTP operations, particularly the management of form data, so that developers don't have to deal with posted values (because it automatically maps form values to server controls).
But if you wander beyond the most basic usage scenarios, the Page abstraction begins to leak, and it becomes hard to work with pages unless you understand the class's implementation details.
One common example is dynamically adding controls to a page - the value of dynamically-added controls won't be mapped for you unless you add them at just the right time: before the underlying engine maps the incoming form values to the appropriate controls. When you have to learn that, the abstraction has leaked.
Well, in a way it is a purely theoretical thing, though not unimportant.
We use abstractions to make things easier to comprehend. I may operate on a string class in some language to hide the fact that I'm dealing with an ordered set of characters that are individual items. I deal with an ordered set of characters to hide the fact that I'm dealing with numbers. I deal with numbers to hide the fact that I'm dealing with 1s and 0s.
A leaky abstraction is one that doesn't hide the details it's meant to hide. If I call string.Length on a 5-character string in Java or .NET I could get any answer from 5 to 10, because of implementation details: what those languages call characters are really UTF-16 code units, which can represent either a whole character or half of one (one half of a surrogate pair). The abstraction has leaked. Not leaking it, though, would either require more storage space (to store the real length) or change the operation from O(1) to O(n) (to work out what the real length is). If I care about the real answer (often you don't), I need to work from knowledge of what is really going on.
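A quick Java demonstration of that leak (codePointCount is the O(n) route to the "real" answer mentioned above):

public class LengthLeak {
    public static void main(String[] args) {
        // U+1D11E MUSICAL SYMBOL G CLEF: one character, two UTF-16 code units
        String clef = "\uD834\uDD1E";
        System.out.println(clef.length());                          // 2
        System.out.println(clef.codePointCount(0, clef.length()));  // 1
    }
}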
More debatable are cases where a method or property lets you get at the inner workings: whether these are abstraction leaks or well-defined ways to move to a lower level of abstraction can sometimes be a matter people disagree on.
I'll continue in the vein of giving examples by using RPC.
In the ideal world of RPC, a remote procedure call should look like a local procedure call (or so the story goes). It should be completely transparent to the programmer, such that when they call SomeObject.someFunction() they have no idea whether SomeObject (or just someFunction, for that matter) is locally stored and executed or remotely stored and executed. The theory goes that this makes programming simpler.
The reality is different because there's a HUGE difference between making a local function call (even if you're using the world's slowest interpreted language) and:
calling through a proxy object
serializing your parameters
making a network connection (if not already established)
transmitting the data to the remote proxy
having the remote proxy restore the data and call the remote function on your behalf
serializing the return value(s)
transmitting the return values to the local proxy
reassembling the serialized data
returning the response from the remote function
In time alone that's about three orders (or more!) of magnitude difference. Those three+ orders of magnitude are going to make a huge difference in performance that will make your abstraction of a procedure call leak rather obviously the first time you mistakenly treat an RPC as a real function call. Further a real function call, barring serious problems in your code, will have very few failure points outside of implementation bugs. An RPC call has all of the following possible problems that will get slathered on as failure cases over and above what you'd expect from a regular local call:
you might not be able to instantiate your local proxy
you might not be able to instantiate your remote proxy
the proxies may not be able to connect
the parameters you send may not make it intact or at all
the return value the remote sends may not make it intact or at all
So now your RPC call which is "just like a local function call" has a whole buttload of extra failure conditions you don't have to contend with when doing local function calls. The abstraction has leaked again, even harder.
In the end RPC is a bad abstraction because it leaks like a sieve at every level -- both when it succeeds and when it fails.
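For what it's worth, Java's RMI admits this leak right in the type system: every remote method must be declared to throw RemoteException, so "just like a local call" falls apart in the signature itself. A minimal sketch (the interface and method names are illustrative):

import java.rmi.Remote;
import java.rmi.RemoteException;

// All the extra failure conditions listed above surface as one
// mandatory checked exception on every remote method.
interface SomeService extends Remote {
    String someFunction(String arg) throws RemoteException;
}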
What is abstraction?
Abstraction is a way of simplifying the world.
It means you don't have to worry about what is actually happening under the hood.
Example: Flying a 737/747 is "abstracted" away
Planes are complicated systems involving jet engines, oxygen systems, electrical systems, landing gear systems, etc.
...but the pilot doesn't have to worry about any of that... it's all "abstracted away". The only thing a pilot needs to focus on is the yoke (i.e. the steering wheel of the plane).
Push the yoke left to go left, right to go right, and so on.
...that is, in an ideal world. In reality, flying a plane is much more complicated, because many details ARE NOT "abstracted away".
Leaky Abstractions in 737 Example
Pilots in reality have to worry about a LOT of things: wind speed, thrust, angle of attack, fuel, altitude, weather problems, angle of descent. Computers can help the pilot with these tasks, but not everything is automated or simplified... not everything is "abstracted away".
e.g. If the pilot pulls up too hard on the column, the plane will obey, but then the plane might stall, and that's really bad.
In other words, it is not enough for the pilot to simply control the steering wheel without knowing anything else... no... the pilot must know about the underlying risks and limitations of the plane before flying one... the pilot must know how the plane works, and how the plane flies; the pilot must know implementation details... that pulling up too hard will lead to a stall, or that landing too steeply will destroy the plane, etc.
Those things are not abstracted away. A lot of things are abstracted, but not everything. The abstraction is "leaky".
Leaky Abstractions in Code
...it's the same thing in your code. If you don't know the underlying implementation details, then you're gonna have problems.
ORMs abstract a lot of the hassle in dealing with database queries, but if you've ever done something like:
User.all.each do |user|
  puts user.name # let's print each user's name
end
Then you will realise that's a nice way to kill your app. You need to know that calling User.all with 25 million users is going to spike your memory usage and cause problems. You need to know some underlying details. The abstraction is leaky.
An example from the Django ORM's many-to-many documentation:
Notice in the Sample API Usage that you need to .save() the base Article object a1 before you can add Publication objects to the many-to-many attribute. And notice that updating the many-to-many attribute saves to the underlying database immediately, whereas updating a singular attribute is not reflected in the db until the .save() is called.
The abstraction is that we are working with an object graph, where single-valued attributes and multi-valued attributes are just attributes. But the implementation as a relational-database-backed data store leaks... the integrity system of the RDBMS appears through the thin veneer of an object interface.
At some point, guided by your scale and execution, you will need to get familiar with the implementation details of your abstraction framework in order to understand why it behaves the way it does.
For example, consider this SQL query:
SELECT id, first_name, last_name, age, subject FROM student_details;
And its alternative:
SELECT * FROM student_details;
Now, they do look like logically equivalent solutions, but the performance of the first one is better due to the individual column-name specification.
It's a trivial example, but eventually it comes back to Joel Spolsky's quote:
All non-trivial abstractions, to some degree, are leaky.
At some point, when you reach a certain scale in your operation, you will want to optimize the way your DB (SQL) works. To do that, you will need to know the way relational databases work. It was abstracted away from you in the beginning, but it's leaky. You need to learn it at some point.
Assume we have the following code in a library:
Object[] fetchDeviceColorAndModel(String serialNumberOfDevice)
{
    // fetch the device color and model from the DB,
    // then pack them positionally: index 0 = color, index 1 = model
    return new Object[] { color, model };
}
When the consumer calls the API, they get an Object[]. The consumer has to understand that the first field of the object array holds the color value and the second field holds the model value. Here the abstraction has leaked from the library into the consumer's code.
One solution is to return an object that encapsulates the model and color of the device. The consumer can call that object to get the model and color values.
DeviceColorAndModel fetchDeviceColorAndModel(String serialNumberOfTheDevice)
{
    // fetch the device color and model from the DB
    return new DeviceColorAndModel(color, model);
}
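The wrapper type itself isn't shown above; a minimal sketch might look like this (the field names are an assumption):

final class DeviceColorAndModel {
    private final String color;
    private final String model;

    DeviceColorAndModel(String color, String model) {
        this.color = color;
        this.model = model;
    }

    String getColor() { return color; }
    String getModel() { return model; }
}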
Leaky abstraction is all about encapsulating state. A very simple example of a leaky abstraction:

$currentTime = new DateTime();
$bankAccount1->setLastRefresh($currentTime);
$bankAccount2->setLastRefresh($currentTime);
// mutating the shared instance silently changes the
// "last refresh" of BOTH accounts:
$currentTime->setTimestamp($aTimestamp);

class BankAccount
{
    // ...
    public function setLastRefresh(DateTime $lastRefresh)
    {
        $this->lastRefresh = $lastRefresh; // stores a reference to the caller's mutable object
    }
}
and the right way (not a leaky abstraction):

class BankAccount
{
    // ...
    public function setLastRefresh(DateTime $lastRefresh)
    {
        $this->lastRefresh = clone $lastRefresh; // defensive copy: later mutation by the caller can't leak in
    }
}
Related
When you receive arguments in string format from the UI inside your controller, do you pass the strings directly to the application service (or to the command)?
Or do you create value objects from the strings inside the controller?
new Command(new SomeId("id"), Weight.create("80 kg"), new Date())
or
new Command("id", "80 kg", new Date())
new Command("id", "80", "kg", new Date())
Maybe it is not important, but it bothers me.
The question is: should we couple value objects from the domain to (inside) the controller?
Imagine you don't have the web between your application layer and the presentation layer (like an Android activity or Swing): would you push the use of value objects into the UI?
Another thing: do you serialize/deserialize value objects into/from strings like this?
Weight weight = Weight.create("80 kg");
weight.getValue().equals(80.0);
weight.getUnit().equals(Unit.KILOGRAMS);
weight.toString().equals("80 kg");
In the case of passing strings into commands, I would rather pass "80 kg" instead of "80" and "kg".
Sorry if the question is not relevant, or seems silly.
Thank you.
UPDATE
I came across this post while searching for information about a totally different topic: Value Objects in CQRS - where to use
They seem to prefer primitives or DTOs, and keep VOs inside the domain.
I've also taken a look at V. Vernon's book (Implementing DDD); it talks about exactly that (-_-) in chapter 14 (p. 522).
I've noticed he's using commands without any DTOs.
someCommand.setId("id");
someCommand.setWeightValue("80");
someCommand.setWeightUnit("kg");
someCommand.setOtherWeight("80 kg");
someCommand.setDate("17/03/2015 17:28:35");
someCommand.setUserName("...");
someCommand.setUserAttribute("...");
someCommand.setUserOtherAttributePartA("...");
someCommand.setUserOtherAttributePartB("...");
It is the command object that would be mapped by the controller. Value object initialization would appear in the command handler method, and the value objects would throw something in case of a bad value (self-validation on initialization).
I think I'm starting to be less bothered, but some other opinions would be welcomed.
As an introduction, this is highly opinionated and I'm sure everyone has different ideas on how it should work. But my endeavor here is to outline a strategy with some good reasons behind it so you can make your own evaluation.
Pass Strings or Parse?
My personal preference here is to parse everything in the Controller and send down the results to the Service. There are two main phases to this approach, each of which can spit back error conditions:
1. Attempt to Parse
When a bunch of strings come in from the UI, I think it makes sense to attempt to interpret them immediately. For easy targets like ints and bools, these conversions are trivial and model binders for many web frameworks handle them automatically.
For more complex objects like custom classes, it still makes sense to handle it in this location so that all parsing occurs in the same location. If you're in a framework which provides model binding, much of this parsing is probably done automatically; if not - or you're assembling a more complex object to be sent to a service - you can do it manually in the Controller.
Failure Condition
When parsing fails ("hello" is entered in an int field or 7 is entered for a bool) it's pretty easy to send feedback to the user before you even have to call the service.
2. Validate and Commit
Even though parsing has succeeded, there's still the necessity to validate that the entry is legitimate and then commit it. I prefer to handle validation in the service level immediately prior to committing. This leaves the Controller responsible for parsing and makes it very clear in the code that validation is occurring for every piece of data that gets committed.
In doing this, we can eliminate an ancillary responsibility from the Service layer. There's no need to make it parse objects - its single purpose is to commit information.
Failure Condition
When validation fails (someone enters an address on the moon, or enters a date of birth 300 years in the past), the failure should be reported back up to the caller (Controller, in this case). While the user probably makes no distinction between failure to parse and failure to validate, it's an important difference for the software.
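To make the two phases concrete, here is a minimal Java sketch of this division of labor; every class and rule in it is illustrative, not prescribed above:

import java.time.LocalDate;
import java.time.format.DateTimeParseException;

// Controller: phase 1, parse only, then delegate.
class PersonController {
    private final PersonService service;

    PersonController(PersonService service) { this.service = service; }

    String register(String name, String birthDateRaw) {
        LocalDate birthDate;
        try {
            birthDate = LocalDate.parse(birthDateRaw); // attempt to parse
        } catch (DateTimeParseException e) {
            return "Please enter a date like 1990-05-31"; // parse failure feedback
        }
        try {
            service.register(name, birthDate); // phase 2 happens in the service
            return "OK";
        } catch (IllegalArgumentException e) {
            return e.getMessage(); // validation failure reported back to the caller
        }
    }
}

// Service: validate immediately prior to committing; no parsing here.
class PersonService {
    void register(String name, LocalDate birthDate) {
        if (birthDate.isBefore(LocalDate.now().minusYears(150))) {
            throw new IllegalArgumentException("birth date is unreasonably far in the past");
        }
        // ...commit to the data store...
    }
}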
Push Value Objects to UI?
I would accept parsed objects as far up the stack as possible, every time. If you can have someone else's framework handle that bit of transformation, why not do it? Additionally, the closer to the UI that the objects can live, the easier it is to give good, quick feedback to the user about what they're doing.
A Note on Coupling
Overall, pushing objects up the stack does result in greater coupling. However, writing software for a particular domain does involve being tightly coupled to that domain, whatever it is. If a few more components are tightly coupled to some concepts that are ubiquitous throughout the domain - or at least to the API touchpoints of the service being called - I don't see any real reduction in architectural integrity or flexibility occurring.
Parse One Big String or Components?
In general, it tends to be easiest to just pass the entire string into the Parse() method to get sorted through. Take your example of "80 kg":
"80 kg" and "120 lbs" may both be valid weight inputs
If you're passing in strings to a Parse() method, it's probably doing some fairly heavy lifting anyway. Expecting it to split a string based on a space is not overbearing.
It's far easier to call Weight.create(inputString) than it is to split inputString by " ", then call Weight.create(split[0], split[1]).
It's easier to maintain a single-string-input Parse() function as well. If some new requirement comes in that the Weight class has to support pounds and ounces, a new valid input may be "120 lbs 6 oz". If you're splitting up the input, you now need four arguments. Whereas if it's entirely encapsulated within the Parse() logic, there's no burden to outside consumers. This makes the code more extensible and flexible.
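A minimal sketch of that single-string approach in Java; Weight.create, the unit names, and the input formats come from the question, while everything else is an assumption:

import java.util.Locale;

enum Unit { KILOGRAMS, POUNDS }

final class Weight {
    private final double value;
    private final Unit unit;

    private Weight(double value, Unit unit) {
        this.value = value;
        this.unit = unit;
    }

    // Accepts inputs like "80 kg" or "120 lbs"; throws on anything else,
    // so consumers never deal with the splitting logic.
    static Weight create(String input) {
        String[] parts = input.trim().split("\\s+");
        if (parts.length != 2) {
            throw new IllegalArgumentException("expected '<number> <unit>', got: " + input);
        }
        double value = Double.parseDouble(parts[0]);
        Unit unit;
        switch (parts[1].toLowerCase(Locale.ROOT)) {
            case "kg":  unit = Unit.KILOGRAMS; break;
            case "lbs": unit = Unit.POUNDS;    break;
            default: throw new IllegalArgumentException("unknown unit: " + parts[1]);
        }
        return new Weight(value, unit);
    }

    double getValue() { return value; }
    Unit getUnit() { return unit; }
}

If pounds-and-ounces support arrives later, only create() changes; callers keep passing one string.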
The difference between a DTO and a VO is that a DTO has no behavior; it's a simple container designed to pass data around from component to component. Besides, you rarely need to compare two DTOs, and they are generally transient.
A Value Object can have behavior. Two VOs are compared by value rather than by reference, which means, for instance, that two Address value objects with the same data but that are different object instances will be equal. This is useful because they are generally persisted in one form or another and there are more occasions to compare them.
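In Java terms, the distinction might look like this (the Address fields are assumptions for illustration):

import java.util.Objects;

// DTO: a dumb container with no behavior; identity is irrelevant
// because it only ferries data between components.
class AddressDto {
    public String street;
    public String city;
}

// Value object: immutable and compared by value, so two instances
// holding the same data are considered equal.
final class Address {
    private final String street;
    private final String city;

    Address(String street, String city) {
        this.street = street;
        this.city = city;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Address)) return false;
        Address other = (Address) o;
        return street.equals(other.street) && city.equals(other.city);
    }

    @Override
    public int hashCode() {
        return Objects.hash(street, city);
    }
}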
It turns out that in a DDD application, VOs will be declared and used in your Domain layer more often than not, since they belong to the domain's Ubiquitous Language, and because of separation of concerns. They can sometimes be manipulated in the Application layer, but typically won't be sent between the UI layer and the Application layer. We use DTOs for that instead.
Of course, this is debatable and depends a lot on the layers you choose to build your application out of. There might be cases when crunching your layered architecture down to 2 layers will be beneficial, and when using business objects directly in the UI won't be that bad.
The CQS principle says every method should either be a command that performs an action, or a query that returns data to the caller, but not both.
It makes sense for a Query not to do anything else, because you don't expect a query to change the state.
But it looks harmless if a Command returns some extra piece of information. You can either use the returned value or ignore it. Why does the CQS principle require a Command not to return any values?
But it looks harmless if a Command returns some extra piece of information?
It often is. It sometimes isn't.
People can start confusing queries for commands, or calling commands more for the information they return than for their effect (along with "clever" ways of preventing that effect from being a real effect, which can be brittle).
It can lead to gaps in an interface. If the only use-case people can envision for a particular query is hand-in-hand with a particular command, it may seem pointless to add the pure form of the query (e.g. writing a stack with a Pop() but no Peek()) which can restrict the flexibility of the component in the face of future changes.
In a way, "looks harmless" is exactly what CQS is warning you about, in banning such constructs.
Now, that isn't to say you might not still decide that a particular command-query combination is useful enough to be worth it; but in weighing up the pros and cons of such a decision, CQS is always a voice arguing against it.
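To make the Pop()/Peek() point concrete, here is a minimal CQS-style stack in Java (the class itself is illustrative):

import java.util.ArrayDeque;
import java.util.Deque;

class CqsStack<T> {
    private final Deque<T> items = new ArrayDeque<>();

    // commands: mutate state, return nothing
    public void push(T item) { items.push(item); }
    public void pop() { items.pop(); }

    // queries: return data, mutate nothing
    public T peek() { return items.peek(); }
    public boolean isEmpty() { return items.isEmpty(); }
}

A caller that wants the classic popping behavior calls peek() and then pop(); CQS simply forces that combination to be explicit instead of fused into one method.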
From my understanding, one of the benefits of CQS is how well it works in distributed environments. Commands become their own isolated units that can be executed immediately, placed in a queue to be executed at a later date, executed by a remote event handler, etc.
If the commander interface were to specify a return type, you would greatly weaken the CQS pattern's ability to fit well within a distributed model.
The common approach to solving this problem (see, for instance, Mark Seemann's article on the subject) is to generate a unique ID, such as a GUID, for the event executed by the command handler. This is then persisted so the data can be identified at a later date.
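A minimal sketch of that approach in Java, with the caller supplying the ID up front so the command handler can stay void; all of the names here are illustrative:

import java.util.UUID;

class CreateOrderCommand {
    final UUID orderId; // generated by the caller, never returned by the handler
    final String product;

    CreateOrderCommand(UUID orderId, String product) {
        this.orderId = orderId;
        this.product = product;
    }
}

interface CommandHandler<C> {
    void handle(C command); // returns nothing, per CQS
}

class Client {
    void placeOrder(CommandHandler<CreateOrderCommand> handler) {
        UUID id = UUID.randomUUID(); // the caller already knows the ID...
        handler.handle(new CreateOrderCommand(id, "book"));
        // ...so it can later query for the outcome using that same ID.
    }
}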
I'm using CQRS on an air booking application. One use case is to help customers cancel their tickets. But before the actual cancellation, the customer wants to know the penalty.
The penalty is calculated based on air rules. Some of our providers can calculate the penalty through an exposed web service, while others don't (they publish a paper explaining the algorithm instead). So I define a domain service:
public interface AirTicketService {
    // ticket demand methods
    MonetaryAmount penalty(String ticketNumber);
    void cancel(String ticketNumber, MonetaryAmount penalty);
}
My question is which side(command/query) is responsible for invoking this domain service and returning result in a CQRS style application?
I want to use a command (CalculatePenaltyCommand); that way it's easy to reuse the domain model, but it's a little odd because this command does not modify state.
Or should I retrieve a read model of the ticket if this is a query? But then the same domain service is needed on both the command and query sides, which is odd too.
Is domain derivation a query?
There is no need to shoehorn everything in to the command-query pipeline. You could query this service independently from the UI without issuing a command or asking the read-model.
There is nothing wrong with satisfying a query using an existing model if it "fits" both the terminology and the structure of that model. No need to build up a separate read model for that purpose. It's not without risk, since the semantics and the context of the query should be closely tied to the model that is otherwise used for write purposes only. The risk I allude to is the fact that the write and read concerns could drift apart (and we're back at square one, i.e. the reason why people pick CQRS in the first place). So you must keep paying attention as new requirements come in.
Queries that fit this model really well are what I call "simulators", where you want to run a simulation using current state, e.g. to give feedback to an end user. On more than one occasion I've found that the simulation logic could be reused both as a feedback mechanism and as an execution (of a write operation/command) steering mechanism. The difference is in what we do with the outcome of the simulation. Again, this is not without risk and requires careful judgement.
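Concretely, a minimal sketch of querying the AirTicketService from the question outside the command pipeline; the controller shape is an assumption:

// Nothing changes state when previewing a penalty, so the UI can
// call the domain service directly instead of issuing a command.
class TicketController {
    private final AirTicketService airTicketService;

    TicketController(AirTicketService airTicketService) {
        this.airTicketService = airTicketService;
    }

    MonetaryAmount previewCancellationPenalty(String ticketNumber) {
        return airTicketService.penalty(ticketNumber); // pure query
    }

    void cancel(String ticketNumber, MonetaryAmount acceptedPenalty) {
        airTicketService.cancel(ticketNumber, acceptedPenalty); // the actual command
    }
}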
You could argue that a CalculatePenaltyCommand is not odd at all.
The user asks the system to do something - that's command enough.
You can even have a PenaltyCalculationRequested event in your domain, and it would feel right. Because, at some point, you may be interested in, let's say, unsure clients - ones that want to cancel tickets but change their mind every time, etc. The calculation may be performed asynchronously, too - you can provide the result (penalty cost) to the user in various ways afterwards...
Or, in some other way: on your "ticket booked" event, store the cancellation penalty, too. Then you can make that value accessible at any time, without the need to recompute it... But this may be wrong (?) because the penalty would largely depend on time, right (the later you cancel your ticket, the more you pay)?
If all this looks like over-complication, then I guess I agree with rmac's answer, too :)
I'm looking for a generic charting solution (ideally not a hosted one) that provides the following features:
Charting a tuple of values where the values are:
1) A service identifier (e.g. CPU usage)
2) A client identifier within that service (e.g. server IP)
3) A value
4) A timestamp with millisecond/second resolution.
Optional:
I'd also like to extend the concept of a client identifier: taking the above example further, I'd like to store statistics for each core separately, so another identifier would be Core 1/Core 2...
Now, to make sure I'm clearly stating my problem: I don't want a utility that collects these statistics. I'd like something that stores them, but this is not mandatory either; I can always store them in MySQL, or such.
What I'm looking for is something that takes values such as these and charts them nicely, in a multitude of ways (timelines, motion, and the usual ones [pie, bar..]). Essentially, a nice visualization package that allows me to make use of all this data. I'd be collecting data from multiple services and multiple applications, and the datapoints will be of varying resolution.

Some of the data will include multiple layers of nesting, some none. For example, CPU would go down to Server IP, CPU#, whereas memory would only be Server IP, but would include a different identifier, i.e. free/used/cached, as the "secondary" identifier. Something like average request latency might not have a secondary identifier at all, as in the case of ping. What I'm trying to get across is that having multiple layers of identifiers would be great. To add one final example of where multiple identifiers would be great: adding an extra identifier on top of ip/cpu#, namely, process name. I think the advantages of that are obvious.
For some applications, we might collect data at a very narrow scope, focusing on every aspect, in other cases, it might be a more general statistic. When stuff goes wrong, both come in useful, the first to quickly say "something just went wrong", and the second to say "why?".
Further, it would be a nice thing if the charting application threw out "bad" values; that is, if for some reason our monitoring program started to report 300% CPU usage on a single core for 10 seconds, it'd be nice if the charts themselves didn't reflect it in the long run. Some sort of smoothing, maybe? This could obviously be done at the data layer though, so it's not a requirement at all.
Finally, comparing two points in time, or comparing two different client identifiers of the same service etc without too much effort would be great.
I'm not partial to any specific language, although I'd prefer something in (one of the following) PHP, Python, C/C++, C#, as these are languages I'm familiar with. It doesn't have to be open source, it doesn't have to be a library, I'm open to using whatever fits my purpose the best.
More of a P.S than a requirement: I'd like to have pretty charts that are easy for non-technical people to understand, and act upon too (and like looking at!).
I'm open to clarifying, and, in advance, thanks for your time!
I am pretty sure that Protovis meets all your requirements, but it has a bit of a learning curve. You are meant to learn by example, and there are plenty of examples to work from. It makes some pretty nice graphs by default. Every value can be a function, so you can do things like filter out your "bad" values.
I'm looking to parallelize some code across a Beowulf cluster, such that the CPUs involved don't share address space. I want to parallelize a function call in the outer loop. The function calls do not have any "important" side effects (though they do use a random number generator, allocate memory, etc.).
I've looked at libs like MPI and the problem I see is that they seem to make it very non-trivial to pass complex object graphs between nodes. The input to my function is a this pointer that points to a very complex object graph. The return type of my function is another complex object graph.
At a language-agnostic level (I'm working in the D programming language, and I'm almost sure no canned solution is available here, but I'm willing to create one), is there a "typical" way that passing complex state across nodes is dealt with? Ideally, I want the details of how the state is copied to be completely abstracted away and for the calls to look almost like normal function calls. I don't care that copying this much state over a network isn't particularly efficient, as the level of parallelism in question is so coarse-grained that it probably won't matter.
Edit: If there is no easy way to pass complex state, then how is message passing typically used? It seems to me like anything involving copying data over a network requires coarse grained parallelism, yet coarse grained parallelism usually requires passing complex state so that a lot of work can be done in one work unit.
I do a fair bit of MPI programming, but I don't know of any typical way of passing complex state (as you describe it) between processes. Here's how I've been thinking about your problem; it probably matches your own thinking...
I surmise that your complex object graphs are represented, in memory, by blocks of data and pointers to other blocks of data -- a usual sort of implementation of a graph. How best can you move one of these COGs (to coin an abbreviation) from the address space of one process to the address space of another? To the extent that a pointer is a memory address, a pointer in one address space is no use in another address space, so you will have to translate it into some neutral form for transport (I think?).
To send a COG, therefore, it has to be put into some form from which the receiving process can build, in its own address space, a local version of the graph with the pointers pointing to local memory addresses. Do you ever write these COGs to file? If so, you already have a form in which one could be transported. I hate to suggest it, but you could even use files to communicate between processes -- and that might be easier to handle than the combination of D and MPI. Your choice!
If you don't have a file form for the COGs, can you easily represent them as adjacency matrices or lists? In other words, work out your own representation for transport?
I'll be very surprised (but pleased to learn) if you can pass a COG between processes without transforming it from pointer-based to some more static structure such as arrays or records.
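To illustrate the "neutral form" idea outside of D or MPI, here is a minimal Java sketch: serialization walks the pointer graph (cycles included) into a byte stream, and the receiver rebuilds an equivalent graph at its own local addresses. The Node type is purely illustrative:

import java.io.*;
import java.util.ArrayList;
import java.util.List;

// A tiny "complex object graph" node.
class Node implements Serializable {
    int payload;
    List<Node> neighbors = new ArrayList<>();
}

class Transport {
    // Sender side: pointer graph -> neutral byte form, safe to ship
    // over MPI messages, sockets, or even files.
    static byte[] flatten(Node root) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(root);
        }
        return bytes.toByteArray();
    }

    // Receiver side: bytes -> a fresh graph whose pointers are
    // valid in the *receiving* process's address space.
    static Node rebuild(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Node) in.readObject();
        }
    }
}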
Edit, in response to OP's edit: MPI does provide easy ways to pass complex state around, provided that the complex state is represented as values, not pointers. You can pass complex state around in either the intrinsic or customised MPI datatypes; as one of the other answers shows, these are flexible and capable. If your program does not keep the complex state in a form that MPI custom datatypes can handle, you'll have to write functions to pack/unpack it to a message-friendly representation. If you can do that, then your message calls will look (for most purposes) like function calls.
As to the issues surrounding complex state and the graininess of parallelism, I'm not sure I quite follow you. We (include yourself in this sweeping generalisation if you want, or not) typically resort to MPI programming because we can't get enough performance out of a single processor; we know that we'll pay a penalty in terms of computation delayed by waiting for communication; we work hard to minimise that penalty; but in the end we accept the penalty as the cost of parallelisation. Certainly some jobs are too small or too short to benefit from parallelisation, but a lot of what we (parallel computationalists, that is) do is just too big and too long-running to avoid parallelisation.
You can do marvelous things with custom MPI datatypes. I'm currently working on a project where several MPI processes are tracking particles in a piece of virtual space, and when particles cross over from one process' territory into another one's, their data (position/speed/size/etc) has to be sent over the network.
The way I achieved this is the following:
1) All processes share an MPI Struct datatype for a single particle that contains all its relevant attributes, and their displacement in memory compared to the base address of the particle object.
2) On sending, the process iterates over whatever data structure it stores the particles in, notes down the memory address of each one that needs to be sent, and then builds an Hindexed datatype where each block is one element long (of the above-mentioned particle datatype) and starts at the memory addresses previously noted down. Sending one object of the resulting type will send all the necessary data over the network, in a type-safe manner.
3) On the receiving end, things are slightly trickier. The receiving process first inserts "blank" particles into its own data structure: "blank" means that all the attributes that will be received from the other process are initialized to some default value. The memory addresses of the freshly inserted particles are noted down, and a datatype similar to that of the sender is created from these addresses. Receiving the sender's message as a single object of this type will automatically unpack all the data into all the right places, again, in a type safe manner.
This example is simpler in the sense that there are no relationships between particles (as there would be between nodes of a graph), but you could transmit that data in a similar way.
If the above description is not clear, I can post the C++ code that implements it.
I'm not sure I understand the question correctly so forgive me if my answer is off. From what I understand you want to send non-POD datatypes using MPI.
A library that can do this is Boost.MPI. It uses a serialization library to send even very complex data structures. There is a catch though: you will have to provide code to serialize the data yourself if you use complicated structures that Boost.Serialization does not already know about.
I believe message passing is typically used to transmit POD datatypes.
I'm not allowed to post more links so here is what I wanted to include:
Explanation of POD: www.fnal.gov/docs/working-groups/fpcltf/Pkg/ISOcxx/doc/POD.html
Serialization Library: www.boost.org/libs/serialization/doc
It depends on the organization of your data. If you use pointers or automatic memory inside your objects, it will be difficult. If you can organize your objects to be contiguous in memory, you have two choices: send the memory as bytes and cast it back to the object type on the receiver, or define an MPI derived type for your object. If, however, you use inheritance, things will become complicated due to how objects are laid out in memory.
I do not know your problem, but maybe you can take a look at ARMCI if you manage memory manually.