I'm interested in the implementation of GdH (Glasgow Distributed Haskell).
However, I could not find out which combination of CAP properties GdH supports.
Can we choose one of them, or do programs in GdH consist of explicit processes, as in Erlang?
From what I've read, it's essentially RMI, except that you communicate using MVars. In my experience there aren't really CAP limitations baked into any particular ecosystem, so much as into the problems themselves. For example, you can use more than 5 nodes in etcd or ZooKeeper, but the election messages, the fact that the leader commits to all nodes, the request forwarding, and the checks you have to make before applying the log don't exactly make it performant, do they?
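To illustrate the MVar-communication style mentioned above, here is a minimal local sketch in Haskell (Call and server are made-up names; real GdH shares the MVars across nodes, while here everything runs in one process, purely to show the shape):

```haskell
import Control.Concurrent

-- A "remote call" in this style is a request dropped into one MVar,
-- with the reply delivered through another.
data Call = Call Int (MVar Int)   -- argument plus a reply slot

server :: MVar Call -> IO ()
server box = do
  Call arg reply <- takeMVar box  -- block until a call arrives
  putMVar reply (arg * 2)         -- stand-in for the real service
  server box

main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (server box)
  reply <- newEmptyMVar
  putMVar box (Call 21 reply)     -- issue the "remote" call
  takeMVar reply >>= print        -- prints 42
```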
Nobody should be using RMI for a new application in 2017; honestly, you would be better off using an RPC package.
Some options: https://hackage.haskell.org/package/courier, which is a lightweight message-passing library; ZeroMQ; and, best for last,
http://haskell-distributed.github.io/, which is basically the closest thing you get to OTP on Haskell.
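If you go the haskell-distributed route, the hello-world is small. Here is a sketch adapted from the project's introductory example (note that createTransport's exact signature has changed across network-transport-tcp versions, so check the docs for yours):

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import Control.Distributed.Process
import Control.Distributed.Process.Node
import Network.Transport.TCP (createTransport, defaultTCPParameters)

-- Spawn an echo process on a local node and send it one message.
main :: IO ()
main = do
  Right transport <- createTransport "127.0.0.1" "10501" defaultTCPParameters
  node <- newLocalNode transport initRemoteTable
  runProcess node $ do
    echo <- spawnLocal $ forever $ do
      msg <- expect :: Process String
      say ("echo: " ++ msg)        -- say logs to stderr
    send echo "hello"
    liftIO (threadDelay 100000)    -- give the echo process time to run
```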
Related
I wrote a program which needs to process a very large dataset and I'm planning to run it with multiple threads in a high-end machine.
I'm a beginner in Clojure and I'm lost in the myriad of tools at my disposal:
agents, futures, core.async (and Quartzite?). I would like to know which one is most suited for this job.
The following describes my situation:
I have a function which transforms some data and stores it in a database.
The argument to the said function is popped from a Redis set.
I want to run the function in several separate threads for as long as there is a value in the Redis set.
For simplicity, futures can't be beat. They create a new thread, and return a value from it. However, often you need more fine-grained control than they provide.
The core.async library has nice support for parallelism (via pipeline, see below), and it also provides automatic back-pressure. You have to have a way to control the flow of data such that no one's starving for work, or burdened by too much of it. core.async channels must be bounded, and this helps with this problem. Also, it's a pretty logical model of your problem: taking a value from a source, transforming it (maybe using a transducer?) with some given parallelism, and then putting the result to your database.
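The backpressure point is language-independent: a bounded buffer makes a fast producer block instead of piling up unbounded work. A minimal sketch of that mechanism, here in Haskell with stm's TBQueue standing in for a bounded core.async channel (the producer/consumer roles are made up):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever)

-- writeTBQueue blocks (retries) once the queue is full, so the
-- producer is paced by its consumer instead of buffering unboundedly.
main :: IO ()
main = do
  q <- newTBQueueIO 4                  -- capacity 4: the bound
  _ <- forkIO $ forever $ do           -- slow consumer
    x <- atomically (readTBQueue q)
    threadDelay 100000                 -- pretend the DB write is slow
    putStrLn ("stored " ++ show (x :: Int))
  mapM_ (\i -> atomically (writeTBQueue q i)) [1 .. 20 :: Int]
  threadDelay 3000000                  -- let the consumer drain
```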
You can also go the manual route of using Java's excellent j.u.concurrent library. There are low-level primitives as well as thread management tools for thread pools. All of this is accessible from Clojure.
From a design standpoint, it comes down to whether you are more CPU-bound or I/O-bound. This affects decisions such as whether or not you will perform parallel reads from redis and writes to your database. If you are CPU-bound and thus your bottleneck is the computation, then it wouldn't make much sense to parallelize your reads from redis, or your writes to your database, would it? These are the types of things to consider.
You really have two problems to solve: (1) your familiarity with clojure's/java's concurrency mechanisms, and (2) your approach to this problem (i.e., how would you approach this problem, irrespective of the language you're using?). Once you solve #2, you will have a much better idea of which tools to use that I mentioned above, and how to use them.
Sounds like you may have a good embarrassingly parallel problem to solve. In that case, you could start simply by coding up your processing into a top-level function that processes the first datum. Once that's working, wrap it in a map to handle all of the data sequentially (serially, one at a time).
You might want to start tackling the bigger problem with just a few items from your data set. That will make your testing smoother and faster.
After you have the map working, it's time to just add a p (parallel) to your code to make it a pmap. This is a very rewarding way to heat up your machine. Here is a discussion about the number of threads pmap uses.
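Here is a sketch of that map-to-pmap progression, written in Haskell with the async package supplying the parallel map (processOne is a made-up stand-in; note that unlike pmap, mapConcurrently spawns one thread per element, so a real workload may want a bounded pool):

```haskell
import Control.Concurrent.Async (mapConcurrently)

-- Hypothetical per-item work: transform one datum, return a result.
processOne :: Int -> IO Int
processOne x = return (x * x)   -- stand-in for the real transformation

main :: IO ()
main = do
  sequential <- mapM processOne [1 .. 10]        -- the plain map stage
  print sequential
  -- The "add a p" step: the same one-word swap, mapM -> mapConcurrently.
  parallel <- mapConcurrently processOne [1 .. 10]
  print parallel
```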
The above is the simplest approach. If you need finer control over the concurrency, this concurrency screencast explores the use cases.
It is hard to be precise w/o knowing the details of your problem. There are several choices as you mention:
Plain Java threads & threadpools. If your problem is similar to a pre-existing Java solution, this may be the most straightforward.
Simple Clojure threading with future et al. Kicking off a thread with future and getting the result in a promise is very easy.
Replace map with pmap (parallel map). This can help in simple cases that are primarily map/reduce oriented.
The Claypoole library: Lots of tools to make multithreading simpler and easier. Please see their GitHub project and the Clojure/West talk.
I am doing some research on several distributed systems such as Chord, and I would like to be able to write algorithms and run simulations of the distributed system with just my desktop.
In the simulation, I need to be able to have each node execute independently and communicate with each other, while manually inducing elements such as lag, packet loss, random crashes etc. And then collect data to estimate the performance of the system.
After some searching, I find SimPy to be a good candidate for my purpose.
Would SimPy be a suitable library for this task?
If yes, what are some suggestions/caveats for implementing such a system?
I would say yes.
I used SimPy (version 2) for simulating arbitrary communication networks as part of my doctorate. You can see the code here:
https://github.com/IncidentNormal/CommNetSim
It is, however, a bit dense and not very well documented. Also it should really be translated to SimPy version 3, as 2 is no longer supported (and 3 fixes a bunch of limitations I found with 2).
Some concepts/ideas I found to be useful:
Work out what you want out of the simulation before you start implementing it; communication network simulations are incredibly sensitive to small design changes, as you are effectively trying to monitor/measure emergent behaviours from the system.
It's easy to start over-engineering the simulation; using native SimPy objects is almost always sufficient once you strip away the noise from your design.
Use Stores to simulate mediums for transferring packets/payloads. There is an example like this for simulating latency in the SimPy docs: https://simpy.readthedocs.io/en/latest/examples/latency.html
Events are tricky: they can only fire once per simulation step, so they are often a source of bugs, because behaviour is effectively lost if multiple things fire the same event in one step. For robustness, try not to use them to represent behaviour in communication networks (you rarely need something that low-level); as mentioned above, use Stores instead, as these act like queues by design.
Pay close attention to the probability distributions you use to generate randomness. Expovariate distributions are usually closer to natural systems than uniform distributions, but make sure to sanity-check every distribution you use. Generated network traffic usually follows a Poisson process, for example, and data volume often follows a power-law (Pareto) distribution.
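For instance, the exponential inter-arrival times behind Poisson traffic are easy to derive from a uniform source. A sketch of the arithmetic behind an expovariate draw (in Haskell rather than SimPy's Python, but the math is identical):

```haskell
import System.Random (randomRIO)

-- Inverse-CDF sampling: if U ~ Uniform(0,1], then -ln(U)/lambda is
-- Exponential(lambda). A stream of such gaps between packet
-- creations yields Poisson arrivals.
expSample :: Double -> IO Double
expSample lambda = do
  u <- randomRIO (1e-12, 1.0)   -- avoid log 0
  return (negate (log u) / lambda)

main :: IO ()
main = do
  gaps <- mapM (const (expSample 2.0)) [1 .. 5 :: Int]
  print gaps   -- five inter-arrival times with mean 0.5
```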
I would like to ask you if it is possible to secure a server with AI/machine learning based on the following concepts:
1) the server is implemented in a way to recognize normal behavior (authorized access, modification, ...).
2) the server must recognize any abnormal behavior and adapt to it if encountered.
3) if an abnormal behavior is caught, it checks in some kind of pre-known threat list what type of threat it is and a possible solution for it; ELSE it adapts "by itself" and performs changes based on what the normal behavior must be.
PS: If there already is a system similar to this one please let me know.
Thank you for your help!
Current IDS/IPS systems for applications ("web application firewalls") are in part similar to this (the other part is usually plain pattern matching to find common or known attacks or attack classes). First you switch a WAF to "learning mode": it listens to traffic and stores patterns as normal behavior. Then you switch it to "prevention mode", and it stops any traffic that is outside the ordinary flow.
The key is which aspects of the data flows they listen to and learn from in order to find anomalies. Basically, a WAF would look at HTTP queries to pages, learn parameter types and lengths, maybe clients as well, and in prevention mode it would not allow a type or length mismatch (any request not matching the learned values would be stopped on the WAF).
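A toy sketch of that learn-then-prevent cycle (illustrative names only, not any real WAF's API), here in Haskell: learning mode records the longest value seen per parameter, and prevention mode blocks requests that fall outside the profile.

```haskell
import qualified Data.Map.Strict as M

type Request = [(String, String)]   -- parameter name, value
type Profile = M.Map String Int     -- name -> max observed length

-- Learning mode: fold each observed request into the profile.
learn :: Profile -> Request -> Profile
learn = foldl (\p (name, val) -> M.insertWith max name (length val) p)

-- Prevention mode: every parameter must be known and within bounds.
allowed :: Profile -> Request -> Bool
allowed profile = all ok
  where
    ok (name, val) = case M.lookup name profile of
      Just maxLen -> length val <= maxLen  -- within learned bounds
      Nothing     -> False                 -- never-seen parameter

main :: IO ()
main = do
  let profile = foldl learn M.empty
        [ [("user", "alice"), ("page", "1")]
        , [("user", "bob"),   ("page", "42")]
        ]
  print (allowed profile [("user", "carol"), ("page", "7")])  -- True
  print (allowed profile [("user", replicate 500 'x')])       -- False
```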
There are obvious drawbacks to this, the learning phase can never be long enough, learnt rules will either be too generic or too specific, manual setup is tedious for a large application, etc.
Taking it to a more generic level would be very (very) difficult. Maybe with a deep neural network (so popular nowadays) you could better approximate a "real" AI that actually learns good and bad traffic patterns. Two obvious problems are getting patterns to teach it (how will you provide good and bad traffic examples in sufficient quantity for it to actually learn the difference?) and operational cost (running such a deep neural network would be very expensive, probably way more than a typical application breach would cost; defenses should be proportionate to the risk).
Having said that, I think it's not impossible, but it will take a few years until we get there.
The general idea is interesting and there is a lot of research on this topic currently: https://github.com/Limmen/awesome-rl-for-cybersecurity
But it's still quite far from being mature enough to use in practical settings.
I've noticed that all designs I have come across can be multithreaded using the actor model: separating each work module into a different actor and using a message queue (for me, a .NET ConcurrentQueue) to pass messages. What other good multithreaded models exist?
Communicating Sequential Processes is, I think, a far better model for concurrency than the actor model. It addresses a number of problems with the actor model (and other models) such as deadlock, livelock, starvation. Take a look at this and, more practically useful, this.
The main difference is as follows. In the actor model a message is sent asynchronously. However in CSP messages are sent synchronously; the sender cannot send until the receiver is ready to receive.
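A minimal sketch of such a rendezvous channel, here in Haskell built from two MVars (SyncChan and its operations are illustrative names, not a library API):

```haskell
import Control.Concurrent
import Control.Concurrent.MVar

-- A CSP-style synchronous channel: send blocks until a receiver has
-- actually taken the value, unlike an asynchronous mailbox.
data SyncChan a = SyncChan (MVar a) (MVar ())

newSyncChan :: IO (SyncChan a)
newSyncChan = SyncChan <$> newEmptyMVar <*> newEmptyMVar

send :: SyncChan a -> a -> IO ()
send (SyncChan slot ack) x = do
  putMVar slot x     -- offer the value
  takeMVar ack       -- block until the receiver confirms

recv :: SyncChan a -> IO a
recv (SyncChan slot ack) = do
  x <- takeMVar slot -- block until a sender offers a value
  putMVar ack ()     -- release the sender
  return x

main :: IO ()
main = do
  ch <- newSyncChan
  _ <- forkIO $ do
    threadDelay 500000            -- receiver is busy for a while
    msg <- recv ch
    putStrLn ("received: " ++ msg)
  putStrLn "sending (will block until the receiver is ready)..."
  send ch "hello"                 -- returns only after recv completes
  putStrLn "send completed"
```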
This one simple restriction makes the world of difference. If you've got an incorrect design with deadlock potential then in the actor model it may or may not occur (and it usually occurs only when demo-ing to the boss...). However in CSP the deadlock will always occur, leaving you in no doubt that your design is incorrect. Ok, so you've still got to fix it but that's OK; fixing problems you know are there is much easier than attempting to exhaustively test for the absence of problems (your only choice in the actor model).
The strictly synchronous approach of CSP seems like it will cause problems with response times; for example one fears that a GUI thread can't move on because it's not been able to send a message to a busy worker thread that's not got as far as its 'read'. What you have to do is to ensure that the workload is spread across enough threads so that they can all get back to waiting for new messages within an acceptable period of time. CSP doesn't let you get away with it. The actor model does, however don't be deceived; you're just building up future problems.
In .NET a ConcurrentQueue is not the right primitive for CSP, not unless you layer a synchronising mechanism on top. I've added strict synchronisation on top of TCP sockets too. In fact I generally end up writing some sort of library that abstracts both sockets and pipes, so that it becomes immaterial whether a 'Process' (as they're known in CSP parlance) is a thread on this machine or a whole other process on another machine at the end of a network connection. Nice: scalability built in from the very beginning.
I've been doing it the CSP way for 23 years now, and I won't do it any other way. I've built some big systems with thousands of threads that way.
==EDIT==
It seems this answer is still attracting some attention, so I thought I'd add to it. For Windows developers there is the DataFlow namespace for the Task Parallel Library. It has to be separately downloaded. Microsoft describe it thusly: "This dataflow model promotes actor-based programming by providing in-process message passing for coarse-grained dataflow and pipelining tasks." Excellent! It uses classes like BufferBlock as communication channels. The important thing is that a BufferBlock has a BoundedCapacity property that defaults to Unbounded, which fits the actor model. Set this to a value of 1, and you have now transformed it into a CSP-style communication channel.
To add to my last, there are various other multi threading models beyond CSP. This Wikipedia page lists several others like CCS, ACP, and LOTOS. Reading those articles hints at a deep and dark cavern where academics roam, waiting to pounce on a stray software developer.
The problem is that academic obscurity often means a complete lack of tools and libraries at the practical, usable level. It takes a lot of effort to convert a sound, proven academic study into a set of libraries and tools. There's little real incentive for the wider software community to take up a theoretical paper and turn it into a practical reality.
I like CSP because it's actually dead simple to implement your own CSP library based on select() or pselect(). I've done that several times now (I must learn about code re-use), plus the nice people at Kent University put together JCSP for those who like Java. I don't recommend developing in Occam (though it's still just about possible); support and maintainability are going to be issues going forward. CSP is probably the easiest one to get into, and given its good characteristics it's well worthwhile.
@JeremyFriesner
Future Problems
To expand on what I meant by "future problems", I was referring to the fact that in an asynchronous system the sender of messages has no knowledge as to whether the receiver is actually keeping up with the demand. The sender doesn't know because all it knows is that some message buffer has accepted the message. The transport underneath (e.g. tcp) then gets on with the job of pushing the message over as and when the receiver is willing to accept it.
Thus it might be that when under stress the system fails to perform as required, because the message transport will inevitably have a limited capacity to absorb messages that the receiver can't accept yet. The sender only finds this out after the problem has already begun to develop, by which time it might be too late to do anything about it.
Testing of course can reveal this problem, but you have to be careful that the testing really has exhausted the transport's ability to absorb messages. Just a quick blast at full speed might be deceiving.
Of course, a synchronous system imposes an overhead ("are you ready yet?", "no, not yet", "now?", "yes!", "here you are then") which just doesn't happen in an asynchronous system. So on average the asynchronous system will be more efficient and might actually have a higher throughput, which is why most of the world's systems are asynchronous, but it is also the reason why systems don't always reach the full capacity that the raw network bandwidths / processing times might suggest. When approaching full capacity, asynchronous systems tend not to limit gracefully, in my opinion. Token Bus (nb not Token Ring) was a good example of a synchronous network with totally dependable and deterministic throughput, but it was just a little bit slower than Ethernet and Token Ring...
Having always been blessed with a surfeit of bandwidth in my problems I've chosen the synchronous route for certainty-of-success reasons; I'm not really losing out much on bandwidth, but I am losing tons of risk, which is good.
Convert from Synchronous to Asynchronous
Maybe, but it's possibly of little value. In a synchronous system it only works as per the requirement if you have successfully balanced the division of labour between threads. That is, there are enough threads doing the slow bits so that the fast bits aren't held back. Get that wrong and the system definitely isn't quick enough.
But having done that, you have a system where every component is able to send messages onwards with no delay, because everything it is sending to is ready and waiting (thanks to your skill and judgement at balancing out the workloads). So if you then convert to an asynchronous message transport, all you're doing is saving fractionally small amounts of time in the transport of those messages. You're not making changes that will result in the workloads getting processed quicker. However, if saving bandwidth is the goal, then perhaps it's worthwhile.
Of course, doing this balancing can be a difficult thing, and variabilities like HDD access times, networks, etc. can be difficult to overcome. I've often had to implement a 'next available' workload sharing scheme. But certainly in real time signal processing systems like the ones I play with, you're basically dealing with a very dependable transport like OpenVPX's RapidIO, you're only doing sums on the data (not dealing with databases, disks, etc.), and the data rates are very high (1 GByte/sec is perfectly doable these days, and in fact I was handling data rates that high 13 years ago; that was haaard work). Being strictly synchronous means that you're either definitely keeping up with the data rate or definitely not. With asynchronous, it's more of a maybe...
Real Time OS for Everyone!
Having a real time OS is an essential component too, and these days it seems to be the PREEMPT_RT patch set for Linux that does the job for a lot of people in the trade. Red Hat do a prepack spin of that (Red Hat MRG), but for a freebie, Scientific Linux from the nice people at CERN is good! I strongly suspect that a lot of systems would work much more smoothly near their capacity limits if PREEMPT_RT were used; it does a good job of smoothing things out.
Concurrency is a fascinating topic with a lot of approaches to implementation with the fundamental question being - "How do I coordinate parallel computations?".
Some models of concurrency are:
Futures
Futures, also known as Promises or Tasks, are objects that act as proxies for an asynchronously calculated result. When the value is actually needed for a calculation, the thread freezes until the calculation is complete, and thus synchronization is achieved.
Futures are the preferred concurrency model for .NET and ES6.
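For instance, Haskell's async package provides the same shape (a sketch; expensive is a stand-in for real work):

```haskell
import Control.Concurrent.Async (async, wait)

-- `async` starts the computation on another thread and returns a
-- handle; `wait` blocks only when the result is actually demanded.
expensive :: Int -> IO Int
expensive n = return (sum [1 .. n])   -- stand-in for real work

main :: IO ()
main = do
  fut <- async (expensive 1000000)
  putStrLn "doing other work while the future runs..."
  result <- wait fut
  print result
```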
Software Transactional Memory
Software Transactional Memory (STM) synchronizes access to shared memory (much like locks) by grouping actions into transactions. Any single transaction only sees a single view of the shared memory and is atomic. This is conceptually similar to how many databases deal with concurrency.
STM is the preferred concurrency model for Clojure and Haskell.
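A small example with Haskell's stm package, the classic atomic transfer:

```haskell
import Control.Concurrent.STM

-- Transfer between two accounts atomically: other threads never
-- observe a state where the money has left one account but not
-- yet arrived in the other.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print  -- 70
  readTVarIO b >>= print  -- 30
```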
The Actor Model
The Actor Model focuses on message passing. An actor receives a message and can decide to send a message in response, spawn other actors, make local changes, etc. This is probably the least tightly coupled model of those discussed, as actors exchange messages only and nothing else.
The Actor Model is the preferred concurrency model for Erlang and Rust.
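A minimal mailbox-driven actor, sketched in Haskell with a plain Chan (Msg and counterActor are illustrative names, not a framework API):

```haskell
import Control.Concurrent
import Control.Concurrent.Chan

-- An "actor": a thread that owns private state and reacts to
-- messages from its mailbox; the only way to interact with it
-- is to send a message.
data Msg = Add Int | Get (MVar Int)

counterActor :: Chan Msg -> Int -> IO ()
counterActor mailbox n = do
  msg <- readChan mailbox
  case msg of
    Add k     -> counterActor mailbox (n + k)
    Get reply -> putMVar reply n >> counterActor mailbox n

main :: IO ()
main = do
  mailbox <- newChan
  _ <- forkIO (counterActor mailbox 0)
  writeChan mailbox (Add 2)
  writeChan mailbox (Add 3)
  reply <- newEmptyMVar
  writeChan mailbox (Get reply)
  takeMVar reply >>= print  -- 5
```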
Note that unlike the languages mentioned above, most languages don't have canonical or preferred concurrency models, and even the languages that show a strong preference for one model usually have the others implemented as libraries.
My personal opinion is that Futures outclass STM and Actors in simplicity of use and reasoning, but none of these models is inherently "wrong"; you can use whichever you prefer.
The most general model for parallel processing is Petri Nets. It represents computation as a pure data-dependency graph, which expresses maximum parallelism. All other models stem from it.
The Dataflow Computing model (http://www.cs.colostate.edu/cameron/dataflow.html, http://en.wikipedia.org/wiki/Dataflow_programming) is almost as powerful. It restricts Petri net places to have only one output arc. In practice this is useful, as places with multiple output arcs are hard to implement, cause indeterminism, and are rarely needed.
The actor model is a dataflow model where nodes may have only two input edges: one for incoming messages and one for the actor's state. This is a serious restriction if you want to program functions with side effects and more than one argument.
I've read a lot of articles about distributed Haskell. Much work has been done, but it seems to be in the area of distributing computations. I saw the remote package, which seems to implement Erlang-style message passing, but it is at version 0.1 and at an early stage.
I'd like to implement a system where there are many separate processes that provide distinct services, and are tied together by several main processes. This seems to be a natural fit for Erlang, but not so for Haskell. But I like Haskell's type safety.
Has there been any recent adoption of Erlang-style process management in Haskell?
If you want to learn more about the remote package, a.k.a CloudHaskell, see the paper as well as Jeff Epstein's thesis. It aims to provide precisely the actor abstraction you want, but as you say it is in the early stages. There is active discussion regarding improvements on the parallel-haskell mailing list, so if you have specific needs that remote doesn't provide, we'd be happy for you to jump in and help us decide its future directions.
More mature but lower-level than remote is the haskell-mpi package. If you stick to the Simple interface, messages can be sent containing arbitrary Serialize instances, but the abstraction is still way lower than remote.
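Its hello-world is roughly the following, adapted from the package's own example (check the docs for exact signatures; run it under mpirun with at least two processes):

```haskell
import Control.Parallel.MPI.Simple (mpiWorld, commWorld, unitTag, send, recv)

-- Process 1 sends a String to process 0, which prints it.
main :: IO ()
main = mpiWorld $ \size rank ->
  if size < 2
    then putStrLn "At least two processes are needed"
    else case rank of
      0 -> do (msg, _status) <- recv commWorld 1 unitTag
              putStrLn msg
      1 -> send commWorld 0 unitTag "Hello from process 1"
      _ -> return ()
```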
There are some experimental systems, such as described in Implementing a High-level Distributed-Memory Parallel Haskell in Haskell (Patrick Maier and Phil Trinder, IFL 2011, can't find a pdf online). It blends a monad-par approach of deterministic dataflow parallelism with a limited ability to make the I-structures serializable over the network. These sorts of abstraction have promise for doing distributed computation, but since the focus is on computing purely-functional values rather than providing Erlang-style processes, they probably wouldn't be a good fit for your application.
Also, for completeness, I should point out the Haskell wiki page on cloud and HPC Haskell, which covers what I describe here, as well as the subsection on distributed Haskell, which seems in need of a refresh.
I frequently get the feeling that IPC and actors are an oversold feature. There are plenty of attractive messaging systems out there that have Haskell bindings, e.g. MessagePack, 0MQ or Thrift. IMHO the only thing you have to add is proper addressing of processes, and a decision about who or what manages this addressing capability.
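For example, with the zeromq4-haskell bindings a reply socket takes only a few lines (a sketch assuming those particular bindings; API details may differ across binding versions):

```haskell
import System.ZMQ4.Monadic

-- A minimal 0MQ echo service: receive one request from a REQ peer
-- and send the same bytes back.
main :: IO ()
main = runZMQ $ do
  rep <- socket Rep
  bind rep "tcp://*:5555"
  msg <- receive rep   -- a ByteString from some REQ peer
  send rep [] msg      -- echo it back
```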
By the way: a number of coders adopt e.g. 0MQ into their Erlang environments, simply because it offers the possibility to structure messaging via message brokers rather than relying on pure process-to-process messaging at super scale.
In a "massively multicore world" I personally assume that shared memory approaches will eventually be outperforming messaging. Someone can then always come and argue with asynchrony of course. But already when you write that you want to "tie together" your processes by "several main processes" you in fact speak about synchronization. Also, you can of course challenge whether a single function, process or thread is the right level of parallelization.
In short: I would probably see whether MessagePack or 0MQ could fit my needs in Haskell and care for the rest in my code.