What is the difference between multi-agent systems and distributed computing / multithreading?

I'm curious about the differences between distributed systems and multi-agent systems. I see many fundamental similarities between them, which confuses me.
Similarities:
1- there are multiple processing units
2- both are used for computing and simulation applications
3- the processing units interact with each other
4- the processing units work collectively and become a powerful machine
5- the units have their own properties, such as their own clock, processor speed, memory, etc.
So what are the differences?

It is a matter of abstraction and purpose. Multi-agent systems employ powerful high-level abstractions, built around complex (i.e. intelligent) components, which are usually not found in a regular distributed system created only to split a simple number-crunching algorithm across different machines. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Distributed computing can be used to solve problems that are embarrassingly parallel. Sure, there are similarities, but if you look closely at their abstractions, they can contrast profoundly, relying on different algorithms and data structures.
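As a concrete (and deliberately tiny) illustration of the embarrassingly parallel case, here is a sketch in Python; multiprocessing on one machine stands in for separate machines, and the `crunch` function and chunk sizes are made up for illustration:

```python
# Embarrassingly parallel number crunching: independent chunks, no
# communication between workers, results gathered at the end.
from multiprocessing import Pool

def crunch(chunk):
    # each worker processes its chunk in isolation
    return sum(x * x for x in chunk)

if __name__ == '__main__':
    chunks = [range(i, i + 1000) for i in range(0, 10000, 1000)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(crunch, chunks)   # scatter the work
    print(sum(partial_sums))                      # gather and reduce
```

A multi-agent system, by contrast, puts decision-making inside the components themselves rather than just splitting the data among them.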

In my view the key is the definition of an (intelligent) agent. S. Russell and P. Norvig, in "Artificial Intelligence: A Modern Approach", define it as follows:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
So a multi-agent system is formed by a collection of agents that perceive the environment and act upon it, yet remain to some degree independent and decentralized, each with only a local view of the environment.
A distributed system is (usually) defined as a collection of nodes performing distributed calculations, linked together to multiply processing power.
In a way a MAS is a distributed system, but it has some characteristics that make it unique. It depends on the usage and the particular implementation of the system, but to some extent the definitions do overlap.
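To make the perceive/act definition above a bit more concrete, here is a toy sketch in Python; the `Environment` and `ThermostatAgent` names and the whole scenario are illustrative, not from the book:

```python
# Two agents share an environment, but each keeps only a local view
# (a temperature reading) and its own goal, and acts through a simple actuator.
import random

class Environment:
    def __init__(self):
        self.temperature = 20.0

class ThermostatAgent:
    def __init__(self, target):
        self.target = target              # private goal, unknown to other agents

    def perceive(self, env):
        return env.temperature            # "sensor": read part of the environment

    def act(self, env, reading):
        if reading < self.target:         # "actuator": nudge the environment
            env.temperature += 0.5
        elif reading > self.target:
            env.temperature -= 0.5

env = Environment()
agents = [ThermostatAgent(target=22.0), ThermostatAgent(target=21.0)]
for _ in range(20):
    env.temperature += random.uniform(-0.3, 0.3)   # external disturbance
    for agent in agents:
        agent.act(env, agent.perceive(env))
print(f'final temperature: {env.temperature:.1f}')
```

Each agent stays autonomous and decentralized; the collective behaviour emerges from their individual perceive/act loops.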

The question is a bit old but I will still take a shot at it.
We can start by looking at definitions.
Distributed system [1]:
We define a distributed system as one in which hardware or software components located at networked computers communicate and coordinate their actions only by passing messages. This simple definition covers the entire range of systems in which networked computers can usefully be deployed.
Multiagent system [2]:
Multiagent systems are those systems that include multiple autonomous entities with either diverging information or diverging interests, or both.
So, fundamentally, "Distributed" is concerned with the architecture of a system while "Multiagent" is concerned with a specific method of problem solving employed in a system.
By virtue of being distributed, a system is made up of several networked computers. A multiagent system, on the other hand, can exist in a networked environment or on a single non-networked computer.
References
[1] G. Coulouris, J. Dollimore, T. Kindberg, G. Blair, Distributed Systems: Concepts and Design (Fifth Edition), Addison-Wesley, 2012.
[2] Y. Shoham, K. Leyton-Brown, Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations (Revision 1.1), Cambridge University Press, 2010.

When I think about distributed computing, the load is distributed across multiple parts, be they multiple threads or multiple computers. In distributed computing the parts run in parallel and are almost identical to one another; only the final parts that collect and summarize the results of the others may differ.
A multi-agent system, as its name implies, has multiple agents that work together to accomplish a goal. Unlike distributed computing, a multi-agent system may run on a single computer, but it will certainly have more than one agent. These agents may include, for example, a collector agent, a reporter agent, a computing agent, and so on.
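A rough sketch of that idea, using the role names from this answer but with everything else invented for illustration, might look like this in Python (all agents live in a single process on a single computer):

```python
# Three kinds of agents cooperating toward one goal on one machine:
# computing agents do the work, a collector gathers partial results,
# and a reporter summarizes them.
class ComputingAgent:
    def work(self, data):
        return sum(data)

class CollectorAgent:
    def __init__(self):
        self.results = []
    def collect(self, result):
        self.results.append(result)

class ReporterAgent:
    def report(self, results):
        print(f'{len(results)} partial results, total = {sum(results)}')

computing_agents = [ComputingAgent() for _ in range(3)]
collector = CollectorAgent()
reporter = ReporterAgent()

for agent, data in zip(computing_agents, ([1, 2], [3, 4], [5, 6])):
    collector.collect(agent.work(data))
reporter.report(collector.results)
```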

Related

Use SimPy to simulate Chord distributed system

I am doing some research on several distributed systems such as Chord, and I would like to be able to write algorithms and run simulations of the distributed system with just my desktop.
In the simulation, I need each node to execute independently and communicate with the others, while I manually introduce elements such as lag, packet loss, random crashes, etc., and then collect data to estimate the performance of the system.
After some searching, I found SimPy to be a good candidate for my purpose.
Would SimPy be a suitable library for this task?
If yes, what are some suggestions/caveats for implementing such a system?
I would say yes.
I used SimPy (version 2) to simulate arbitrary communication networks as part of my doctorate. You can see the code here:
https://github.com/IncidentNormal/CommNetSim
It is, however, a bit dense and not very well documented. Also it should really be translated to SimPy version 3, as 2 is no longer supported (and 3 fixes a bunch of limitations I found with 2).
Some concepts/ideas I found to be useful:
Work out what you want out of the simulation before you start implementing it; communication network simulations are incredibly sensitive to small design changes, as you are effectively trying to monitor/measure emergent behaviours from the system.
It's easy to start over-engineering the simulation; native SimPy objects are almost always sufficient once you strip away the noise from your design.
Use Stores to simulate media for transferring packets/payloads. There is an example of simulating latency this way in the SimPy docs: https://simpy.readthedocs.io/en/latest/examples/latency.html (a short sketch of the same pattern follows these notes).
Events are tricky: they can only fire once per simulation step, which is often a source of bugs, as behaviour is effectively lost if multiple things fire the same event within a step. For robustness, try not to use them to represent behaviour in communication networks (you rarely need something that low-level); as mentioned above, use Stores instead, since these act like queues by design.
Pay close attention to the probability distributions you use to generate randomness. Exponential distributions are usually closer to natural systems than uniform distributions, but sanity-check every distribution you use. Network traffic generation usually follows a Poisson process, for example, and data volumes often follow a power-law (Pareto) distribution.
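Here is a minimal sketch of the Store-as-medium pattern mentioned above, assuming SimPy 3; the `Link`, `sender` and `receiver` names, and the latency/loss numbers, are made up for illustration:

```python
# A one-way link modelled as a simpy.Store with fixed latency and random
# packet loss; traffic is generated with expovariate (Poisson-like) gaps.
import random
import simpy

class Link:
    def __init__(self, env, latency=0.5, loss_rate=0.2):
        self.env = env
        self.latency = latency
        self.loss_rate = loss_rate
        self.store = simpy.Store(env)

    def _deliver(self, packet):
        yield self.env.timeout(self.latency)        # propagation delay
        self.store.put(packet)

    def send(self, packet):
        if random.random() >= self.loss_rate:       # drop some packets
            self.env.process(self._deliver(packet))

    def receive(self):
        return self.store.get()                     # acts like a queue

def sender(env, link):
    seq = 0
    while True:
        yield env.timeout(random.expovariate(1.0))  # Poisson-like inter-arrival times
        link.send((seq, env.now))
        seq += 1

def receiver(env, link):
    while True:
        seq, sent_at = yield link.receive()
        print(f'{env.now:6.2f}: got packet {seq} (delay {env.now - sent_at:.2f})')

env = simpy.Environment()
link = Link(env)
env.process(sender(env, link))
env.process(receiver(env, link))
env.run(until=10)
```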

Why do modern OSes (Linux, Windows, Solaris) implement a one-to-one thread model?

Reading my textbook for my OS class, which is Operating Systems Concepts, 8th edition, by Silberschatz, Galvin, and Gagne, I came across something interesting in the chapter on threads.
In introducing thread models, they start with:
Many-to-one
- stating that essentially this does not provide true concurrency.
Next they move to:
One-to-one
- stating that this provides true concurrency, but that the number of threads is restricted because each user thread requires creating a kernel thread, which carries overhead.
Lastly, they move on to the seemingly obvious solution:
Many-to-many
- which apparently takes the best of both worlds.
Yet if you notice, in the one-to-one section it states that Linux, along with the Windows family of operating systems, implements the one-to-one model.
So if many-to-many is the best solution, why do Linux, Windows, and Solaris (and maybe others) implement one-to-one?
...and apologies if this should go in programmers SE.
Solaris changed models from MxN to 1:1 because the complexity of managing threads at two different levels did not produce the expected benefits, the lack of a direct mapping caused issues meeting POSIX threads requirements in areas such as signal handling, and the cost of creating a new kernel thread for each user space thread was not too high. Sun published a white paper at the time of the switch which provides more details: Multithreading in the Solaris™ Operating Environment.
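One way to see the 1:1 model from user space, at least on Linux or Windows with a reasonably recent Python (3.8+ for get_native_id), is to check that every thread you create is backed by a distinct kernel thread; this is just an observation of the mapping, not a statement about how the kernel implements it:

```python
# Each Python thread reports both its Python-level ID and the ID the
# kernel assigned to it; with a 1:1 model they pair off one per thread.
import threading

def report():
    print(f'python thread {threading.get_ident()} '
          f'-> kernel thread {threading.get_native_id()}')

threads = [threading.Thread(target=report) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```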

Is there a fundamental flaw in operating system structure? [closed]

At some point I read that operating systems were meant to be created in a certain way (i.e. as a 'microkernel') to be resistant to faults, but are made in another way (i.e. 'monolithic') for practical purposes such as speed. While this is not the question itself, it does raise the question:
Have any fundamental tradeoffs been made in computer architecture that have reduced security from the earlier theoretical systems?
I'm looking for answers that are accepted in the field of computer science, not opinions on current implementations. For example, it is known that programs could run faster if they were each built on custom hardware, but this is impractical, which is why we have general-purpose computers.
Anyone saying that microkernels are "the right way" is wrong. There is no "right way". There is no objectively best approach to security.
The problem is that security is not fundamental to computing. It's a sociological issue in many ways; it only exists because humans exist, unlike computation, which is a science in the literal sense.
That said, there are principles of security that have held true and been incorporated into hardware and software, like the principle of least privilege. The kernel of an operating system runs at a higher hardware privilege level than userland processes. That's why your program can't interact with hardware directly and has to use system calls to do so.
There are also issues of complexity, and various measurements of complexity. Programs tend to get more complex as our needs grow - instead of a basic game of pong we now have 1,000 AI units on a giant map. Complexity goes up, and our ability to reason about the program will likely go down, opening up holes for vulnerabilities.
There's no real answer to your question but this - if there is an objective method for security we haven't discovered it yet.
SECURITY IS NOT A FUNCTION OF THE TYPE OF KERNEL.
The type of kernel has nothing to do with the security of the operating system, though I would agree that the efficiency of an operating system does depend on its design.
A monolithic kernel is a single large process running entirely in a single address space; it is a single static binary file. In a microkernel, by contrast, the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user space. All servers are kept separate and run in different address spaces, and communication between them is done via message passing.
Developers often prefer microkernels because they provide flexibility and make it easier to work with different user spaces, whereas a monolithic kernel is somewhat more complex in nature and is beneficial for lightweight systems.
Is there some fundamentally flawed way our computers are structured that allows all the security holes that are found? What I mean by this is that there are sometimes proper theoretical ways to do things in computer science that satisfy all our requirements and are robust, etc.
There are certain concepts, like protection rings and capability-based security, but in the end it depends on the requirements of the system. For more detail, be sure to visit the links provided. Some of the key ideas are highlighted below.
Capability-based security: Although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system.
Protection rings: Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally enforced by CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number).
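To give a flavour of the capability idea in the quote above, here is a toy sketch; this is my own illustration of the concept (an object bundled with a fixed set of rights, where possession of the token is what grants access), not how real capability systems are implemented:

```python
# A capability wraps a resource together with the rights it grants;
# code that only holds a read capability cannot append to the log.
class Capability:
    def __init__(self, resource, rights):
        self._resource = resource
        self._rights = frozenset(rights)

    def invoke(self, operation, *args):
        if operation not in self._rights:
            raise PermissionError(f'capability does not grant {operation!r}')
        return getattr(self._resource, operation)(*args)

class LogFile:
    def __init__(self):
        self.lines = []
    def read(self):
        return list(self.lines)
    def append(self, line):
        self.lines.append(line)

log = LogFile()
read_only = Capability(log, rights={'read'})   # hand this to less-trusted code
print(read_only.invoke('read'))                # allowed
read_only.invoke('append', 'oops')             # raises PermissionError
```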

Machine-dependent languages

Why might a machine-dependent language be more appropriate for writing certain types of programs? What types of programs would be appropriate?
Why might a machine-dependent language be more appropriate for writing certain types of programs?
Speed
Some machines have special instruction sets (like MMX or SSE on x86, for example) that let you exploit the architecture in ways that compilers may or may not utilize well (or may not utilize at all). If speed is critical (as in video games or data-crunching programs), then you'd want to get the most out of the architecture you're on.
Where Portability is Useless
When coding a program for a specific device (take the iPhone or the Nintendo DS as examples), portability is the least of your concerns. This code will most likely never go to another platform as it's specifically designed for that architecture/hardware combination.
Developer Ignorance and/or Market Demand
Computer video games are a prime example: Windows is the dominant gaming OS, so why target others? Doing so lets developers focus on known variables for speed, size, and ease of use. Some developers are simply ignorant: they learn to code on only one platform (such as .NET) and 'forget' that other platforms exist because they don't know about them. They seem to take an approach along the lines of "It works on my machine, why should I bother porting it to a bizarre combination that I will never use?"
No other choice.
I will take the iPhone again, as it is a very good example. While you can program it in C or C++, you cannot access any of the UI widgets, which are linked against the Objective-C runtime. You have no choice but to code in Objective-C if you want to use those widgets.
What types of programs would be appropriate?
Embedded systems
All of the above apply: when you're coding for an embedded system, you want to take advantage of the full potential of the hardware you're working on, be it memory management (such as the CP15 coprocessor on ARM9) or obscure hardware attached only to the target device (servo motors, special sensors, etc.).
The best example I can think of is small embedded devices. When you have to have full control over every detail of optimization due to extremely limited computing power (only a few kilobytes of RAM, for example), you might want to drop down to the assembly level yourself to make everything work perfectly within those small confines.
On the other hand, compilers have gotten sophisticated enough these days where you really don't need to drop below C for most situations, including embedded devices and microcontrollers. The situations are pretty rare when this is necessary.
Consider virtually any graphics engine. Since a run-of-the-mill general-purpose CPU cannot perform these per-pixel operations across the whole image in parallel, you would need at a bare minimum one cycle per pixel to be modified.
However, since modern GPUs can operate on many pixels (or other piece of data) all at the same time, the same operation can be finished much more quickly. GPUs are very well-suited for embarrassingly parallel problems.
Granted, we have high-level-language APIs to control our video cards nowadays, but as you get "closer to the metal", the raw language used to control a GPU is a different animal from the language to control a general purpose CPU, due to the vast difference in architectures.
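As a rough sketch of that contrast (in Python, with NumPy's vectorized operations standing in for "operate on many pixels at once"; a real GPU shader pushes the same idea much further), compare a per-pixel loop with a whole-image formulation:

```python
# The loop spells out "one pixel at a time"; the vectorized version expresses
# the operation over the whole frame and lets the library run it with
# optimized, data-parallel code.
import numpy as np

image = np.random.rand(480, 640)                 # a fake grayscale frame

brightened_loop = np.empty_like(image)
for y in range(image.shape[0]):                  # pixel-at-a-time
    for x in range(image.shape[1]):
        brightened_loop[y, x] = min(image[y, x] * 1.2, 1.0)

brightened_vec = np.minimum(image * 1.2, 1.0)    # whole-image, data-parallel style

print(np.allclose(brightened_loop, brightened_vec))   # True
```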

Threading paradigm?

Are there any paradigms that give you a different mindset or a different take on writing multi-threaded applications? Perhaps something that feels vastly different, like procedural programming versus functional programming.
Concurrency has many different models for different problems. The Wikipedia page for concurrency lists a few models, and there is also a page on concurrency patterns that gives some good starting points for the different ways to approach concurrency.
The approach you take is very dependent on the problem at hand. Different models solve various different issues that can arise in concurrent applications, and some build on others.
In class I was taught that concurrency uses mutual exclusion and synchronization together to solve concurrency issues. Some solutions only require one, but with both you should be able to solve any concurrency issue.
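As a small illustration of those two tools working together (a toy producer/consumer I made up, not from the class material), a lock provides the mutual exclusion and a condition variable provides the synchronization:

```python
# The lock guards the shared slot; the condition lets the consumer sleep
# until the producer signals that a value is available.
import threading

shared = {'value': None}
lock = threading.Lock()
ready = threading.Condition(lock)

def producer():
    with lock:                      # mutual exclusion around shared state
        shared['value'] = 42
        ready.notify()              # synchronization: wake the consumer

def consumer():
    with lock:
        while shared['value'] is None:
            ready.wait()            # releases the lock while waiting
        print('consumer saw', shared['value'])

threads = [threading.Thread(target=consumer), threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```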
For a vastly different concept you could look at immutability and concurrency. If all data is immutable then the conventional approaches to concurrency aren't even required. This article explores that topic.
I don't really understand the question, but doing some coding with CUDA will give you a different way of thinking about multi-threaded applications.
It differs from general multi-threading techniques, like semaphores, monitors, etc., because you have thousands of threads running concurrently. So the problem of parallelism in CUDA lies more in partitioning your data and merging the chunks back together later.
Just a small example of a complete rethinking of a common serial problem is the SCAN (prefix sum) algorithm. It is as simple as:
Given a set {a, b, c, d, e}
I want the following set:
{a, a+b, a+b+c, a+b+c+d, a+b+c+d+e}
where the symbol '+' is any associative operator (not only addition; you can use multiplication as well).
How do you do this in parallel? It's a complete rethink of the problem, and it is described in this paper.
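A sketch of the parallel-friendly structure (in plain Python rather than CUDA, and using the simple Hillis-Steele formulation rather than the exact scheme in the paper) looks like this; every update within an outer step is independent, which is what lets thousands of GPU threads do them at once:

```python
# Inclusive scan: after step k, element i holds the combination of the
# 2**k elements ending at i; about log2(n) steps in total.
def inclusive_scan(values, op=lambda a, b: a + b):
    result = list(values)
    step = 1
    while step < len(result):
        previous = result[:]                    # snapshot of the last step
        for i in range(step, len(result)):      # these updates are independent
            result[i] = op(previous[i - step], previous[i])
        step *= 2
    return result

print(inclusive_scan([1, 2, 3, 4, 5]))   # [1, 3, 6, 10, 15]
```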
Many more implementations of different algorithms in CUDA can be found on the NVIDIA website.
Well, a very conservative paradigm shift is from thread-centric concurrency (share everything) towards process-centric concurrency (address-space separation). This way one can avoid unintended data sharing and it's easier to enforce a communication policy between different sub-systems.
This idea is old and was propagated (among others) by the Micro-Kernel OS community to build more reliable operating systems. Interestingly, the Singularity OS prototype by Microsoft Research shows that traditional address spaces are not even required when working with this model.
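A minimal sketch of that process-centric style (the worker and the squaring task are made up for illustration): each worker runs in its own address space and interacts only through explicit message queues, so nothing is shared by accident:

```python
# Two processes communicate only via Queues; the None sentinel ends the worker.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    for item in iter(inbox.get, None):   # receive messages until the sentinel
        outbox.put(item * item)

if __name__ == '__main__':
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in range(5):
        inbox.put(n)
    inbox.put(None)                      # ask the worker to stop
    print([outbox.get() for _ in range(5)])
    p.join()
```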
The relatively new idea I like best is transactional memory: avoid concurrency issues by making sure updates are always atomic.
Have a looksee at OpenMP for an interesting variation.

Resources