Zigbee routing algorithm - linux

My goal is to implement a routing algorithm for attack detection using packet delivery ratio calculation on a Zigbee module (hardware). I would like to know if it is possible to implement this on popular Zigbee modules such as those from NXP, TI, or Silicon Labs. I tried Digi XBees already but couldn't find a way to modify the route discovery process. Any suggestions and ideas are welcome.
Thanks.
Daniel Emehinola

Zigbee is a standard that defines several layers, and the implementations of these layers are usually provided by the chip vendors. To ensure people do not tamper with these implementations, vendors ship many of the layers as precompiled libraries, so customers of these chips only have to implement the application or modify existing sample apps. That being said, you need to find a Zigbee implementation that is open source; otherwise you won't be able to change the behavior of the routing mechanisms.
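Whichever stack you end up with, the packet-delivery-ratio bookkeeping itself is stack-independent. Below is a minimal sketch of the idea, written in Java for readability (real module firmware would typically be C, and names such as `PdrMonitor` are invented for illustration): count packets sent and delivered per neighbor, and flag links whose PDR falls below a threshold.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative per-neighbor packet-delivery-ratio (PDR) bookkeeping. */
public class PdrMonitor {
    private static class Counters { long sent; long delivered; }

    private final Map<Integer, Counters> byNeighbor = new HashMap<>();
    private final double threshold; // links with PDR below this are flagged

    public PdrMonitor(double threshold) { this.threshold = threshold; }

    /** Call when a packet is handed to the given next-hop neighbor. */
    public void onSent(int neighborAddr) {
        byNeighbor.computeIfAbsent(neighborAddr, k -> new Counters()).sent++;
    }

    /** Call when delivery via that neighbor is confirmed (e.g. by an ACK). */
    public void onDelivered(int neighborAddr) {
        byNeighbor.computeIfAbsent(neighborAddr, k -> new Counters()).delivered++;
    }

    /** PDR = delivered / sent; defaults to 1.0 before any traffic is seen. */
    public double pdr(int neighborAddr) {
        Counters c = byNeighbor.get(neighborAddr);
        return (c == null || c.sent == 0) ? 1.0 : (double) c.delivered / c.sent;
    }

    /** A link whose PDR drops below the threshold is treated as suspect. */
    public boolean isSuspect(int neighborAddr) {
        return pdr(neighborAddr) < threshold;
    }
}
```

The hard part, as noted above, is getting hooks into the stack's route discovery so these counters can be fed and the result can influence routing; that is exactly what closed vendor libraries prevent.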
Hope this helps!

Related

Is there an AUTOSAR Composition-SW-Component-Type in adaptive?

I was wondering whether the AUTOSAR Composition-SW-Component-Type will apply to Adaptive, i.e., in Adaptive AUTOSAR, what do we call a hierarchical grouping of classic and adaptive components?
In theory? Absolutely! If you look at the meta-classes used to model the Executable, it is obvious that the CompositionSwComponentType can be used.
In practice? The time hasn't really come yet, because of the approach taken for the interaction of application software with the platform modules, and the way API calls identify the caller towards platform modules.
This is not so much due to hard technical limitations; there are probably ways to make it work (in a proprietary solution). But a standardization of these ways is not available.

How to implement an Anti-Corruption Layer correctly

I'm starting out with the DDD philosophy, and I'd like to implement an integration with a legacy system that we have here. In my research on the internet I found some articles and samples, but I must admit: it is pretty hard to understand how to do that integration correctly.
Before asking this question I did a search here, but the results were not useful to me, so I'd like to know if it is possible to send or show me an implementation sample of an ACL.
Here is what I have:
The Legacy system
The legacy database (which I need to access)
The new application that will be created using the DDD approach
The initial idea is to access that database through NHibernate, creating just the needed mapping classes and the domain entities, and implementing the business rules. According to Eric Evans, this strategy is called a [Bubble Context][1]. I think this strategy will solve my problem, but I need some sample showing how to do it the right way.
Can someone help me?
The ACL is a pattern, not just a piece of code. From what you described, it isn't clear whether you have strong dependencies on the legacy system or just want some independent piece of code built into the current system. Deciding that will tell you whether your ACL should be just a service over the database, or should also wrap some of the legacy system's logic.
The actual pieces that you'd put into the ACL depend heavily on your implementation.
(The original answer included a generic schema diagram of this arrangement, not reproduced here.)
You can find more info in Eric Evans's talk.
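To make the shape of the pattern concrete, here is a minimal sketch in Java (rather than the .NET/NHibernate stack from the question; every class name here is invented for illustration). The ACL is essentially a translator: the new domain model depends only on its own repository interface, and legacy vocabulary is converted in exactly one place.

```java
// Legacy side: the row shape the old database hands us (imagine it
// hydrated by an ORM mapping class). Field names are invented.
class LegacyCustomerRow {
    String cust_nm;
    String st_cd; // cryptic legacy status code, e.g. "A" = active
}

// Data access over the legacy database, kept behind an interface.
interface LegacyCustomerDao {
    LegacyCustomerRow fetchByName(String name);
}

// Domain side: the entity the new DDD model actually works with.
class Customer {
    final String name;
    final boolean active;
    Customer(String name, boolean active) { this.name = name; this.active = active; }
}

// The port the new application depends on; it never sees legacy rows.
interface CustomerRepository {
    Customer findByName(String name);
}

// The anti-corruption layer: translates legacy rows into domain entities.
class LegacyCustomerRepository implements CustomerRepository {
    private final LegacyCustomerDao dao;

    LegacyCustomerRepository(LegacyCustomerDao dao) { this.dao = dao; }

    @Override
    public Customer findByName(String name) {
        LegacyCustomerRow row = dao.fetchByName(name);
        // All translation from legacy vocabulary happens here, in one place.
        return new Customer(row.cust_nm, "A".equals(row.st_cd));
    }
}
```

In the Bubble Context version, the mapping classes behind `LegacyCustomerDao` would be your NHibernate mappings, and the new model grows behind the `CustomerRepository` interface.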

Creating a simple mobile agent system

I am looking to create a simple mobile agent system which will deal with 4 tasks, i.e., 4 different mobile agent jobs: database update, meeting scheduling, network service discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder, etc. My question is: which one should I use? Also, I need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created a method of type agent, but I couldn't pass it through the remote method implementation. I have been reading about TCP and UDP socket programming, and I was thinking maybe it would be better to do it using socket programming, with the server sending datagram packets to multiple clients. In that case, would this still be called an agent?
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 1990s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks; I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks on large networks, such as DB tuning and kernel patching, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE, and I agree with the first answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, choose a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow-pages service: other agents don't have to know which agents are running and which services they supply; they can simply ask the DF.
Also, JADE's ContractNet behaviours help simplify communication.
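As a small illustration of the DF idea, here is a sketch of an agent registering a service with the yellow pages using JADE's standard `DFService` API (the service type and names are made up for the example):

```java
import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;

public class MeetingAgent extends Agent {
    @Override
    protected void setup() {
        // Describe this agent's service and register it with the DF
        // (the platform's yellow pages), so other agents can find it
        // without knowing its name in advance.
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());
        ServiceDescription sd = new ServiceDescription();
        sd.setType("meeting-scheduling");    // arbitrary service-type label
        sd.setName("simple-meeting-scheduler");
        dfd.addServices(sd);
        try {
            DFService.register(this, dfd);
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void takeDown() {
        // Deregister on shutdown so the yellow pages stay clean.
        try {
            DFService.deregister(this);
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```

Other agents can then locate it with `DFService.search(...)`, passing a template `DFAgentDescription` that names the same service type.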

SCORM - RTE reporting mechanism security

I am investigating SCORM compliance as an option for a software project I am involved in. If this is too esoteric for SO, I am sorry - not sure where else to turn.
I am a little confused as to how the SCO (Sharable Content Object) reports a quiz score, for example, to the LMS. From what I can gather from the official documentation, this is to be done using the LMSSetValue function in the RTE API object, which is just a bunch of JavaScript.
This seems wildly insecure to me, as it takes nothing to rewrite the values passed to the LMS this way.
My question is therefore: am I missing something? Are SCOs meant simply not to report such values to the LMS? My impression is that this is the only permitted mode of communication between SCOs and the LMS.
The JavaScript API is the way data is passed from the SCO to the LMS. Are there more secure ways to pass data? Sure. But the implementation is not brand-spanking new, remember. In addition, because of portability constraints, many of the most highly secure ways of passing data are not available to SCORM developers. Portability was the main priority of the standard, not security. There is a community of experts talking about what should replace SCORM. It's called Project Tin Can. And different ways of exchanging data, including cross-domain and server-side, are being discussed there.

flow-based traffic classification for traffic shaping

I'm wondering if there are ways to achieve flow-based traffic shaping with Linux.
Traditional traffic shaping approaches seem to be based on creating classes for specific protocols or types of packets (such as ssh, http, SYN or ACK) that need high throughput.
Here I want to see every TCP connection as a flow characterized by a certain data-rate.
There’ll be
quick flows such as interactive ssh or IRC chat and
slow flows (bulk data) such as scp or http file transfers
Now I'm looking for a way to characterize/classify an incoming packet into one of these classes, so I can run a tc-based traffic shaper on it. Any hints?
Since you mention a dedicated machine I'll assume that you are managing from a network bridge and, as such, have access to the entirety of the packet for the lifetime it is in your system.
First and foremost: throttling at the receiving side of a connection is meaningless when you are speaking of link saturation. By the time you see the packet it has already consumed resources. This is true even if you are a bridge; you can only realistically do anything intelligent on the egress interface.
I don't think you will find an off-the-shelf product that is going to do exactly what you want. You are going to have to modify something like dummynet to be dynamic according to rules you derive during execution, or program a dynamic software router on some existing infrastructure. One I am familiar with is the Click modular router, but there are others. I really don't know how things like tc and ipfw will react to being configured/reconfigured with high frequency; I suspect poorly.
There are things that you should address ahead of time, however: things that are going to make this task difficult regardless of the implementation. For instance:
How do you plan on differentiating between scp bulk and ssh interactive behavior? Will you monitor initial behavior and apply a rule based on that?
You mention HTTP-specific throttling; this implies DPI. Will you be able to support that on this bridge/router? How many classes of application traffic will you support?
How do you plan on handling contention? (You allot each 'bulk' flow 30% of the capacity, but then 10 'bulk' flows try to consume it.)
Will you hard-code the link capacity or measure it? Is it fixed or will it vary?
In general, you can get a fairly rough idea of a 'flow' by just hashing the networking 5-tuple. Once you start dealing with application semantics, however, all bets are off and you need to plow through packet contents to get what you want.
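As a minimal illustration of the 5-tuple idea (sketched in Java and not tied to tc or any particular capture library; the class is hypothetical), a flow key only needs consistent hashing and equality so that packets of the same connection land in the same bucket:

```java
import java.util.Objects;

/** A flow key: hashing the classic 5-tuple groups packets into flows. */
public final class FlowKey {
    final int srcIp, dstIp;     // IPv4 addresses as raw 32-bit ints
    final int srcPort, dstPort;
    final int protocol;         // e.g. 6 = TCP, 17 = UDP

    public FlowKey(int srcIp, int srcPort, int dstIp, int dstPort, int protocol) {
        this.srcIp = srcIp;   this.srcPort = srcPort;
        this.dstIp = dstIp;   this.dstPort = dstPort;
        this.protocol = protocol;
    }

    @Override public int hashCode() {
        return Objects.hash(srcIp, srcPort, dstIp, dstPort, protocol);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof FlowKey)) return false;
        FlowKey k = (FlowKey) o;
        return srcIp == k.srcIp && srcPort == k.srcPort
            && dstIp == k.dstIp && dstPort == k.dstPort
            && protocol == k.protocol;
    }
}
```

A per-`FlowKey` byte counter sampled over a time window then gives the data rate you would use to separate interactive flows from bulk flows.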
If you had a more specific purpose it might render some of these points moot.
