Manage multiple signal speeds in a GNU Radio flow graph - scapy

I am currently working on the Z-Wave protocol.
With my HackRF One and scapy-radio, I am trying to sniff the communications between two devices.
However, devices can transmit at different speeds:
9.6 kbps
40 kbps
100 kbps
As I can only decode communications at 40 kbps, I imagine my graph is unable to handle the other speeds.
Some information about Z-Wave communications:
Frequency (EU): 868.4 MHz
Modulation: GFSK
And my GRC graph:
So my question is: how can I modify the graph to decode and sniff the 9.6 and 100 kbps signals too?

As an easy workaround, I would suggest taking the input stream from the HackRF and connecting it to 3 different decoders, each one with the desired parameters. Then each Packet sink block will publish messages to the same Socket PDU block.
I am not familiar with Z-Wave, but if the 3 different data rates share the same spectrum bandwidth, then there is no more work for you and you are done.
But if they do not, which I believe is true in your case, you need some extra steps.
First of all, you have to sample the time-domain signal at the maximum sampling rate required by Z-Wave. For example, if for the 3 different data rates the spectrum bandwidth is 4, 2 and 1 MHz, you have to sample at 4e6 samples/s. Then you perform sample-rate conversion (SRC), also known as re-sampling, for each of the different streams. So for the second rate you may want to re-sample your input stream of 4e6 samples/s down to 2e6 samples/s.
Then you connect the re-sampled streams to the corresponding decoding chains:
                                       +---------------+
                 +-------------------->| Rest blocks 0 |
                 |                     +---------------+
                 |
+------------+   |   +--------------+  +---------------+
|            |   |   |              |  |               |
|   Source   +---+-->| Resampler 1  +->| Rest blocks 1 |
|            |   |   |              |  |               |
+------------+   |   +--------------+  +---------------+
                 |
                 |   +--------------+  +---------------+
                 |   |              |  |               |
                 +-->| Resampler 2  +->| Rest blocks 2 |
                     |              |  |               |
                     +--------------+  +---------------+
GNU Radio already ships with some resamplers; you can start with the Rational Resampler block.
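For illustration, a minimal top-block sketch of that fan-out (assumptions on my part: an osmosdr source for the HackRF, a 4e6 sample/s input rate, and null sinks standing in for the three decode chains you already have):

#!/usr/bin/env python
# Sketch only: one HackRF source fanned out to three decode chains,
# two of them through rational resamplers. Replace the null sinks
# with your existing GFSK demod / Packet sink / Socket PDU chain.
from gnuradio import gr, blocks, filter
import osmosdr

class multi_rate_rx(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "Z-Wave multi-rate sniffer")

        samp_rate = 4e6                        # highest rate needed
        self.src = osmosdr.source()
        self.src.set_sample_rate(samp_rate)
        self.src.set_center_freq(868.4e6)      # Z-Wave EU frequency

        # 4e6 -> 2e6 and 4e6 -> 1e6 sample/s conversions
        self.resamp_1 = filter.rational_resampler_ccc(interpolation=1,
                                                      decimation=2)
        self.resamp_2 = filter.rational_resampler_ccc(interpolation=1,
                                                      decimation=4)

        # Placeholders for "Rest blocks 0/1/2" in the diagram above.
        self.dec_0 = blocks.null_sink(gr.sizeof_gr_complex)
        self.dec_1 = blocks.null_sink(gr.sizeof_gr_complex)
        self.dec_2 = blocks.null_sink(gr.sizeof_gr_complex)

        self.connect(self.src, self.dec_0)                 # full rate
        self.connect(self.src, self.resamp_1, self.dec_1)  # half rate
        self.connect(self.src, self.resamp_2, self.dec_2)  # quarter rate

if __name__ == '__main__':
    tb = multi_rate_rx()
    tb.run()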

Optimally traverse a DAG with weighted vertices in parallel

There is a graph where vertices represent pieces of code and edges represent dependencies between them. Additionally, each vertex has two numbers: how many threads the corresponding piece of code can use (1, 2, ..., or "as many as there are cores"), and how much time it is estimated to take if it gets that many threads (relative to the others; for example, 1, 0.1 or 10). The idea is to run the pieces of code in parallel while respecting their dependencies, giving them such numbers of threads that the total execution time is smallest.
Is there some existing algorithm which would do that or which I could use as a base?
So far, my thinking is as follows. For example, we have 8 threads total (so NT = 8T) and the following graph.
    +----------------+        +----------------+
  +-+ A: 0.2x, 1T    +----+   | F: 0.1x, 1T    |
  | +---+------------+    |   +---+------------+
  |     |                 |       |
  | +---v------------+    |   +---v------------+
  | | B: 0.1x, 2T    +-+  |   | G: 0.3x, NT    +-+
  | +----------------+ |  |   +----------------+ |
  |                    |  |                      |
  | +----------------+ |  |   +----------------+ |
  +-> C: 0.4x, 1T    | |  +---> H: 0.1x, 1T    | |
    +--+-------------+ |      +--+-------------+ |
       |               |         |               |
  +----+               |         |               |
  | +----------------+ |      +--v-------------+ |
  | | D: 0.1x, 1T    <-+      | J: 1.5x, 4T    <-+
  | +--+-------------+        +-------+--------+
  |    |                              |
  | +--v-------------+                |
  +-> E: 1.0x, 4T    +---------+      |
    +----------------+         |      |
                             +-v------v-------+
                             | I: 0.01x, 1T   |
                             +----------------+
At task I we have 2 dependencies, E and J. As J's dependencies, we have F-G and A-H. For E, A-C and A-B-D. To get to J, we need 0.3x on A-H and 0.4x on F-G, but G needs many threads for that. We could first run A and F in parallel (each with a single thread). Then we would run G with 7 threads and, as A finishes, H with 1 thread. However, there's also the E branch. Ideally, we would like it to be ready 0.5 later than J. In this case, it's quite easy, because the longest path to E once we have already processed A takes 0.4 using one thread, and the other path takes less than that and uses just 2 threads - so we can run these calculations while J is running. But if, say, D took 0.6x, we would probably need to run it in parallel with G as well.
So I think I could start with the sink vertex and balance the weights of subgraphs on which it depends. But given these "N-thread" tasks, it's not particularly clear how. And considering that the x-numbers are just estimates, it would be good if it could make adjustments if particular tasks took more or less time than anticipated.
You can model this problem as a job shop scheduling problem (a flexible job shop problem in particular, where the machines are processors and the jobs are slices of the programs to be run).
First, you have to modify your DAG a bit, in order to transform it into another DAG which is the disjunctive graph representing your problem.
This transformation is very simple. For any node (i, t, nb_t) representing the job i that needs t seconds to be performed with 1 thread and that can be parallelized across nb_t threads, do the following:
Replace (i, t, nb_t) by nb_t vertices (i_1, t/nb_t), ..., (i_nb_t, t/nb_t). For each incoming/outgoing edge of the node i, create an incoming/outgoing edge from/to all the newly created nodes. Basically, we just split each job that can be parallelized into smaller jobs that can be handled by several processors (machines) simultaneously, as in the sketch below.
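In code, the splitting step might look like this (a sketch; representing jobs as a dict of (duration, thread count) pairs is my assumption):

# Split each parallelizable job into nb_t unit jobs of duration t/nb_t,
# duplicating the incoming/outgoing edges onto every piece.
def split_jobs(jobs, edges):
    durations, new_edges = {}, set()
    pieces = {}                      # job -> names of its split pieces
    for job, (t, nb_t) in jobs.items():
        pieces[job] = ["%s_%d" % (job, k) for k in range(1, nb_t + 1)]
        for name in pieces[job]:
            durations[name] = t / float(nb_t)
    for u, v in edges:
        for pu in pieces[u]:
            for pv in pieces[v]:
                new_edges.add((pu, pv))
    return durations, new_edges

# Example: B (0.1x, 2 threads) feeding E (1.0x, 4 threads).
print(split_jobs({"B": (0.1, 2), "E": (1.0, 4)}, {("B", "E")}))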
You then have your disjunctive graph, which is the input to the job shop problem.
Then, all you need to do is solve this well-known problem; there are several options available.
I would advise using a MILP solver, but from the small search I just did, it seems like many meta-heuristics can tackle the problem too (simulated annealing, genetic programming, ...).
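To illustrate the MILP route, here is a minimal time-indexed sketch using PuLP (my choice of library; any MILP solver works). For simplicity it schedules each task whole, with a fixed thread demand, treating the thread pool as a renewable resource; swap in the split jobs from above for the flexible version:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

dur = {"A": 2, "B": 1, "C": 4}      # durations, in unit time steps
thr = {"A": 1, "B": 2, "C": 1}      # threads each task occupies
edges = [("A", "B"), ("A", "C")]    # dependencies: A before B and C
NT = 8                              # total threads available
H = sum(dur.values())               # horizon: worst case, fully sequential

prob = LpProblem("dag_schedule", LpMinimize)
# x[i][t] == 1 iff task i starts at time step t
x = {i: [LpVariable("x_%s_%d" % (i, t), cat="Binary")
         for t in range(H - dur[i] + 1)]
     for i in dur}
start = {i: lpSum(t * x[i][t] for t in range(len(x[i]))) for i in dur}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan                    # objective: minimize total time

for i in dur:
    prob += lpSum(x[i]) == 1        # every task starts exactly once
    prob += makespan >= start[i] + dur[i]
for u, v in edges:                  # precedence: v starts after u ends
    prob += start[v] >= start[u] + dur[u]
for t in range(H):                  # thread capacity at every time step
    prob += lpSum(thr[i] * x[i][s]
                  for i in dur
                  for s in range(max(0, t - dur[i] + 1),
                                 min(t + 1, len(x[i])))) <= NT

prob.solve()
for i in sorted(dur):
    print(i, "starts at",
          next(t for t in range(len(x[i])) if x[i][t].value() >= 0.5))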

How to show “if” condition on a use case description?

When we write a use case table (id, description, actor, precondition, postcondition, basic flow, alternate flow), in the basic flow we show plain steps of interaction between the actors and the system. I wonder how to show a condition in the use case basic flow. AFAIK, the basic flow contains plain simple steps, one by one, for the use case. But how can I show conditions without pseudocode? Is pseudocode allowed in the basic flow of a UML use case description?
What would be the steps for the sequence below?
For the above diagram, would the table below be right?
-------------------------------------------------------------
| ID             | UC01                                      |
-------------------------------------------------------------
| Description    | do something                              |
-------------------------------------------------------------
| Precondition   | --                                        |
-------------------------------------------------------------
| Postcondition  | --                                        |
-------------------------------------------------------------
| Basic flow     | 1. actor requests system to do something |
|                | 2. if X = true                            |
|                |    2.1 system does step 1                 |
|                |    else                                   |
|                |    2.2 system does step 2                 |
|                | 3. system returns results to actor        |
-------------------------------------------------------------
| Alternate flow | --                                        |
-------------------------------------------------------------
In tools like Visual Paradigm you can model the flow of events with if/else and loop conditions, and specify the steps as user input and system response.
Use Alternate and Exceptional flows to document such behavior.
do something and step 1 are clearly at different levels of abstraction, so it is better to put them into separate use cases.
Actor is not the best name for the actor's role; let's say it's a User.
I had to change Step 1 to Calculation 1 to avoid confusion.
Example
-----------------------------------------------------------------------
| ID               | UC01                                             |
-----------------------------------------------------------------------
| Level            | User goal, black box                             |
-----------------------------------------------------------------------
| Basic flow       | 1. User requests Robot System to do something.   |
|                  | 2. Robot System performs UC02.                   |
|                  | 3. Robot System returns results to User.         |
-----------------------------------------------------------------------

-----------------------------------------------------------------------
| ID               | UC02                                             |
-----------------------------------------------------------------------
| Level            | SubFunction, white box                           |
-----------------------------------------------------------------------
| Basic flow       | 1. Robot System validates that X is true.        |
|                  | 2. Robot System does Calculation 1.              |
-----------------------------------------------------------------------
| Alternate flow 1 | Trigger: Validation fails at step 1, X is false. |
|                  | 2a. Robot System does Calculation 2.             |
-----------------------------------------------------------------------

Is it possible to tell Cassandra to run a query only on the local node

I've got two nodes that are fully replicated. When I run a query on a table that contains 30 rows, cqlsh trace seems to indicate it is fetching some rows from one server and some rows from the other server.
So even though all the rows are available on both nodes, the query takes 250ms+ rather than 1ms for other queries.
I've already got the consistency level set to "one" at the protocol level; what else do you have to do to make it use only one node for the query?
select * from organisation:
activity | timestamp | source | source_elapsed
-------------------------------------------------------------------------------------------------+--------------+--------------+----------------
execute_cql3_query | 04:21:03,641 | 10.1.0.84 | 0
Parsing select * from organisation LIMIT 10000; | 04:21:03,641 | 10.1.0.84 | 68
Preparing statement | 04:21:03,641 | 10.1.0.84 | 174
Determining replicas to query | 04:21:03,642 | 10.1.0.84 | 307
Enqueuing request to /10.1.0.85 | 04:21:03,642 | 10.1.0.84 | 1034
Sending message to /10.1.0.85 | 04:21:03,643 | 10.1.0.84 | 1402
Message received from /10.1.0.84 | 04:21:03,644 | 10.1.0.85 | 47
Executing seq scan across 0 sstables for [min(-9223372036854775808), min(-9223372036854775808)] | 04:21:03,644 | 10.1.0.85 | 461
Read 1 live and 0 tombstoned cells | 04:21:03,644 | 10.1.0.85 | 560
Read 1 live and 0 tombstoned cells | 04:21:03,644 | 10.1.0.85 | 611
... etc ...
It turns out that there was a bug in Cassandra versions 2.0.5-2.0.9 that would make Cassandra more likely to request data on two nodes when it only needed to talk to one.
Upgrading to 2.0.10 or greater resolves this problem.
Refer: CASSANDRA-7535
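If you also want to pin reads to a single node from the application side, here is a sketch using the DataStax Python driver (my assumption, as is the keyspace name; this steers coordinator choice and consistency, but does not fix the range-scan bug itself):

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.policies import WhiteListRoundRobinPolicy
from cassandra.query import SimpleStatement

# Pin both the contact point and the load balancing policy to one host,
# so that host coordinates every query.
cluster = Cluster(
    contact_points=['10.1.0.84'],
    load_balancing_policy=WhiteListRoundRobinPolicy(['10.1.0.84']),
)
session = cluster.connect('mykeyspace')   # hypothetical keyspace name

query = SimpleStatement('SELECT * FROM organisation',
                        consistency_level=ConsistencyLevel.ONE)
for row in session.execute(query):
    print(row)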

How to code two-way duplex streams in NodeJS

In the last few versions of NodeJS (v0.10.X as of writing), the Streams API has undergone a welcome redesign, and I would like to start using it now.
I want to wrap both the input and output of a socket with an object which implements a protocol.
The so-called Duplex interface seems to just be any stream which is readable and writable (like a socket).
It is not clear whether Duplexes should be like A or B, or whether it doesn't matter.
   +---+        +---+
-->| A |-->     |   |-->
   +---+        | B |
                |   |<--
                +---+
What is the correct code-structure/interface for an object which has two writables and two readables?
+--------+   +----------+   +----
|       r|-->|w        r|-->|w
| socket |   | protocol |   | rest of app
|       w|<--|r        w|<--|r
+--------+   +----------+   +----
The problem with the diagram above is that the protocol object needs two separate read methods and two write methods.
Off the top of my head, I could make the protocol produce 'left' and 'right' duplex objects, or 'in' and 'out' duplex objects (to slice it a different way).
Are either of these the preferred way, or is there a better solution?
              +---------------+
              |      app      |
              +---------------+
                 ^         |
                 |         V
              +-----+   +-----+
              |     |   |     |
+-------------|     |---|     |--+
|  protocol   | .up |   |.down|  |
+-------------|     |---|     |--+
              |     |   |     |
              +-----+   +-----+
                 ^         |
                 |         V
              +---------------+
              |    socket     |
              +---------------+
My solution was to make a Protocol class, which created an Up Transform and a Down Transform.
The Protocol constructor passes a reference (to itself) when constructing the Up and Down transforms. The _transform method in each of the up and down transforms can then call push on itself, on the other Transform, or both as required. Common state can be kept in the Protocol object.
A duplex stream is like your diagram B, at least for the user. A more complete view of a stream would include the producer (source) along with the consumer (user). See my previous answer. Try not to think of both read and write from the consumer's point of view.
What you are doing is building a thin protocol layer over the socket, so your design is correct:
-------+      +----------+      +------
      r|----->|         r|----->|
socket |      | protocol |      | rest of app
      w|<-----|         w|<-----|
-------+      +----------+      +------
You can use a duplex or a transform stream for the protocol part.
+---------+---------+---------+      +------------------+
| _write->| process |         |r     |   Transform ->   |r
|-----------Duplex------------|      +------------------+
|         | process | <-_read |w     |   <- Transform   |w
+---------+---------+---------+      +------------------+
Here, process is your protocol-related processing of the incoming/outgoing data, done with the internal _read and _write. Alternatively, you can use transform streams: you would pipe the protocol to the socket and the socket to the protocol.

How is a memory barrier used in the Linux kernel?

There is an illustration in the kernel source, Documentation/memory-barriers.txt, like this:
CPU 1                   CPU 2
======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
STORE A = 1
STORE B = 2
<write barrier>
STORE C = &B            LOAD X
STORE D = 4             LOAD C (gets &B)
                        LOAD *C (reads B)
Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:
+-------+       :      :                :       :
|       |       +------+                +-------+     | Sequence of update
|       |------>| B=2  |-----       --->| Y->8  |     | of perception on
|       |  :    +------+     \          +-------+     | CPU 2
| CPU 1 |  :    | A=1  |      \     --->| C->&Y |     V
|       |       +------+       |        +-------+
|       |   wwwwwwwwwwwwwwww   |        :       :
|       |       +------+       |        :       :
|       |  :    | C=&B |---    |        :       :       +-------+
|       |  :    +------+   \   |        +-------+       |       |
|       |------>| D=4  |    ----------->| C->&B |------>|       |
|       |       +------+       |        +-------+       |       |
+-------+       :      :       |        :       :       |       |
                               |        :       :       |       |
                               |        :       :       | CPU 2 |
                               |        +-------+       |       |
    Apparently incorrect --->  |        | B->7  |------>|       |
    perception of B (!)        |        +-------+       |       |
                               |        :       :       |       |
                               |        +-------+       |       |
    The load of X holds --->    \       | X->9  |------>|       |
    up the maintenance           \      +-------+       |       |
    of coherence of B             ----->| B->2  |       +-------+
                                        +-------+
                                        :       :
I don't understand: since we have a write barrier, any store before it must have taken effect by the time C = &B is executed, which means B should equal 2 by then. For CPU 2, B should be 2 when it gets the value of C, which is &B, so why would it perceive B as 7? I am really confused.
The key missing point is the mistaken assumption that for the sequence:
LOAD C (gets &B)
LOAD *C (reads B)
the first load has to precede the second. A weakly ordered architecture can act "as if" the following happened:
LOAD B (reads B)
LOAD C (reads &B)
if (C != &B)
    LOAD *C
else
    Congratulate self on having already loaded *C
The speculative "LOAD B" can happen, for example, because B was on the same cache line as some other variable of earlier interest or hardware prefetching grabbed it.
From the section of the document titled "WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?":
There is no guarantee that any of the memory accesses specified before a
memory barrier will be complete by the completion of a memory barrier
instruction; the barrier can be considered to draw a line in that CPU's
access queue that accesses of the appropriate type may not cross.
and
There is no guarantee that a CPU will see the correct order of effects
from a second CPU's accesses, even if the second CPU uses a memory
barrier, unless the first CPU also uses a matching memory barrier (see
the subsection on "SMP Barrier Pairing").
What memory barriers do (in a very simplified way, of course) is make sure neither the compiler nor in-CPU hardware perform any clever attempts at reordering load (or store) operations across a barrier, and that the CPU correctly perceives changes to the memory made by other parts of the system. This is necessary when the loads (or stores) carry additional meaning, like locking a lock before accessing whatever it is we're locking. In this case, letting the compiler/CPU make the accesses more efficient by reordering them is hazardous to the correct operation of our program.
When reading this document we need to keep two things in mind:
That a load means transmitting a value from memory (or cache) to a CPU register.
That unless the CPUs share a cache (or have no cache at all), it is possible for their cache systems to be momentarily out of sync.
Fact #2 is one of the reasons why one CPU can perceive data differently from another. Cache systems are designed to provide good performance and coherence in the general case, but they might need some help in specific cases like the ones illustrated in the document.
In general, as the document suggests, barriers in systems involving more than one CPU should be paired to force the system to synchronize the perception of both (or all participating) CPUs. Picture a situation in which one CPU completes loads or stores and the main memory is updated, but the new data has yet to be transmitted to the second CPU's cache, resulting in a lack of coherence across the CPUs.
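For instance, barrier pairing looks roughly like this sketch in the notation of memory-barriers.txt: if CPU 2's load of B sees the value 2 stored by CPU 1, the read barrier guarantees that its subsequent load of A sees 1.

CPU 1                   CPU 2
======================= =======================
STORE A = 1
<write barrier>
STORE B = 2
                        LOAD B (sees 2)
                        <read barrier>
                        LOAD A (must see 1)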
I hope this helps. I'd suggest reading memory-barriers.txt again with this in mind and particularly the section titled "THE EFFECTS OF THE CPU CACHE".
