Sending 2 CCRs at the same time using the Seagull tool - Diameter protocol

I am trying to use the Seagull tool to simulate Diameter messages.
I want a flow like this: after the Diameter server sends CCAi, the Seagull client should send two CCRu messages.
CER
CEA
CCRi
CCAi
CCRu
CCRu
Seagull                        Ro Server
(Ro Client)
start
    |                          |
    |---- CCR (INITIAL) ------>|
    |                          |
    |<--- CCA -----------------|
    |                          |
    |---- CCR (UPDATE) ------->|
    |                          |
    |---- CCR (UPDATE) ------->|
    |                          |
    |<--- CCA -----------------|
    |                          |
    |---- CCR (TERMINATE) ---->|
    |                          |
    |<--- CCA -----------------|
    |                          |
end

How to perform a series of steps in a single thread, with an async flow in spring-integration?

I currently have a spring-integration (v4.3.24) flow that looks like the following:
           |
           | list of
           | filepaths
      +----v---+
      |splitter|
      +----+---+
           | filepath
           |
+----------v----------+
|sftp-outbound-gateway|
|        "get"        |
+----------+----------+
           | file
+---------------------+
|     +----v----+     |
|     |decryptor|     |
|     +----+----+     |
|          |          |
|   +-----v------+    |  set of transformers
|   |decompressor|    |  (with routers before them
|   +-----+------+    |  because some steps are optional)
|          |          |  that process the file;
|       +--v--+       |  call this "FileProcessor"
|       | ... |       |
|       +--+--+       |
+---------------------+
           |
      +----v----+
      |save file|
      | to disk |
      +----+----+
           |
All of the channels above are DirectChannels. Yup, I know this is a poor structure. This worked fine for small numbers of files, but now I have to deal with thousands of files that need to go through the same flow, and benchmarks show this takes about a day to finish processing. So I'm planning to introduce some parallel processing into this flow. I want to modify my flow to achieve something like this:
            |
            |
 +----------v----------+
 |sftp-outbound-gateway|
 |       "mget"        |
 +----------+----------+
            | list of files
            |
       +----v---+
       |splitter|
       +----+---+
 one thread |          one thread ...
 +----------+-----------+----------+--+--+--+--+
 | file                 | file     |  |  |  |  |
+---------------------+  +---------------------+
|     +----v----+     |  |     +----v----+     |
|     |decryptor|     |  |     |decryptor|     |
|     +----+----+     |  |     +----+----+     |
|          |          |  |          |          |
|   +-----v------+    |  |   +-----v------+    |  ...
|   |decompressor|    |  |   |decompressor|    |
|   +-----+------+    |  |   +-----+------+    |
|          |          |  |          |          |
|       +--v--+       |  |       +--v--+       |
|       | ... |       |  |       | ... |       |
|       +--+--+       |  |       +--+--+       |
+---------------------+  +---------------------+
           |                        |
      +----v----+              +----v----+
      |save file|              |save file|
      | to disk |              | to disk |
      +----+----+              +----+----+
           |                        |
           |                        |
For parallel processing, I output the files from the splitter onto an ExecutorChannel backed by a ThreadPoolTaskExecutor.
Some of the questions that I have:
1. I want all of the "FileProcessor" steps for one file to happen on the same thread, while multiple files are processed in parallel. How can I achieve this?
2. I saw from this answer that an ExecutorChannel-to-MessageHandlerChain flow would offer such functionality. But some of the steps inside "FileProcessor" are optional (using selector-expression with routers to skip some of the steps), which rules out using a MessageHandlerChain. I could rig up a couple of MessageHandlerChains with Filters inside, but that more or less becomes the approach mentioned in #2.
3. If #1 cannot be achieved, will changing all of the channel types starting from the splitter from DirectChannel to ExecutorChannel help introduce some parallelism? If yes, should I create a new TaskExecutor for each channel, or can I reuse one TaskExecutor bean for all channels (I cannot set scope="prototype" on a TaskExecutor bean)?
4. In your opinion, which approach (#1 or #2) is better? Why?
5. If I perform global error handling, like the approach mentioned here, will the other files continue to be processed even if one file errors out?
It will work as you need by using an ExecutorChannel as the input to the decrypter and leaving all the rest as direct channels; the remaining flow does not have to be a chain, since each component will run on one of the executor's threads.
You will need to make sure all your downstream components are thread-safe.
Error handling should remain as is; each sub-flow is independent.

Cucumber: nested scenario outline cycles

Scenario to automate:
Given <precondition> was fulfilled
And <user> is authorized
When user requests <endpoint>
Then user should receive <code> response
Test data matrix:
| precondition       | endpoint          | user1 | user2     | ...
|                    | /users            | OK    | Not Found |
|                    | /roles            | OK    | OK        |
|                    | /create_user      | OK    | OK        |
| object user exists | /update_user      | OK    | OK        |
| object user exists | /delete_user      | OK    | OK        |
|                    | /create_data_role | OK    | Not Found |
| data role exists   | /update_data_role | OK    | Not Found |
....
There are around 20 users with different role combinations and around 20 endpoints.
I need to verify each endpoint for each user, so it should be a nested cycle.
How do I do it?
Don't do this in Cucumber, for the following reasons:
1) You get no benefit from putting all these routes and conditions in Gherkin. Nobody can read them and make sense of them, especially if you are trying something combinatorial.
2) Cuke scenarios run slowly, and you want to run lots of them; you could dramatically reduce your run time by writing a fast unit test instead.
3) If you write this test in code, you can write it much more elegantly than you can in Gherkin.
4) Dealing with errors is painful (as you've already pointed out).
You are using the wrong tool for this particular job; use something else.
Yet I came up with this option, but it doesn't follow the Gherkin convention because the When and Then steps are jammed into one:
1. Preconditions moved to a @Before hook
2. Scenario
Given <user> is authorized
Then <user> requests functionality appropriate response code should be received
| ENDPOINT     | USER1 | USER2 |
| /users       | 200   | 404   |
| /create_user | 200   | 404   |
| /update_user | 200   | 404   |
Examples:
| username |
| USER1 |
| USER2 |
It's also inconvenient because when tests fail it takes time to identify the faulty endpoint(s).

How to code two-way duplex streams in NodeJS

In the latest few versions of NodeJS (v0.10.x as of writing), the Streams API has undergone a welcome redesign, and I would like to start using it now.
I want to wrap both the input and output of a socket with an object which implements a protocol.
The so-called Duplex interface seems to just be any stream which is both readable and writable (like a socket).
It is not clear whether Duplexes should be like A or B below, or whether it doesn't matter.
     +---+          +---+
  -->| A |-->       |   |-->
     +---+          | B |
                    |   |<--
                    +---+
What is the correct code structure/interface for an object which has two writables and two readables?
+--------+    +----------+    +------------
|       r|--->|w        r|--->|w
| socket |    | protocol |    | rest of app
|       w|<---|r        w|<---|r
+--------+    +----------+    +------------
The problem with the diagram above is that the protocol object needs two separate read methods and two write methods.
Off the top of my head, I could make the protocol produce 'left' and 'right' duplex objects, or 'in' and 'out' duplex objects (to slice it a different way).
Is either of these the preferred way, or is there a better solution?
          |      app      |
          +---------------+
              ^       |
              |       v
           +-----+ +-----+
           |     | |     |
+----------|     |-|     |-+
| protocol | .up | |.down| |
+----------|     |-|     |-+
           |     | |     |
           +-----+ +-----+
              ^       |
              |       v
          +---------------+
          |    socket     |
My solution was to make a Protocol class, which created an Up Transform and a Down Transform.
The Protocol constructor passes a reference (to itself) when constructing the Up and Down transforms. The _transform method in each of the up and down transforms can then call push on itself, on the other Transform, or both as required. Common state can be kept in the Protocol object.
A duplex stream is like your diagram B, at least from the user's point of view. A more complete view of a stream would include the producer (source) along with the consumer (user). See my previous answer. Try not to think of both read/write from a consumer point of view.
What you are doing is building a thin protocol layer over the socket, so your design is correct:
-------+      +----------+      +------
      r|----->|r        r|----->|
socket |      | protocol |      | rest of app
      w|<-----|w        w|<-----|
-------+      +----------+      +------
You can use a Duplex or a Transform stream for the protocol part.
+---------+--------+---------+     +------------------+
| _write->|        |         |r    | Transform ->     |r
|-----------Duplex-----------|     +------------------+
|         |        | <-_read |w    | <- Transform     |w
+---------+--------+---------+     +------------------+
Here, "process" means your protocol-related processing of incoming/outgoing data using the internal _read and _write. Alternatively, you can use Transform streams: you would pipe the protocol to the socket and the socket to the protocol.

i2c: half amplitude on SDA

I am trying to detect an RTC device on an I2C bus with the i2cdetect utility from the i2c-tools package. As I cannot see anything when scanning the I2C bus, I used an oscilloscope, and on the 9th rising edge of SCL (the ACK bit) I get half amplitude on the SDA signal.
Further details:
What voltage are you measuring normally, and what in the unusual case?
Normal voltage: 0 V for a logical 0, 3.3 V for a logical 1. In the unusual case I measure 1.4 V (almost half the voltage of a logical 1 in the normal case).
Do you have a pullup resistor and open collector drivers the way you should?
Yes, the SDA and SCL lines are pulled up with 4k7 resistors, and the RTC device is configured as open drain (CMOS).
Do both the master and slave operate at the same voltage?
Yes, at 3.3V
@Martin Thompson: Thanks. So here is the schematic of my I2C bus (actually, only an RTC device is connected to it).
  3.3V                   3.3V    3.3V            3.3V
  ____                   ____    ____            ____
    |                      |       |               |
    |                    +--+    +--+              |
    |                4k7 |  |    |  | 4k7          |
+-----------------+      |  |    |  |      +--------------+
|                 |      +--+    +--+      |              |
|  FPGA Cyclone 4 |        |       |       |  Real time   |
|     (Altera)    |        |       |       |    clock     |
|            GPIO |---SDA--+-------|-------|  ST m41t83   |
|                 |                |       |              |
|                 |---SCL----------+-------|              |
|                 |                        |              |
|                 |                        |              |
+-----------------+                        +--------------+
    |                                          |
    |                                          |
  _____                                      ______
   0V                                          0V
and this is the screenshot of the oscilloscope, which picks up the SDA and SCL signals:
SDA
________ ______________ _________________________________ __ ___________
| | | | | | | | |
| | | | | | | | |
| | | | | _____| | | |
| | | | | | | | |
|______| |_______| |________________________________| |_____| |_______________
SCL
____________ ___ ___ ___ ___ ___ ___ ___ ___ ___ _______________________________
| | | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | | |
|___| |___| |___| |___| |_______________________________| |___| |___| |___| |___| |___|
These signals are obtained when I send a request to my RTC device at address 0x68 with the i2cdetect utility (i2cdetect 0 0x68 0x68) from a Linux shell.
By the way, excuse the poor ASCII art; as I am new on the forum I cannot post images. I hope it is understandable ;-)
Clarification about the "screenshot": the SCL and SDA amplitude is about 3.5 V, and in the unusual case (on the 9th SCL rising edge) it is 1.4 V.
I had the same issue, and it was related to the SDA/SCL IO configuration on the master (open drain must be used instead of push-pull).

Show object creation (from DAL) in UML sequence chart

I have 3 classes: Controller, DAL and Entity. The Controller calls the DAL requesting an Entity. The DAL retrieves the entity data from the DB and creates a new Entity instance, which is then returned to the Controller. How do I show this on a UML sequence chart (no need to show the DB)?
2nd question: how should we share UML diagrams on SO? :)
Thanks in advance
Controller         DAL                DB
    |               |                 |
    |  get entity   |                 |
    |-------------->| get entity data |
    |               |---------------->|
    |               |<- - - - - - - - |
    |               |                 |
    |               |--               |
    |               |  | create entity|
    |               |<-               |
    |<- - - - - - - |                 |
Note: "create entity" is a self-message, so it starts from the DAL's lifeline and goes back into the DAL's lifeline; I just can't draw it better with characters. Forward messages are continuous lines, reply messages are dashed lines.
EDIT: reflecting on the comment, you can also show the Entity's lifeline if it's important.
Controller         DAL                DB
    |               |                 |
    |  get entity   |                 |
    |-------------->| get entity data |
    |               |---------------->|
    |               |<- - - - - - - - |
    |               |                 |
    |    entity     |---->Entity      |
    |<- - - - - - - |       |         |
    |               |       |         |
It's useful if you want to show other calls to Entity as well.
