CoAP blockwise transfer: How to map subsequent blocks to the initial block/request

Let's say I have a client that wants to send two large requests to the server (at the same time).
Let's say the first payload is "ABCD" and the second payload is "WXYZ".
First block of first request has messageID=1 and token=0x1 with payload "AB",
First block of second request has messageID=2 and token=0x2 with payload "WX",
Second block of first request has messageID=3 and token=0x3 with payload "CD",
Second block of second request has messageID=4 and token=0x4 with payload "YZ".
You can see where I'm going with this. If messageID and token are different for each request, and they don't follow in consecutive order, how is the server supposed to concatenate the correct blocks?
Here's a sequence diagram:
CLIENT                                                       SERVER
  |                                                             |
  | CON [MID=1,TOK=1], POST, /foo, 1:0/1/128, "AB" ------>      |
  |                                                             |
  | <------ ACK [MID=1,TOK=1], 2.31 Continue, 1:0/1/128         |
  |                                                             |
  | CON [MID=2,TOK=2], POST, /foo, 1:0/1/128, "WX" ------>      |
  |                                                             |
  | <------ ACK [MID=2,TOK=2], 2.31 Continue, 1:0/1/128         |
  |                                                             |
  | CON [MID=3,TOK=3], POST, /foo, 1:1/0/128, "CD" ------>      |
  |                                                             |
  | <------ ACK [MID=3,TOK=3], 2.01 Created, 1:1/0/128          |
  |                                                             |
  | CON [MID=4,TOK=4], POST, /foo, 1:1/0/128, "YZ" ------>      |
  |                                                             |
  | <------ ACK [MID=4,TOK=4], 2.01 Created, 1:1/0/128          |
The problem occurs at message 3: the server now has two incomplete payloads. How can it reliably map the third request to the correct payload? How does it know the payload is supposed to be "ABCD" rather than "WXCD"?
The specification for blockwise transfer only states the following:
As a general comment on tokens, there is no other mention of tokens
in this document, as block-wise transfers handle tokens like any
other CoAP exchange. As usual the client is free to choose tokens
for each exchange as it likes.

You are right; in fact, the block-wise spec highlights this and proposes a workaround (for the Block2 option only):
The Block2 option provides no way for a single endpoint to perform
multiple concurrently proceeding block-wise response payload transfer
(e.g., GET) operations to the same resource. This is rarely a
requirement, but as a workaround, a client may vary the cache key
(e.g., by using one of several URIs accessing resources with the same
semantics, or by varying a proxy-safe elective option).
and:
The Block1 option provides no way for a single endpoint to perform
multiple concurrently proceeding block-wise request payload transfer
(e.g., PUT or POST) operations to the same resource. Starting a new
block-wise sequence of requests to the same resource (before an old
sequence from the same endpoint was finished) simply overwrites the
context the server may still be keeping. (This is probably exactly
what one wants in this case - the client may simply have restarted
and lost its knowledge of the previous sequence.)
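In other words, the limitation is per resource: the context the server keeps for a Block1 transfer is tied to the target resource (and the client endpoint), so nothing stops the client from running two Block1 transfers concurrently as long as they address different resources. A sketch of how your example could look if the second payload were posted to a second, made-up resource /bar instead of /foo:

CLIENT                                                       SERVER
  |                                                             |
  | CON [MID=1,TOK=1], POST, /foo, 1:0/1/128, "AB" ------>      |
  | <------ ACK [MID=1,TOK=1], 2.31 Continue, 1:0/1/128         |
  |                                                             |
  | CON [MID=2,TOK=2], POST, /bar, 1:0/1/128, "WX" ------>      |
  | <------ ACK [MID=2,TOK=2], 2.31 Continue, 1:0/1/128         |
  |                                                             |
  | CON [MID=3,TOK=3], POST, /foo, 1:1/0/128, "CD" ------>      |
  | <------ ACK [MID=3,TOK=3], 2.01 Created, 1:1/0/128          |
  |                                                             |
  | CON [MID=4,TOK=4], POST, /bar, 1:1/0/128, "YZ" ------>      |
  | <------ ACK [MID=4,TOK=4], 2.01 Created, 1:1/0/128          |

Now block 1:1/0/128 with "CD" can only continue the /foo transfer and "YZ" can only continue the /bar transfer, so the server reassembles "ABCD" and "WXYZ" without ever looking at the message IDs or tokens. If both payloads really must go to the same resource, the transfers have to be serialized, since the spec offers no Block1 equivalent of the Block2 workaround.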

Related

Transcribing Splunk's "transaction" Command into Azure Log Analytics / Azure Data Analytics / Kusto

We're using AKS and have our container logs writing to Log Analytics. We have an application that emits several print statements in the container log per request, and we'd like to group all of those events/log lines into aggregate events, one event per incoming request, so it's easier for us to find lines of interest. So, for example, if the request started with the line "GET /my/app" and then later the application printed something about an access check, we want to be able to search through all the log lines for that request with something like | where LogEntry contains "GET /my/app" and LogEntry contains "access_check".
I'm used to queries with Splunk. Over there, this type of inquiry would be a cinch to handle with the transaction command.
But, with Log Analytics, it seems like multiple commands are needed to pull this off. Seems like I need to use extend with row_window_session in order to give all the related log lines a common timestamp, then summarize with make_list to group the lines of log output together into a JSON blob, then finally parse_json and strcat_array to assemble the lines into a newline-separated string.
Something like this:
ContainerLog
| sort by TimeGenerated asc
| extend RequestStarted= row_window_session(TimeGenerated, 30s, 2s, ContainerID != prev(ContainerID))
| summarize logLines = make_list(LogEntry) by RequestStarted
| extend parsedLogLines = strcat_array(parse_json(logLines), "\n")
| where parsedLogLines contains "GET /my/app" and parsedLogLines contains "access_check"
| project Timestamp=RequestStarted, LogEntry=parsedLogLines
Is there a better/faster/more straightforward way to be able to group multiple lines for the same request together into one event and then perform a search across the contents of that event?
After reading your question: no, there is no easy way to do that in Azure Log Analytics.
If the logs are in this format, you will need to do some extra work to meet your requirement.

How to describe a scenario in Gherkin that retrieves an Access Token in a Given clause

My question is conceptual rather than technical. I'd like to describe a good scenario in a Cucumber feature file where, for each row of my data table, I need a new access token from the Identity Provider.
I.e.:
Scenario:
Given <Code Authorization>
And <Access Token>
And The client has the following information
| email | FirstName | Phone |
| xpto# | Richard | 343242|
When the client via Post /xpto
Then The API responds with a JSON file
| code | response |
| 200 | xpto |
I'll use a data table for this kind of approach. However, I cannot use a static access token because it will expire. I need to get a new one every time my tests run, but the token itself is not what I am testing. The token is just data that I need in order to test my scenario.
Is it OK to call a REST endpoint in a Given step? If I do this, am I mixing up the objective of my scenarios?
Any thoughts are welcome, not just off the top of your head but by the book. :-)
Kind Regards,
It seems that you need the token in order to set up the scenario. In that case it is fine to have that in a Given step. You can perform REST or other calls in the step definition for that Given step. For example, it may look something like the below. You can change the wording as you like, but try to word it in a manner that shows the initial state of the application.
Given I have a token for this scenario
And The client has the following information
| email | FirstName | Phone |
|xpto# | Richard | 343242|
...
...
Given steps are meant to establish a given state, which is considered best practice in BDD. You can find this information in the official BDD docs here.
Also, if you want to read more about the purpose and structure of Given, When, and Then, be sure to have a look here.
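As a rough illustration, the step definition behind "Given I have a token for this scenario" could call the Identity Provider directly. Everything below (the endpoint URL, the credentials, the TokenSteps class name) is made up for the sketch; adjust the @Given import to whatever Cucumber version you use:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import io.cucumber.java.en.Given;

public class TokenSteps {

    private String accessToken;

    @Given("I have a token for this scenario")
    public void iHaveATokenForThisScenario() throws Exception {
        // Placeholder identity-provider endpoint and credentials.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://idp.example.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=client_credentials&client_id=my-client&client_secret=my-secret"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A real step would parse the JSON body and pull out access_token;
        // the raw body is kept here only to keep the sketch short.
        accessToken = response.body();
    }
}

The token then lives in the step class (or, with spring-cucumber, in a shared scenario-scoped bean) and the later When/Then steps can use it without the scenario text ever mentioning how it was obtained.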

Getting duplicate records when joining in KQL

We have a requirement to get the status of a Windows service when it is started and stopped. To do that I have written a query, but I am facing an issue when joining 2 tables to get the output.
I have tried using inner and left outer joins but am still getting duplicates.
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "running" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend startedtime = TimeGenerated
| join (
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "stopped" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend stoppedtime = TimeGenerated
) on Computer
| extend downtime = startedtime - stoppedtime
| project Computer, Windows_Service_Name, stoppedtime, startedtime, downtime
| top 10 by Windows_Service_Name desc
We want to get the number of times that the service started and stopped. If the service restarted multiple times in a day, we get duplicate timings in startedtime when joining. Please have a look at this link (https://ibb.co/JzqxjC0).
I am not sure I fully understand what is going on, since I don't have access to the data, but I can see you are using the default join flavor.
The default is inner unique:
The inner-join function is like the standard inner-join from the SQL world. An output record is produced whenever a record on the left side has the same join key as the record on the right side.
This means a new line is created in the result on every match between the left and the right side. Therefore, let's assume you have a computer that was restarted twice, so it has 2 lines of stopped and 2 lines of running. That will produce 4 rows in the Kusto answer.
Looking at your picture, this makes sense to me, because you have lines with negative downtime, which I guess should not be possible.
What I would do is look for an identifier that is unique for every computer run. Then you can join on that and be safe from generating data that you don't want.
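If no such identifier exists, one alternative (my suggestion, not tested against your data) is to skip the join entirely: sort the events per computer and pair each "running" event with the row just before it, keeping the pair only when that previous row is a "stopped" event for the same computer. A sketch reusing your parse expression:

Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| where Windows_Service_State in ("running", "stopped")
| sort by Computer asc, TimeGenerated asc
// prev() reads the previous row of the sorted stream
| extend prevComputer = prev(Computer), prevState = prev(Windows_Service_State), prevTime = prev(TimeGenerated)
// keep a row only when a "running" event directly follows a "stopped" event on the same computer
| where Windows_Service_State == "running" and prevState == "stopped" and prevComputer == Computer
| project Computer, Windows_Service_Name, stoppedtime = prevTime, startedtime = TimeGenerated, downtime = TimeGenerated - prevTime

Because each "running" row is matched to at most one "stopped" row, a service that restarts several times a day produces one row per restart instead of the cross product you get from the join.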

@After is invoked multiple times at the end of a Scenario Outline in Cucumber

My Cucumber Gherkin looks like this:
Feature: Story XYZ- Title of Story
"""
Title: Story XYZ- Title of Story
"""
Background:
Given Appointments are being created using "SOAP" API
#scenario1
Scenario Outline: Create an appointment for a New start service order
Given Appointment does not exist for order id "Test_PR_Order123"
When Request for create appointment is received for order "Test_PR_Order123" and following
| FieldName | FieldValue |
| ServiceGroupName | <ServiceGroupName> |
| SerivceGroupID | TestSG123 |
| ServiceType | <ServiceType> |
Then Appointment ID should be created
And Appointment for order "Test_PR_Order123" should have following details
| FieldName | FieldValue |
| ServiceGroupName | <ServiceGroupName> |
| SerivceGroupID | TestSG123 |
| ServiceType | <ServiceType> |
And Appointment history should exist for "Create Appointment"
Examples:
| ServiceType | ServiceGroupName |
| Service1 | ServiceGroup1 |
| Service2 | ServiceGroup2 |
#scenario2
Scenario Outline: Create an appointment for a Change Service order
Given Appointment does not exist for order id "Test_CH_Order123"
When Request for create appointment is received for order "Test_CH_Order123" and following
| FieldName | FieldValue |
| ServiceGroupName | <ServiceGroupName> |
| SerivceGroupID | TestSG123 |
| ServiceType | <ServiceType> |
Then Appointment ID should be created
And Appointment for order "Test_CH_Order123" should have following details
| FieldName | FieldValue |
| ServiceGroupName | <ServiceGroupName> |
| SerivceGroupID | TestSG123 |
| ServiceType | <ServiceType> |
And Appointment history should exist for "Create Appointment"
Examples:
| ServiceType | ServiceGroupName |
| Service1 | ServiceGroup1 |
| Service2 | ServiceGroup2 |
In the above feature there is a Background, which will execute for each example in both Scenario Outlines.
Also, in the Java implementation we have implemented @After and @Before hooks, which will also execute for each example.
We are using spring-cucumber for data injection between steps.
The problem occurs when all examples in the first Scenario Outline end: the @After method is invoked 2 times, and while the second @After is still running, the second Scenario Outline's examples start executing at the same time.
As a result, sequential execution of scenarios is disturbed and the automation starts to fail.
Please suggest whether this is a bug in Cucumber or whether we are missing anything.
One of the many things you are missing is keeping scenarios simple. By using scenario outlines and by embedding so many technical details in your Gherkin you are making things much harder for yourself. In addition you are using Before and After hooks to make this work.
Another problem is that your scenarios do not make sense. They are all about making appointments for orders, but you don't at any point create the order.
Finally, you have two identical scenarios that you say do different things. The first is for New, the second is for Change. There has to be some difference, otherwise you would not need the second scenario.
What I would do is try and extract a single scenario out of this tangle and use that to diagnose any problems. You should be able to end up with something like
Scenario: Create an appointment for an order
Given there is an order
And appointments are made using SOAP
When a new start appointment is made for the order
Then the appointment should be created
And the appointment should have a history
There should be no reason why you can't make this scenario work without any @Before or @After. When you have this working, create other scenarios for whatever other cases you are trying to examine. Again, you should be able to do this without doing any of the following:
Using example data to differentiate between scenarios
Using outlines
Using @Before and @After
When using Cucumber you want to push the complexity of automation outside of Cucumber. Either pull it up into a script that runs before Cucumber starts, or push it down into helper methods that are called from a single step definition. If you keep the complexity in Cucumber, and in particular try to link scenarios to each other and use @Before and @After to keep state between scenarios, you will not have a pleasant time using (misusing) Cucumber.
It's far more likely that your problems are caused by your code than by Cucumber. Even if Cucumber did have a problem with outlines and hooks, you can fix your problems by simply not using them. Outlines are completely unnecessary and hooks are mostly misused.
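To make that concrete: the "Given there is an order" step from the scenario above can be a single line that hands off to a helper class. The names below (OrderSteps, OrderHelper, createOrder) are made up for the sketch; the point is only that the SOAP plumbing lives outside the step definition:

import io.cucumber.java.en.Given;

public class OrderSteps {

    // Hypothetical helper that hides the SOAP calls from the Gherkin layer.
    private final OrderHelper orderHelper = new OrderHelper();

    private String orderId;

    @Given("there is an order")
    public void thereIsAnOrder() {
        // The step definition only records the result; the helper does the work.
        orderId = orderHelper.createOrder();
    }
}

class OrderHelper {
    String createOrder() {
        // Placeholder: call your SOAP API here and return the new order's id.
        return "Test_PR_Order123";
    }
}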

Gherkin/Cucumber: How can we use the same step definition and pass optional parameters from feature file

I am using the Cucumber plugin with RestAssured to write my feature file and automate the REST service. Below is what my scenario looks like:
Scenario Outline: Validate the elements in the GET response
Given I have the data setup to test "<version>" and "<order>"
When ...
Then the response should contain accurate data
Examples:
| version | order |
| V1 | O1 |
| V2 | O2 |
My step definition has the below signature for the method:
#Given("^I have the data setup to test \"([^\"]*)\" and \"([^\"]*)\"$")
public void iHaveTheDataSetupToTestAnd(String clientCharacteristicTypeCd, String clientCharacteristicDataType)
My question is: I want to have another scenario like the one below, where I want to leverage the same step definition as above but pass an additional parameter, "special order", as an optional one.
Can I do that, or do I need to create a new step definition just for the below Given step? I was thinking of method overloading / passing an optional parameter, but I'm not sure whether that works in Gherkin. Something like this:
Ex:
Scenario Outline: Validate the elements in the GET response for special order
Given I have the data setup to test "<version>" and "<order>" and "<specialorder>"
When ...
Then the response should contain accurate data
Examples:
| version | order | specialorder |
| V1 | O1 | SO1 |
| V2 | O2 | SO2 |
public void iHaveTheDataSetupToTestAnd(String clientCharacteristicTypeCd, String clientCharacteristicDataType, String specialOrder)
What you want is to delegate to a helper that supports both steps. Then implement two step definitions that do no more than catch the call and forward it to the helper.
This will allow you to do all kinds of interesting things without having lots of logic in the step class. Each step is often just one or two lines: in my case, I catch the parameters and forward them to a helper, where all the interesting work of driving the system under test happens.
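A sketch of that layout, using the regex style from your existing step definition (the helper class and parameter names are made up; adjust the @Given import to your Cucumber version):

import io.cucumber.java.en.Given;

public class DataSetupSteps {

    // All of the real setup logic lives in this (hypothetical) helper.
    private final DataSetupHelper helper = new DataSetupHelper();

    @Given("^I have the data setup to test \"([^\"]*)\" and \"([^\"]*)\"$")
    public void iHaveTheDataSetupToTestAnd(String version, String order) {
        helper.setUpData(version, order, null);
    }

    @Given("^I have the data setup to test \"([^\"]*)\" and \"([^\"]*)\" and \"([^\"]*)\"$")
    public void iHaveTheDataSetupToTestAndSpecial(String version, String order, String specialOrder) {
        helper.setUpData(version, order, specialOrder);
    }
}

class DataSetupHelper {
    void setUpData(String version, String order, String specialOrder) {
        // specialOrder is null when the two-parameter step is used.
    }
}

Because both regexes are anchored with ^ and $, the two-parameter pattern cannot accidentally match the longer step text, so the right definition is always picked.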
