How can I read bmv2 queueing state with P4 language? - p4-lang

I'm writing a test P4 program in which I want to read the queueing state of the software switch bmv2. There is no relevant content in the P4 specification, but I found a mail on the mailing list which says:
Hi Wei,
There is no standard way to get a timestamp in P4, as you can see in the spec.
However if you are using the simple_switch bmv2 target, you can still have access to this information. You need to include the following in your P4 program:
header_type intrinsic_metadata_t {
    fields {
        ingress_global_timestamp : 48;
    }
}
metadata intrinsic_metadata_t intrinsic_metadata;

header_type queueing_metadata_t {
    fields {
        enq_timestamp : 48;
        enq_qdepth : 16;
        deq_timedelta : 32;
        deq_qdepth : 16;
    }
}
metadata queueing_metadata_t queueing_metadata;
All timestamps are in microseconds.
intrinsic_metadata.ingress_global_timestamp is the timestamp when the switch starts processing the packet. queueing_metadata.enq_timestamp is the timestamp when the packet is enqueued (between the ingress and egress pipelines).
queueing_metadata.deq_timedelta is the time the packet spent in the queue (I believe it is what you are after).
It is important to understand that these metadata fields are specific to the simple_switch target, they are not standardized by P4. bmv2 will detect that they are defined in the P4 program and leverage them.
Because bmv2 has a low throughput, I recommend rate-limiting your links to < 100Mbps. You can also use the "set_queue_rate" simple_switch CLI command to rate-limit the bmv2 queue. Please make sure that you compile bmv2 with O2 and without logging (./configure 'CXXFLAGS=-O2' --disable-logging-macros --disable-elogger), otherwise throughput will be really bad.
Best,
Antonin
I added the code mentioned in the mail to test whether it works:
modify_field(ipv4.ttl,10);
add_to_field(ipv4.ttl,queueing_metadata.deq_qdepth);
But it doesn't work. What should I do? Appreciate any help, thanks.

I asked the same question at the p4lang repository and the problem was solved, thanks to the author's help, as you can see here.
By the way, the runtime command set_queue_rate is simple_switch specific.
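For anyone who lands here, a minimal P4_14 sketch of the pattern involved (the action and table names below are mine, not from the original program): in simple_switch the queueing metadata is only populated once the packet has been dequeued, so it is meant to be read from a table applied in the egress control block, not in ingress. If your program already has an egress control, just apply the table there.

action copy_qdepth_to_ttl() {
    // debug only: overwrite the TTL with the queue depth observed at dequeue
    modify_field(ipv4.ttl, 10);
    add_to_field(ipv4.ttl, queueing_metadata.deq_qdepth);
}

table read_queue_state {
    actions { copy_qdepth_to_ttl; }
}

control egress {
    apply(read_queue_state);
}

Remember to make copy_qdepth_to_ttl the default action at runtime (table_set_default read_queue_state copy_qdepth_to_ttl in the simple_switch CLI), otherwise the table misses and does nothing.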

Related

Using Logstash Aggregate Filter plugin to process data which may or may not be sequenced

Hello all!
I am trying to use the Aggregate filter plugin of Logstash v7.7 to correlate and combine data from two different CSV file inputs that represent API data calls. The idea is to produce a record showing a combined picture. As you can expect, the data may or may not arrive in the right sequence.
Here is an example:
/data/incoming/source_1/*.csv
StartTime, AckTime, Operation, RefData1, RefData2, OpSpecificData1
231313232,44343545,Register,ref-data-1a,ref-data-2a,op-specific-data-1
979898999,75758383,Register,ref-data-1b,ref-data-2b,op-specific-data-2
354656466,98554321,Cancel,ref-data-1c,ref-data-2c,op-specific-data-2
/data/incoming/source_2/*.csv
FinishTime,Operation,RefData1, RefData2, FinishSpecificData
67657657575,Cancel,ref-data-1c,ref-data-2c,FinishSpecific-Data-1
68445590877,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
55443444313,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
I have a single pipeline that is receiving both these CSVs and I am able to process and write them as individual records to a single Index. However, the idea is to combine records from the two sources into one record, each representing a superset of Operation-related information.
Unfortunately, despite several attempts I have been unable to figure out how to achieve this via Aggregate filter plugin. My primary question is whether this is a suitable use of the specific plugin? And if so, any suggestions would be welcome!
At the moment, I have this:
input {
  file {
    path => ['/data/incoming/source_1/*.csv']
    tags => ["source1"]
  }
  file {
    path => ['/data/incoming/source_2/*.csv']
    tags => ["source2"]
  }
}
filter {
  # use the tags to do some source 1 and 2 related massaging, calculations, etc.
  aggregate {
    task_id => "%{Operation}_%{RefData1}_%{RefData2}"
    code => "
      map['source_files'] ||= []
      map['source_files'] << { 'source_file' => event.get('path') }
    "
    push_map_as_event_on_timeout => true
    timeout => 600 # assuming this is the most far apart they will arrive
  }
  ...
}
output {
  elasticsearch { ... }
}
And other such variations. However, I keep getting individual records written to the Index and am unable to get one combined record. Yet again, as you can see from the data set, there's no guarantee of the sequencing of records - so I am wondering if the filter is the right tool for the job to begin with? :-\
Or is it just me not being able to use it right! ;-)
In either case, any inputs/ comments/ suggestions welcome. Thanks!
PS: This message is being cross-posted over from Elastic forums. I am providing a link there just in case some answers pop up there too.
The answer is to use Elasticsearch in upsert mode. Please see the specifics here.
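As a minimal sketch of what that can look like (hosts, index name and the id pattern are placeholders; document_id, action and doc_as_upsert are standard options of the elasticsearch output plugin), both sources update the same document keyed by the correlation fields:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "operations"
    # both CSV sources write to the same document, keyed by the correlation fields,
    # so whichever record arrives second is merged into the existing document
    document_id => "%{Operation}_%{RefData1}_%{RefData2}"
    action => "update"
    doc_as_upsert => true
  }
}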
First, I recommend making sure the information reaches you in order so that the filter can handle it better; second, you could set these options in your pipelines.yml: pipeline.workers: 1 and pipeline.ordered: true, thus guaranteeing the order of processing.
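As a sketch, the corresponding entry in pipelines.yml would look something like this (the pipeline id and config path are placeholders):

- pipeline.id: combine-api-data
  path.config: "/etc/logstash/conf.d/combine.conf"
  pipeline.workers: 1
  pipeline.ordered: true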

'Delay until' finish time of 'Queue a new build' not working in Azure Logic App

I'm triggering an Azure Logic App from an https webhook for a docker image in Azure Container Registry.
The workflow is roughly:
When a HTTP request is received
Queue a new build
Delay until
FinishTime of Queue a new build
See: Workflow image
The Delay until action doesn't work in that the queried FinishTime is 0001-01-01T00:00:00.
It complains about the wrong format, so I manually added a Z after the FinishTime keyword.
Now the time stamp is in the right format, however, the timestamp 0001-01-01T00:00:00Z obviously doesn't make sense and subsequent steps are executed without delay.
Anything that I am missing?
edit: Queue a new build queues an Azure pipeline build. I.e. the FinishTime property comes from the pipeline.
You need to set a timestamp in the future; the timestamp 0001-01-01T00:00:00Z you set in the "Delay until" action is not a future time. If you set a timestamp such as 2020-04-02T07:30:00Z, the "Delay until" action will take effect.
Update:
I don't think the "Delay until" action can do what you expect, but maybe you can refer to the operations below. Just add a "Condition" action to judge whether the FinishTime is greater than the current time.
The expression in the "Condition" is:
sub(ticks(variables('FinishTime')), ticks(utcNow()))
and the condition checks whether that result is greater than 0.
In a word, if the FinishTime is greater than the current time, do the "Delay until" action; if the FinishTime is less than the current time, do anything else you want. (By the way, you need to pay attention to the time zone of your timestamp; you may need to convert everything to UTC.)
I've been in touch with an Azure support engineer, who has confirmed that the Delay until action should work as I intended to use it, but that the FinishTime property will not hold a value that I can use.
In the meantime, I have found a workaround, where I'm using some logic and quite a few additional steps. Inconvenient but at least it does what I want.
Here are the most important steps that are executed after the workflow gets triggered from a webhook (docker base image update in Azure Container Registry).
Essentially, I'm initializing the following variables and queuing a new build:
buildStatusCompleted: String value containing the target value completed
jarsBuildStatus: String value containing the initial value notStarted
jarsBuildResult: String value containing the default value failed
Then, I'm using an Until action to monitor when the jarsBuildStatus value switches to completed.
In the Until action, I'm repeating the following steps until jarsBuildStatus changes its value to buildStatusCompleted:
Delay for 15 seconds
HTTP request to Azure DevOps build, authenticating with a personal access token (see the example request after this list)
Parse JSON body of previous raw HTTP output for status and result keywords
Set jarsBuildStatus = status
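For reference, the polling call in the HTTP request step above is an Azure DevOps Builds REST request along these lines (organization, project and build id are placeholders; only the two response fields used here are shown):

GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=6.0
Authorization: Basic <base64 of ":{personalAccessToken}">

{
  "status": "completed",
  "result": "succeeded"
}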
After breaking out of the Until action (loop), the jarsBuildResult is set to the parsed result.
All these steps are part of a larger build orchestration workflow, where I'm repeating the given steps multiple times for several different Azure DevOps build pipelines.
The final action in the workflow is sending all the status, result and other relevant data as a build summary to Azure DevOps.
To me, this is only a workaround and I'll leave this question open to see if others have suggestions as well or in case the Azure support engineers can give more insight into the Delay until action.
Here's an image of the final workflow (at least, the part where I implemented the Delay until action):
edit: Turns out, I can simplify the workflow because there's a dedicated Azure DevOps action in the Logic App called Send an HTTP request to Azure DevOps, which omits the need for manual authentication (Azure support engineer pointed this out).
The workflow now looks like this:
That is, I can query the build status directly and set the jarsBuildStatus as
#{body('Send_an_HTTP_request_to_Azure_DevOps:_jar''s')['status']}
The code snippet above is automagically converted to a value for the Set variable action. Thus, no need to use an additional Parse JSON action.

Errbit keeps spamming emails

I'm using Errbit 0-3 stable and it's working really well.
But the problem is that sometimes it starts spamming me with emails for the same error but with different hashes, like the following:
Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x007fdba42891f0 #tag_sets=[], #options={:database=>"db_test", :max_pool_size=>200, :wait_queue_timeout=>5, :write=>{"w"=>0}}, #server_selection_timeout=30>
Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x007fdbb8148e30 #tag_sets=[], #options={:database=>"db_test", :max_pool_size=>200, :wait_queue_timeout=>5, :write=>{"w"=>0}}, #server_selection_timeout=30>
How can I filter them so they are grouped into one error only?
There are two ways to deal with this.
Option 1) Catch the errors in your application and scrub the uniqueness out of the error messages before sending them to Errbit.
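As a sketch of option 1, assuming the app reports to Errbit through the airbrake-ruby notifier, you could register a filter that masks the varying object ids before the notice is sent (the regex and the file location are illustrative):

# config/initializers/errbit.rb (illustrative location)
Airbrake.add_filter do |notice|
  notice[:errors].each do |error|
    # 0x007fdba42891f0 and friends are what make each notice look unique; mask them
    error[:message] = error[:message].gsub(/0x[0-9a-f]+/, '0xXXXX')
  end
end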
Option 2) Errbit supports configurable "fingerprinting" so you can actually tell Errbit what attributes contribute to the uniqueness of error notifications. This can be done system-wide or on individual Errbit apps. In your case, you could toggle off the error message as part of the Error fingerprint.
From the Errbit README:
The way Errbit arranges notices into error groups is configurable. By default, Errbit uses the notice's error class, error message, complete backtrace, component (or controller), action and environment name to generate a unique fingerprint for every notice. Notices with identical fingerprints appear in the UI as different occurrences of the same error, and notices with differing fingerprints are displayed as separate errors.
Changing the fingerprinter (under the 'config' menu) applies to all apps and the change affects only notices that arrive after the change. If you want to refingerprint old notices, you can run rake errbit:notice_refingerprint.

Spring Integration: Is there a way to aggregate from "all" messages in a channel?

I want to write a batch job which reads website access log files (CSV files) from a path every day and does some analysis using Spring Integration.
This is a simplified version of the input CSV file:
srcIp1,t1,path1
srcIp2,t2,path2
srcIp1,t3,path2
srcIp1,t4,path1
The number of accesses per source IP and path is to be calculated after some filtering logic.
I made an input channel whose payload is the parsed log line, then a filter is applied, and finally an aggregator calculates the final result.
The problem is what the right group release strategy should be; the default release strategy (SequenceSizeReleaseStrategy) does not work.
Also, none of the other Spring Integration out-of-the-box release strategies (ExpressionEvaluatingReleaseStrategy, MessageCountReleaseStrategy, MethodInvokingReleaseStrategy, SequenceSizeReleaseStrategy, TimeoutCountSequenceSizeReleaseStrategy) seems to fit my needs.
Or does Spring Integration assume that a channel carries a message stream with no concept of "end of messages", and is therefore not suitable for my problem here?
You can write a custom ReleaseStrategy if you have some way to tell when the group is complete. It is consulted each time a message is added to the group.
Or, you can use a group-timeout to release a partial group after some time during which no messages have arrived.
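For illustration, a custom ReleaseStrategy is a single method that is consulted on every addition to the group; here is a minimal Java sketch, assuming a hypothetical "file.eof" header that your file-reading code attaches to the last line of the day's file:

import org.springframework.integration.aggregator.ReleaseStrategy;
import org.springframework.integration.store.MessageGroup;

public class EndOfFileReleaseStrategy implements ReleaseStrategy {

    @Override
    public boolean canRelease(MessageGroup group) {
        // called each time a message is added to the group;
        // release once the (hypothetical) end-of-file marker has been seen
        return group.getMessages().stream()
                .anyMatch(m -> Boolean.TRUE.equals(m.getHeaders().get("file.eof")));
    }
}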

When can we get access to nest autoaway status?

Let me stress that I am not a programmer but I like messing around with things. I've been using #ifttt and #nest for years and recently started using #smartthings to do cool things in my house.
I wanted to power off devices such as my lights and water heater when leaving my house. Rather than having this depend on one device such as a phone or key fob, I wanted to use the Nest "auto-away" feature.
Auto-away doesn't appear to be exposed to #ifttt or #smartthings. I've asked #nestsupport and they told me to come here :-o.
Does anyone from the Nest developer team know when developers and other products will be able to consume this from the Nest device? It's a real shame that after several years this isn't exposed yet. Not only that, but it could be an additional selling point to integrate with and turn on/off items in your house.
Thanks!
I'm not from the Nest developer team, but I've played around with the Nest API in the past, and use it to plot my energy usage.
The 'auto away' information is already accessible in the API, and looks to be used in a number of IFTTT recipes:
https://ifttt.com/recipes/search?q=auto+away&ac=false
Within the (JSON) data received back from the API, the 'auto away' status is accessible via:
shared->{serial_number}->auto_away
This is set as a boolean (0 or 1).
If you like messing around with code, and know any PHP, then this PHP class for the Nest API is very useful for grabbing all the information, etc.:
https://github.com/gboudreau/nest-api
Auto-Away is, and always has been, readable: https://developer.nest.com/documentation/cloud/api-overview#away
There are a few ways you could go about doing this, but if you're writing up a SmartApp just for your own uses, I'd suggest piggybacking off of one of the existing device types for the Nest on SmartThings. As a quick example, I'll use the one that I use:
https://github.com/bmmiller/device-type.nest/blob/master/nest.devicetype.groovy
After line 96, this is to expose the status to any SmartApp you may write:
attribute "temperatureUnit", "string"
attribute "humiditySetpoint", "number"
attribute "autoAwayStatus", "number" // New Line
Now, you'll want to take care of getting the data in the existing poll() method, currently starting at line 459.
After line 480, to update the attribute:
sendEvent(name: 'humidity', value: humidity)
sendEvent(name: 'humiditySetpoint', value: humiditySetpoint, unit: Humidity)
sendEvent(name: 'thermostatFanMode', value: fanMode)
sendEvent(name: 'thermostatMode', value: temperatureType)
sendEvent(name: 'autoAwayStatus', value: data.shared.auto_away) // New Line
This will expose a numerical value for the auto_away status.
-1 = Auto Away Not Enabled
0 = Auto Away Off
1 = Auto Away On
Then, in the SmartApp you write, include an input of type thermostat like this:
section("Choose thermostat... ") {
input "thermostat", "capability.thermostat"
}
You will be able to access the Auto Away status by referring to
thermostat.autoAwayStatus
from anywhere in your code, where you can do something like:
if (thermostat.autoAwayStatus == 1) {
// Turn off everything
}
