OpenDaylight Flow statistics using RPC call - rpc

I'm trying to get statistics using the following RPC call and not via the default statistics-manager.
POST /restconf/operations/opendaylight-flow-statistics:get-all-flows-statistics-from-all-flow-tables
{
    "input": {
        "node": "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:1000\"]"
    }
}
However, the response to this request is just a transaction-id. While I can see the OpenFlow Flow Stat Request and Flow Stat Reply messages being exchanged between the controller and the switch, the operational datastore does not seem to be updated as a result of calling the above RPC. I check the operational datastore using:
GET /restconf/operational/opendaylight-inventory:nodes/node/openflow:1000/table/0
My question is:
How can I get the flow statistics sent to the controller by the switch as a result of the above RPC (get-all-flows-statistics-from-all-flow-tables)? And why is the operational datastore not updated?
Thanks! Michael.

With Boron, what you're trying to use is deprecated, so you should do the following instead:
Install odl-openflowplugin-flow-services
Connect your switch
Send the following request:
POST /restconf/operations/opendaylight-direct-statistics:get-node-connector-statistics
Host: localhost:8181
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4=
{
    "input": {
        "node": "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:187811733353539\"]",
        "store-stats": false
    }
}
Set store-stats to true if you want to keep them in the datastore.
You can also get the stats for a single port only, although when I tried it, it did not seem to work reliably.
Add this to the above payload:
"node-connector-id" : "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:187811733353539\"]/opendaylight-inventory:node-connector[opendaylight-inventory:id='openflow:187811733353539:LOCAL']",
Instead of LOCAL, specify the port you want.
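For completeness, here is a minimal Node.js sketch of sending the same RPC programmatically; the controller URL, credentials, and node id are just the placeholders from the example above (it assumes Node 18+ for the built-in fetch), so adjust them to your setup.
// send-direct-stats.js - minimal sketch, assumes ODL Boron with
// odl-openflowplugin-flow-services installed and default admin/admin credentials.
const NODE_ID = "openflow:187811733353539"; // placeholder switch id from the example

async function getNodeConnectorStats() {
  const body = {
    input: {
      node: `/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id="${NODE_ID}"]`,
      "store-stats": false
    }
  };

  const res = await fetch(
    "http://localhost:8181/restconf/operations/opendaylight-direct-statistics:get-node-connector-statistics",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Basic " + Buffer.from("admin:admin").toString("base64")
      },
      body: JSON.stringify(body)
    }
  );

  // With store-stats=false the statistics come back in the RPC output itself.
  console.log(JSON.stringify(await res.json(), null, 2));
}

getNodeConnectorStats().catch(console.error);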
Hope this helps,
Alexis

Related

Azure Function HTTP trigger not triggering every time from IoT Device

I have a Shelly Motion Sensor 2 that is set up to trigger an anonymous Azure Function using the .NET 6 isolated process runtime, but the Azure Function HTTP trigger is not firing reliably.
The device is not able to use HTTPS, so I am allowing HTTP. When developing locally, the device triggers the Azure Function properly every time when calling GET http://192.168.1.83/api/ping. When using GET http://x-y-z.azurewebsites.net/api/ping, the function doesn't get triggered most of the time, though strangely enough it does work some of the time. Using Postman or curl, it works every time.
[Function("Ping")]
public async Task<HttpResponseData> Ping(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "ping")] HttpRequestData request)
{
var response = request.CreateResponse(HttpStatusCode.OK);
await response.WriteStringAsync("pong");
return response;
}
The Shelly device can send multiple HTTP GET requests at the same time, and I've set it up to also trigger a Google Cloud Function, a http://webhook.site endpoint, and a http://requestinspector.com endpoint. These all work every time.
Here are the logs from the device:
1674085196.780 action_queue_process: URL[http://a.cloudfunctions.net/ping] Connecting
1674085196.856 action_queue_process: URL[http://webhook.site/a-b-c-d] Connecting
1674085196.989 action_queue_process: URL[http://x-y-z.azurewebsites.net/api/ping] Connecting
1674085196.013 action_queue_process: URL[http://a.cloudfunctions.net/ping]: Done
1674085196.027 action_queue_process: URL[http://webhook.site/a-b-c-d]: Done
1674085196.098 action_queue_process: URL[http://x-y-z.azurewebsites.net/api/ping]: Done
The request looks like this according to requestinspector.com:
GET /inspect/abcxyz HTTP/1.1
Host: requestinspector.com
Accept: */*
User-Agent: Mozilla/5.0
Accept-Encoding: gzip
I've tried mimicking the headers with curl, but that doesn't change anything; the function is still triggered. I've checked Application Insights and the logs, but most of the time I don't see anything, since the request is not even registered.
I also tried having the device call an Azure App Service HTTP endpoint, and it did not register in Application Insights either.
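To compare behaviours from another client, here is a small Node.js sketch that replays the device's request with the same headers shown in the requestinspector.com capture above; the hostname is the placeholder from earlier and it assumes Node 18+ for the built-in fetch.
// replay-device-request.js - minimal sketch: send a plain-HTTP GET with the
// same headers the Shelly device sends, to compare against Postman/curl.
const url = "http://x-y-z.azurewebsites.net/api/ping"; // placeholder host from the question

(async () => {
  const res = await fetch(url, {
    headers: {
      "Accept": "*/*",
      "User-Agent": "Mozilla/5.0",
      "Accept-Encoding": "gzip"
    }
  });
  console.log(res.status, await res.text());
})();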
Is there some sort of built-in protection in Azure preventing the requests from going through? Or am I missing something else? I'm using a B1 app service plan.

Intermittent 501 response from InvokeDeviceMethodAsync - Azure IoT

InvokeDeviceMethodAsync is intermittently (and only recently) returning a status code of 501 within the responses (the response body is null).
I understand this means Not Implemented. However, the method IS implemented - in fact, it's the only method that is. The device is using Microsoft.Azure.Devices.Client (1.32.0-preview-001 since we're also previewing the Device Streams feature).
Setup, device side
This is all called at startup. After this, some invocations succeed, some fail.
var deviceClient = DeviceClient.CreateFromConnectionString(connectionDetails.ConnectionString, TransportType.Mqtt);
await deviceClient.SetMethodHandlerAsync("RedactedMethodName", RedactedMethodHandler, myObj, cancel).ConfigureAwait(true);
Call, server side
var methodInvocation = new CloudToDeviceMethod("RedactedMethodName")
{
    ResponseTimeout = TimeSpan.FromSeconds(60),
    ConnectionTimeout = TimeSpan.FromSeconds(60)
};
var invokeResponse = await _serviceClient.InvokeDeviceMethodAsync(iotHubDeviceId, methodInvocation, CancellationToken.None);
What have I tried?
Check code, method registration
Looking for documentation about 501: can't find any
Looking through the source for the libraries (https://github.com/Azure/azure-iot-sdk-csharp/search?q=501). Just looks like "not implemented", i.e. nothing registered
Turning on Distributed Tracing from the Azure portal, with sampling rate 100%. Waited a long time, but still says "Device is not synchronised with desired settings"
Exploring intellisense on the DeviceClient object. Not much there!
What next?
Well, I'd like to diagnose.
What possible reasons are there for the 501 response?
Are there any diagnostic tools, e.g. logging, that I have access to?
It looks like there is no response from the method within the responseTimeoutInSeconds value, so for testing purposes (and to see the real response error) try using the REST API to invoke the device method.
You should receive an HTTP status code as described here.
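A minimal Node.js sketch of such a REST call is below; the hub name, device id, api-version, and SAS token are placeholders/assumptions, so check them against the current IoT Hub REST documentation before relying on it.
// invoke-method.js - sketch: invoke a direct method via the IoT Hub REST API.
// Assumes a service-scoped SAS token has already been generated, and that the
// api-version shown here is still accepted by your hub (verify in the REST docs).
const HUB = "my-hub.azure-devices.net";           // placeholder hub hostname
const DEVICE_ID = "my-device";                    // placeholder device id
const SAS_TOKEN = "SharedAccessSignature sr=..."; // placeholder service SAS token

async function invokeMethod() {
  const res = await fetch(
    `https://${HUB}/twins/${DEVICE_ID}/methods?api-version=2021-04-12`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: SAS_TOKEN
      },
      body: JSON.stringify({
        methodName: "RedactedMethodName",
        responseTimeoutInSeconds: 60,
        payload: {}
      })
    }
  );

  // The raw HTTP status code and body are what the SDK hides behind the 501.
  console.log(res.status, await res.text());
}

invokeMethod().catch(console.error);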

ECONNREFUSED when attempting to POST to emulator from within local Docker container

TLDR:
Can't POST to the local Cosmos Emulator. Can POST to Azure Cosmos, but not with @azure/cosmos-sign, only with @azure/cosmos (which seems utterly bizarre, as the latter is supposedly built upon the former). This is not ideal (as the message-signing portion alone is very lightweight when using the REST API directly). Bug, or user error? Why do the instructions for enabling networking/https not seem to work?
Details:
I have a Node.js based app, and am using the @azure/cosmos-sign package to generate the correct headers via the generateHeaders method to save a JSON object in the local Cosmos Emulator.
Upon trying to post from the Node app to the URI provided in the Emulator Quickstart (https://localhost:8081), the error returned is...
Error: connect ECONNREFUSED 127.0.0.1:8081 : https://localhost:8081
As per these instructions...
Enable access to emulator on a local network
If you have multiple machines using a single network, and you set up
the emulator on one machine and want to access it from another
machine, you need to enable access to the emulator on the local
network.
You can run the emulator on a local network. To enable network access,
specify the /AllowNetworkAccess option at the command-line, which
also requires that you specify /Key=key_string or
/KeyFile=file_name. You can use /GenKeyFile=file_name to generate
a file with a random key upfront. Then you can pass that to
/KeyFile=file_name or /Key=contents_of_file.
To enable network access for the first time, the user should shut down
the emulator and delete the emulator's data directory
%LOCALAPPDATA%\CosmosDBEmulator.
-https://learn.microsoft.com/en-us/azure/cosmos-db/local-emulator?tabs=cli%2Cssl-netstd21#enable-access-to-emulator-on-a-local-network
...I thought perhaps I needed to enable the networking functionality, even though it is all on the same (Windows) host (with the Node.js application running in Docker on the same host the Emulator is installed on). But this caused more problems with no benefit. With the generated key, I can load the included UI for managing the local emulator instance, but I then can't create Databases or Containers (without resetting the emulator and starting it again normally, e.g. without the AllowNetworkAccess and related settings).
Attempting to use the included Explorer to create a Database returns...
Error while creating database SampleDb:
{
    "code": 401,
    "body": {
        "code": "Unauthorized",
        "message": "The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'post\ndbs\n\nmon, 29 mar 2021 23:33:45 gmt\n\n'\r\nActivityId: 29e4e700-d1b7-4d59-bdea-5931e4d6622d, Microsoft.Azure.Documents.Common/2.11.0"
    },
    "headers": {
        "access-control-allow-credentials": "true",
        "access-control-allow-origin": "https://localhost:8081",
        "access-control-expose-headers": "Access-Control-Allow-Origin,Access-Control-Allow-Credentials,Content-Type,x-ms-activity-id,x-ms-gatewayversion",
        "content-type": "application/json",
        "date": "Mon, 29 Mar 2021 23:33:45 GMT",
        "server": "Microsoft-HTTPAPI/2.0",
        "x-firefox-spdy": "h2",
        "x-ms-activity-id": "29e4e700-d1b7-4d59-bdea-5931e4d6622d",
        "x-ms-gatewayversion": "version=2.11.0",
        "x-ms-throttle-retry-count": 0,
        "x-ms-throttle-retry-wait-time-ms": 0
    },
    "activityId": "29e4e700-d1b7-4d59-bdea-5931e4d6622d"
}
I did see this somewhat similar SO question, but it was abandoned.
This one, however, seems to imply they simply reverted the KeyFile steps mentioned in the MS docs. It seems odd that I am getting the same error from the Node.js POST regardless of whether I use the AllowNetworkAccess switch or not.
Using the /NoFirewall switch as recommended here didn't resolve the POSTs, but did allow the Explorer UI to keep working properly. The upvoted answer for that question is what I have already tried (/AllowNetworkAccess /KeyFile=....), which is not working, as explained above.
The docs here indicate that TLS (https) is in fact required...
"The Azure Cosmos DB Emulator supports only secure communication via TLS"
However, here they seem to indicate that, in the Node SDK (which relies on the same cosmos-sign library I am using)...
"TLS verification is disabled. By default the Node.js SDK(version 1.10.1 or higher) for the SQL API will not try to use the TLS/SSL certificate when connecting to the local emulator."
I tried adjusting the start script for my Node Docker image as suggested here...
If connecting to the Cosmos DB Emulator, disable TLS verification
for your node process:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
const client = new CosmosClient({ endpoint, key });
...and changed the start script in my package.json from...
"start": "node $NODE_OPTIONS node_modules...."
...to...
"start": "NODE_TLS_REJECT_UNAUTHORIZED=0 node $NODE_OPTIONS node_modules...."
...and rebuilt my images, but still receive the same ECONNREFUSED error from the Node client/app.
As I was reading the documentation for the REST API, I was reminded that, as opposed to using the CosmosClient (which just needs the base URL), a POST to the API needs a fully formed URL, as indicated here...
Method: POST
Request URI: https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls/{coll-id}/docs
Description: The {databaseaccount} is the name of the Azure Cosmos DB account created under your subscription. The {db-id} value is the
user generated name/ID of the database, not the system generated ID
(rid). The {coll-id} value is the name of the collection that contains
the document.
After appending /dbs/SampleDB/colls/SampleCollection/docs (yes, my entities are CamelCase) to the base URL offered by the Emulator UI's Quickstart URI (https://localhost:8081)... I am still getting the ECONNREFUSED error on HTTP POSTs.
Hmm... retargeted the Node app to point to a collection in my Azure Cosmos DB, and I am still having no luck.
400: Invalid API version. Ensure a valid x-ms-version header value is
passed. Please update to the latest version of Azure Cosmos DB
SDK.ActivityId: bfdeb339-8fef-4ba9-a03d-444a8664c02b,
Microsoft.Azure.Documents.Common/2.11.0
Added x-ms-version and set it to 2018-12-31 (latest, as per here).
Now I am getting (after trying both my secondary, and primary keys... just in case)...
401: The input authorization token can't serve the request. Please
check that the expected payload is built as per the protocol, and
check the key being used. Server used the following payload to sign:
'postdocsdbs/TopHand/colls/SampleTbltue, 30 mar 2021 02:54:25
gmt'ActivityId: bb258bb4-f5a8-4495-b0b5-b54fa8b7c46f,
Microsoft.Azure.Documents.Common/2.11.0
I verified that the required headers are all present. What can possibly be left?!
The base URI for Azure Cosmos had a trailing /, which ended up duplicated when the rest of the path was appended. After fixing the URL string, I am still getting the 401.
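For reference while debugging these 401s, here is a minimal Node.js sketch of how the master-key authorization token is built according to the Cosmos DB REST API docs (the resource values are the ones from this question; treat it as an illustration, not the exact code I am running).
// cosmos-auth.js - sketch of the master-key signature per the Cosmos DB REST API docs.
// The date must be the exact same RFC 1123 string sent in the x-ms-date header.
const crypto = require("crypto");

function buildAuthToken(verb, resourceType, resourceId, date, masterKey) {
  // String to sign: lowercase verb, resource type and date, each newline-terminated.
  const stringToSign =
    verb.toLowerCase() + "\n" +
    resourceType.toLowerCase() + "\n" +
    resourceId + "\n" +
    date.toLowerCase() + "\n" +
    "\n";

  const key = Buffer.from(masterKey, "base64");
  const signature = crypto.createHmac("sha256", key).update(stringToSign, "utf8").digest("base64");

  return encodeURIComponent(`type=master&ver=1.0&sig=${signature}`);
}

// Example: posting a document to dbs/TopHand/colls/SampleTbl
const date = new Date().toUTCString();
const auth = buildAuthToken("POST", "docs", "dbs/TopHand/colls/SampleTbl", date, process.env.COSMOS_KEY);
// Send with headers { authorization: auth, "x-ms-date": date, "x-ms-version": "2018-12-31" }
// to https://<account>.documents.azure.com/dbs/TopHand/colls/SampleTbl/docs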
A github issue pointed me to what may have been an error in the URL/REST path I was posting to. Rather than posting to (what I had previously)...
dbs/SampleDb/colls/SampleTbl/docs
...I changed it to...
dbs/SampleDb/colls/SampleTbl
...and am now getting error 405, MethodNotAllowed, RequestHandler.Post. 405 isn't listed as a code returned by the Cosmos REST service.
This example in the MS docs definitely uses the /docs string at the end of the url/REST path.
Example
POST https://querydemo.documents.azure.com/dbs/1KtjAA==/colls/1KtjAImkcgw=/docs HTTP/1.1
x-ms-documentdb-partitionkey: ["Andersen"]
x-ms-date: Tue, 29 Mar 2016 02:28:29 GMT
authorization: type%3dmaster%26ver%3d1.0%26sig%3d92WMAkQv0Zu35zpKZD%2bcGSH%2b2SXd8HGxHIvJgxhO6%2fs%3d
Cache-Control: no-cache
User-Agent: Microsoft.Azure.Documents.Client/1.6.0.0
x-ms-version: 2015-12-16
Accept: application/json
Host: querydemo.documents.azure.com
Cookie: x-ms-session-token#0=602; x-ms-session-token=602
Content-Length: 344
Expect: 100-continue
{
"id": "AndersenFamily",
"LastName": "Andersen",
}
I contacted MS support and was given some info that unblocked me (though it doesn't entirely address the issues noted above).
For my own use-case, simply setting a key and allowing network access to the emulator was sufficient.
Note: This doesn't address the issues of the Emulator's Data Explorer becoming nonfunctional.
The feedback I received from the support personnel in regard to using the command line switches disabling the UI was...
By changing the key to something other than default one, you also
protect your emulator data from being seen via the Data Explorer.
Apparently the key alone isn't enough to protect the data, and disabling the UI is a "feature".
Solution: Simply executing...
.\Microsoft.Azure.Cosmos.Emulator.exe /AllowNetworkAccess /Key={insert your base64 encoded 64+ character string}
...allowed network access to systems on the same host as the emulator. This avoided all the certificate/key generation/importing/etc headache.
You must use the non-loopback IP of the host the emulator is running on to connect to it (writes/reads/etc.).
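As an illustration of that setup, a minimal @azure/cosmos sketch from inside the container might look like the following; the host IP and key are placeholders for whatever you passed to /Key, and TLS verification is disabled because the emulator's certificate won't be trusted inside the container.
// emulator-from-container.js - minimal sketch, assuming the emulator was started with
// /AllowNetworkAccess /Key=<base64 key> and the container can reach the host's LAN IP.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; // emulator cert is not trusted in the container

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({
  endpoint: "https://192.168.1.50:8081",  // placeholder: the host's non-loopback IP
  key: process.env.COSMOS_EMULATOR_KEY    // the /Key value used to start the emulator
});

async function main() {
  const { database } = await client.databases.createIfNotExists({ id: "SampleDb" });
  const { container } = await database.containers.createIfNotExists({ id: "SampleCollection", partitionKey: "/id" });
  const { resource } = await container.items.create({ id: "1", note: "hello from the container" });
  console.log("created", resource.id);
}

main().catch(console.error);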

google spanner close connection

I'm running my API based on Swagger and Node.
When my API has been running for a few minutes without a request and I then send an API request to it, I get this error:
{ Error: {"created":"#1488097564.272436000","description":"Delayed close due to in-progress write","file":"../src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":406,"grpc_status":14,"referenced_errors":[{"created":"#1488097564.272434000","description":"Endpoint read failed","file":"../src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":1851,"occurred_during_write":1,"referenced_errors":[{"created":"#1488097564.272350000","description":"Secure read failed","file":"../src/core/lib/security/transport/secure_endpoint.c","file_line":166,"referenced_errors":[{"created":"#1488097564.272347000","description":"Socket closed","fd":24,"file":"../src/core/lib/iomgr/tcp_posix.c","file_line":249,"target_address":"ipv4:172.217.19.10:443"}]}]}]}
at ClientReadableStream._emitStatusIfDone (/Users/aronsuarez/Code/asate/admin-apis/wiki-admin-api/node_modules/grpc/src/node/src/client.js:201:19)
at ClientReadableStream._readsDone (/Users/aronsuarez/Code/asate/admin-apis/wiki-admin-api/node_modules/grpc/src/node/src/client.js:169:8)
at readCallback (/Users/aronsuarez/Code/asate/admin-apis/wiki-admin-api/node_modules/grpc/src/node/src/client.js:242:12) code: 14, metadata: Metadata { _internal_repr: {} } }
I think Google Spanner is closing the connection. Why, and how can I prevent it from doing that?
The only way that I see in the docs is to periodically send a SELECT 1 request to Spanner, e.g. something like the sketch below.
Is this correct?
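A minimal keep-alive sketch along those lines, assuming the @google-cloud/spanner client and placeholder project/instance/database names:
// spanner-keepalive.js - sketch of a periodic SELECT 1 to keep the session/connection warm.
const { Spanner } = require("@google-cloud/spanner");

const spanner = new Spanner({ projectId: "my-project" });          // placeholder project
const database = spanner.instance("my-instance").database("my-db"); // placeholder names

// Run a trivial query every few minutes so the transport isn't torn down while idle.
setInterval(async () => {
  try {
    await database.run({ sql: "SELECT 1" });
  } catch (err) {
    console.error("keep-alive query failed:", err.message);
  }
}, 5 * 60 * 1000);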
Thanks for help

Browserstack reports successful even when test fails in Nightwatchjs

I just started using Nightwatch with BrowserStack, and I'm noticing that when we get a failed test, Nightwatch registers the failure but BrowserStack does not.
This is the sample test I am using.
Also, I am using the free trial version of BrowserStack.
My question is:
Are there any ideas on how to tell BrowserStack when a test run has failed?
From BrowserStack doc:
REST API
It is possible to mark tests as either a pass or a fail, using the
following snippet:
var request = require("request");
request({
    uri: "https://user:key@www.browserstack.com/automate/sessions/<session-id>.json",
    method: "PUT",
    form: {
        "status": "completed",
        "reason": ""
    }
});
The two potential values for status can be either completed or error. Optionally, a reason can also be passed.
My questions are:
How can I get the 'session-id' after test execution?
What if I can already see the "completed" status in the dashboard?
A session on BrowserStack has only three types of statuses:
Completed, Error, or Timeout. Selenium (and hence BrowserStack) does not have a way of knowing whether a test has passed or failed. It is from the assertions in your tests, which appear on your console, that you infer whether a test has passed or failed. These assertions, however, do not reach BrowserStack. As you rightly identified, you can use the REST API to change the status of the session to 'Error' if you see a failure in your console.
I would suggest fetching the session ID of the test as the test is being executed, since fetching the session ID after the test execution is a lengthy process. In Nightwatch, you can fetch the session ID as follows:
browser.session(function(session) {
    console.log(session.sessionId);
});
Yes, you can certainly change the status of the session once it is completed. That's where the REST API comes in handy!
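Putting the two pieces together, here is a hedged sketch of how this could look in a Nightwatch global afterEach hook; the BrowserStack credentials are placeholders, and the currentTest result fields may differ slightly between Nightwatch versions, so verify them against your setup.
// globals.js - sketch: mark the BrowserStack session as error when Nightwatch reports failures.
// USER/KEY are placeholder BrowserStack credentials; check browser.currentTest against
// your Nightwatch version, as the shape of the results object can differ between releases.
var request = require("request");

module.exports = {
  afterEach: function (browser, done) {
    var results = browser.currentTest.results;
    var failed = results.failed > 0 || results.errors > 0;

    browser.session(function (session) {
      request({
        uri: "https://USER:KEY@www.browserstack.com/automate/sessions/" + session.sessionId + ".json",
        method: "PUT",
        form: {
          status: failed ? "error" : "completed",
          reason: failed ? "test failed" : ""
        }
      }, function () {
        done();
      });
    });
  }
};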
If you came here searching for a solution in Python, you could use
import requests

requests.put(
    "https://api.browserstack.com/automate/sessions/{}.json".format(driver.session_id),
    auth=(USERNAME, ACCESS_KEY),
    json={"status": "failed", "reason": "test failed"})
