Can't establish a connection between two hosts in Mininet using the Ryu REST API - ryu

1. I created a tree topology in Mininet with the following command:
sudo mn --topo=tree,1,3 --controller=remote --mac
2. I use the Ryu application "ofctl_rest.py", which processes REST requests. The REST requests sent to the controller contain the information the controller uses to install flow entries in the switches. I start the controller with the following command:
ryu-manager simple_switch_13.py ofctl_rest.py --verbose
3. I cleared the flow table in the switch with the following command:
dpctl del-flows
4. I configured flow entries between two hosts, H1 and H2, in the switch using the following commands:
curl -X POST -d '{
"dpid": 1,
"cookie": 0,
"table_id": 0,
"priority": 1,
"flags": 1,
"match":{
"in_port":1,
"dl_src":"00:00:00:00:00:01/00:00:00:00:00:01",
"dl_dst":"00:00:00:00:00:02/00:00:00:00:00:02"
},
"actions":[
{
"type":"OUTPUT",
"port": 2
}
]
}' http://localhost:8080/stats/flowentry/add
curl -X POST -d '{
"dpid": 1,
"cookie": 0,
"table_id": 0,
"priority": 1,
"flags": 1,
"match":{
"in_port":2,
"dl_src":"00:00:00:00:00:02/00:00:00:00:00:02",
"dl_dst":"00:00:00:00:00:01/00:00:00:00:00:01"
},
"actions":[
{
"type":"OUTPUT",
"port": 1
}
]
}' http://localhost:8080/stats/flowentry/add
5. The rules are installed in the switch:
mininet> dpctl dump-flows
6. I can establish a connection if I issue the following command in the Mininet CLI:
h1 ping h2
7. Based on this, I continued by adding flow entries between hosts H1 and H3:
curl -X POST -d '{
"dpid": 1,
"cookie": 0,
"table_id": 0,
"priority": 1,
"flags": 1,
"match":{
"in_port":3,
"dl_src":"00:00:00:00:00:03/00:00:00:00:00:03",
"dl_dst":"00:00:00:00:00:01/00:00:00:00:00:01"
},
"actions":[
{
"type":"OUTPUT",
"port": 1
}
]
}' http://localhost:8080/stats/flowentry/add
curl -X POST -d '{
"dpid": 1,
"cookie": 0,
"table_id": 0,
"priority": 1,
"flags": 1,
"match":{
"in_port":1,
"dl_src":"00:00:00:00:00:01/00:00:00:00:00:01",
"dl_dst":"00:00:00:00:00:03/00:00:00:00:00:03"
},
"actions":[
{
"type":"OUTPUT",
"port": 3
}
]
}' http://localhost:8080/stats/flowentry/add
8. The rules are installed in the switch:
mininet> dpctl dump-flows
9. But I can't establish a connection if I issue the following command in the Mininet CLI:
h1 ping h3
10. The result of the whole experiment is that H1 ping H2 succeeds and H1 ping H3 fails. If I reverse the order in which the flow entries are configured, H1 ping H3 succeeds and H1 ping H2 fails:
11. What am I doing wrong? Please help me.
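One thing worth double-checking: in ofctl_rest, a value written as `00:00:00:00:00:02/00:00:00:00:00:02` is a masked match, so only the bits set in the mask are compared, and other MAC addresses can satisfy the same rule (for example, a destination of `…:03` also has the `…:02` bit set, so an H1-to-H3 packet can hit the H1-to-H2 rule). As a hedged sketch, here is the first rule rewritten with exact (unmasked) MAC matches, assuming ofctl_rest is listening on localhost:8080 as above:

```shell
# Sketch: the first H1->H2 rule with exact MAC matches (no "/mask" suffix).
# Assumes ofctl_rest is reachable at localhost:8080, as in the question.
payload='{
  "dpid": 1,
  "table_id": 0,
  "priority": 1,
  "match": {
    "in_port": 1,
    "dl_src": "00:00:00:00:00:01",
    "dl_dst": "00:00:00:00:00:02"
  },
  "actions": [{"type": "OUTPUT", "port": 2}]
}'
# Validate the JSON locally before sending it to the controller
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
# curl -X POST -d "$payload" http://localhost:8080/stats/flowentry/add
```

The same change would apply to the other three rules: drop the `/mask` suffix so each entry matches one exact source/destination pair.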

Related

"Testlog - Get Test Result Logs" in Azure devops services- REST API for Manual Testing

I'm getting the response
{"value":[],"count":0}
when trying to get the logs of test results and test runs for manual testing using the REST API.
Sample Requests:
Test Result:
GET https://vstmr.dev.azure.com/{organization}/{project}/_apis/testresults/runs/{runId}/results/{resultId}/testlog?type=generalAttachment&api-version=6.0-preview.1
Test Run:
GET https://vstmr.dev.azure.com/{organization}/{project}/_apis/testresults/runs/{runId}/testlog?type=generalAttachment&api-version=6.0-preview.1
Looking for guidance on how to get the required response, as shown below:
{
"logReference": {
"scope": 0,
"buildId": 0,
"releaseId": 0,
"releaseEnvId": 0,
"runId": 1,
"resultId": 0,
"subResultId": 0,
"type": 1,
"filePath": "textAsFileAttachment.txt"
},
"modifiedOn": "/Date(123456789)/",
"size": 65826,
"metaData": {}
}

Listing GitLab Repositories name using GitLab api

I want to list only GitLab repository names using the GitLab API.
I tried the command curl "https://gitlab.com/api/v4/projects?private_token=*************"
but it lists merge requests, issues, and also repository names.
How can I list only the repository names?
As Igor mentioned, the GitLab API does not let you limit the response to a single field.
I would recommend combining the original API call with the jq command, which makes it easy to parse the returned JSON:
curl "https://gitlab.com/api/v4/projects?private_token=*************" | jq '.[].name'
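To sanity-check the jq filter without hitting the API, you can run it against a small stub of the response (field names taken from the GitLab projects payload; the sample values are made up):

```shell
# Run the same jq filter against a stub of the /projects response
sample='[{"id": 4, "name": "Diaspora Client"}, {"id": 6, "name": "Other Project"}]'
echo "$sample" | jq -r '.[].name'   # prints one project name per line
```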
There's no way to limit the response to a single field via the API, but there is a simple option to return a minimal set of fields:
https://docs.gitlab.com/ee/api/projects.html#list-all-projects
curl "https://gitlab.com/api/v4/projects?simple=true&private_token=*************" would return something like:
[
{
"id": 4,
"description": null,
"default_branch": "master",
"ssh_url_to_repo": "git@example.com:diaspora/diaspora-client.git",
"http_url_to_repo": "http://example.com/diaspora/diaspora-client.git",
"web_url": "http://example.com/diaspora/diaspora-client",
"readme_url": "http://example.com/diaspora/diaspora-client/blob/master/README.md",
"tag_list": [ //deprecated, use `topics` instead
"example",
"disapora client"
],
"topics": [
"example",
"disapora client"
],
"name": "Diaspora Client",
"name_with_namespace": "Diaspora / Diaspora Client",
"path": "diaspora-client",
"path_with_namespace": "diaspora/diaspora-client",
"created_at": "2013-09-30T13:46:02Z",
"last_activity_at": "2013-09-30T13:46:02Z",
"forks_count": 0,
"avatar_url": "http://example.com/uploads/project/avatar/4/uploads/avatar.png",
"star_count": 0
},
{
"id": 6,
"description": null,
"default_branch": "master",
...

AWS ECS create service waits for user input while showing output on console

I am using a CircleCI job to create an ECS service. Below is the AWS CLI command I'm using to create the ECS service.
aws ecs create-service --cluster "test-cluster" --service-name testServiceName \
--task-definition testdef:1 \
--desired-count 1 --launch-type EC2
While this command is executing, the following error occurs and the CircleCI job fails:
(press RETURN)
{
"service": {
"serviceArn": "arn:aws:ecs:*********:<account-id>:service/testServiceName",
"serviceName": "testServiceName",
"clusterArn": "arn:aws:ecs:*********:<account-id>:cluster/test-cluster",
"loadBalancers": [],
"serviceRegistries": [],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"taskDefinition": "arn:aws:ecs:*********:<account-id>:task-definition/testdef*********:1",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"id": "ecs-svc/1585305191116328179",
"status": "PRIMARY",
:
Too long with no output (exceeded 10m0s): context deadline exceeded
Running the command locally on a minimized terminal window gives the following output
{
"service": {
"serviceArn": "arn:aws:ecs:<region>:<account-id>:service/testServiceName",
"serviceName": "testServiceName",
"clusterArn": "arn:aws:ecs:<region>:<account-id>:cluster/test-cluster",
"loadBalancers": [],
"serviceRegistries": [],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"taskDefinition": "arn:aws:ecs:<region>:<account-id>:task-definition/testdef:1",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"id": "ecs-svc/8313453507891259676",
"status": "PRIMARY",
"taskDefinition": "arn:aws:ecs:<region>:<account-id>:task-definition/testdef:1",
"desiredCount": 1,
:
Further execution is blocked until I hit some key. This is why the CircleCI job fails after the 10-minute threshold. When I run the command locally in a full-screen terminal, it does not wait and shows the output.
Is there any way to run the command so that it does not wait for any key to be hit and execution completes, so that the pipeline does not fail? Please note that the ECS service is created successfully.
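The "(press RETURN)" prompt and the blocking behavior suggest the AWS CLI v2 client-side pager: when the output doesn't fit the terminal, the CLI pipes it through a pager that waits for a keypress. If that is the cause, the pager can be disabled with standard AWS CLI v2 options:

```shell
# Option 1: disable the pager for a single invocation (AWS CLI v2)
# aws ecs create-service --cluster "test-cluster" --service-name testServiceName \
#   --task-definition testdef:1 --desired-count 1 --launch-type EC2 --no-cli-pager

# Option 2: disable it for the whole CI job via the environment
export AWS_PAGER=""
echo "pager disabled: AWS_PAGER='${AWS_PAGER}'"
```

Setting `cli_pager=` in `~/.aws/config` has the same effect for all commands.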

How to get the last update of the index in elasticsearch

How can I find the datetime of the last update of an Elasticsearch index?
I tried to follow the example in "Elasticsearch index last update time", but nothing happened.
curl -XGET 'http://localhost:9200/_all/_mapping'
{"haystack":{"mappings":{"modelresult":{"_all":{"auto_boost":true},"_boost":{"name":"boost","null_value":1.0},"properties":{"act_name":{"type":"string","boost":1.3,"index_analyzer":"index_ngram","search_analyzer":"search_ngram"},"django_ct":{"type":"string","index":"not_analyzed","include_in_all":false},"django_id":{"type":"string","index":"not_analyzed","include_in_all":false},"hometown":{"type":"string","boost":0.9,"index_analyzer":"index_ngram","search_analyzer":"search_ngram"},"id":{"type":"string"},"text":{"type":"string","analyzer":"ngram_analyzer"}}},"mytype":{"_timestamp":{"enabled":true,"store":true},"properties":{}}}}}
curl -XPOST localhost:9200/your_index/your_type/_search -d '{
"size": 1,
"sort": {
"_timestamp": "desc"
},
"fields": [
"_timestamp"
]
}'
{"took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":99,"max_score":null,"hits":[{"_index":"haystack","_type":"modelresult","_id":"account.user.96","_score":null,"sort":[-9223372036854775808]}]}}
What is wrong?
First, you need to proceed as in that linked question and enable the _timestamp field in your mapping:
{
"modelresult" : {
"_timestamp" : { "enabled" : true }
}
}
Then you can query your index for a single document with the most recent timestamp like this:
curl -XPOST localhost:9200/haystack/modelresult/_search -d '{
"size": 1,
"sort": {
"_timestamp": "desc"
},
"fields": [
"_timestamp"
]
}'
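Note that `_timestamp` was deprecated in Elasticsearch 2.0 and removed in 5.0, so on newer versions the usual approach is to store an explicit date field at index time and sort on that instead. A hedged sketch (the index name `my_index` and field name `updated_at` are illustrative, not from the question):

```shell
# Sketch for ES 5+: store your own timestamp field and sort on it.
mapping='{
  "mappings": {
    "properties": {
      "updated_at": {"type": "date"}
    }
  }
}'
query='{
  "size": 1,
  "sort": [{"updated_at": "desc"}],
  "_source": ["updated_at"]
}'
# Validate both JSON bodies locally before sending them
echo "$mapping" | python3 -m json.tool > /dev/null && echo "mapping OK"
echo "$query"   | python3 -m json.tool > /dev/null && echo "query OK"
# curl -XPUT  localhost:9200/my_index -H 'Content-Type: application/json' -d "$mapping"
# curl -XPOST localhost:9200/my_index/_search -H 'Content-Type: application/json' -d "$query"
```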

Elasticsearch mac address search/mapping

I can't get MAC address search to return proper results when I'm doing partial searches (half an octet). I mean, if I look for the exact MAC address I get results, but if I try a partial search like "00:19:9" I don't get anything until I complete the octet.
Can anyone point out which mapping I should use to index it, or what kind of search query I should use?
curl -XDELETE http://localhost:9200/ap-test
curl -XPUT http://localhost:9200/ap-test
curl -XPUT http://localhost:9200/ap-test/devices/1 -d '
{
"user" : "James Earl",
"macaddr" : "00:19:92:00:71:80"
}'
curl -XPUT http://localhost:9200/ap-test/devices/2 -d '
{
"user" : "Earl",
"macaddr" : "00:19:92:00:71:82"
}'
curl -XPUT http://localhost:9200/ap-test/devices/3 -d '
{
"user" : "James Edward",
"macaddr" : "11:19:92:00:71:80"
}'
curl -XPOST 'http://localhost:9200/ap-test/_refresh'
curl -XGET http://localhost:9200/ap-test/devices/_mapping?pretty
When I search for exact matches, I get them correctly:
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
"query" : {
"query_string" : {
"query":"\"00\\:19\\:92\\:00\\:71\\:80\""
}
}
}'
# RETURNS:
{
"took": 6,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.57534903,
"hits": [
{
"_index": "ap-test",
"_type": "devices",
"_id": "1",
"_score": 0.57534903,
"_source": {
"user": "James Earl",
"macaddr": "00:19:92:00:71:80"
}
}
]
}
}
HOWEVER, I need to be able to match partial MAC address searches like this:
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
"query" : {
"query_string" : {
"query":"\"00\\:19\\:9\""
}
}
}'
# RETURNS 0 instead of returning 2 of them
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
So, what mapping should I use? Is there a better query string to accomplish this? BTW, what's the difference between using 'query_string' and 'text'?
It looks like you haven't defined a mapping at all, which means elasticsearch will guess off your datatypes and use the standard mappings.
For the macaddr field, this means it will be recognised as a string and the standard string analyzer will be used. This analyzer breaks the string up on whitespace and punctuation, leaving you with tokens consisting of pairs of digits: e.g. "00:19:92:00:71:80" gets tokenized to 00, 19, 92, 00, 71, 80. The same tokenization happens when you search.
What you want is to define an analyzer which turns "00:19:92:00:71:80" into the tokens 00, 00:, 00:1, 00:19, and so on.
Try this:
curl -XPUT http://localhost:9200/ap-test -d '
{
"settings" : {
"analysis" : {
"analyzer" : {
"my_edge_ngram_analyzer" : {
"tokenizer" : "my_edge_ngram_tokenizer"
}
},
"tokenizer" : {
"my_edge_ngram_tokenizer" : {
"type" : "edgeNGram",
"min_gram" : "2",
"max_gram" : "17"
}
}
}
}
}'
curl -XPUT http://localhost:9200/ap-test/devices/_mapping -d '
{
"devices": {
"properties": {
"user": {
"type": "string"
},
"macaddr": {
"type": "string",
"index_analyzer" : "my_edge_ngram_analyzer",
"search_analyzer": "keyword"
}
}
}
}'
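To get a feel for what the edge-ngram tokenizer will index, the prefix tokens it emits for one MAC address can be approximated in the shell (this mimics the min_gram=2 / max_gram=17 settings above; the real tokenizer runs inside Elasticsearch):

```shell
# Approximate the edgeNGram tokens (min_gram=2, max_gram=17) for one MAC.
# A 17-character MAC yields prefixes of length 2..17; the partial query
# string "00:19:9" appears among them, which is why the partial search matches.
mac="00:19:92:00:71:80"
for n in $(seq 2 17); do
  echo "$mac" | cut -c1-"$n"
done
```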
Put the documents as before, then search with the query specifically aimed at the field:
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
"query" : {
"query_string" : {
"query":"\"00\\:19\\:92\\:00\\:71\\:80\"",
"fields": ["macaddr", "user"]
}
}
}'
As for your last question, the text query is deprecated.
Good luck!
After some research I found an easier way to make it work.
Elasticsearch query options are confusing sometimes because there are so many of them:
query_string: a full-fledged search with a myriad of options and wildcard uses.
match: simpler, and doesn't require wildcard characters or other "advanced" features. It's great for search boxes because the chances of it failing are very small, if not non-existent.
That said, this is the query that worked best in most cases and didn't require a customized mapping:
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
"query" : {
"match_phrase_prefix" : {
"_all" : "00:19:92:00:71:8"
}
}
}'