Unable to create either policies or things on Eclipse Ditto local version - eclipse-ditto

I was able to successfully run a local Eclipse Ditto instance using the latest Docker deployment downloaded from https://github.com/eclipse/ditto/tree/master/deployment/docker.
Following the tutorial, I first tried to create a new policy using the following curl:
curl -X PUT 'http://localhost:8080/api/2/policies/my.test:policy' -u 'ditto:ditto' -H 'Content-Type: application/json' -d '{
  "entries": {
    "owner": {
      "subjects": {
        "nginx:ditto": {
          "type": "nginx basic auth user"
        }
      },
      "resources": {
        "thing:/": {
          "grant": ["READ", "WRITE"],
          "revoke": []
        },
        "policy:/": {
          "grant": ["READ", "WRITE"],
          "revoke": []
        },
        "message:/": {
          "grant": ["READ", "WRITE"],
          "revoke": []
        }
      }
    }
  }
}'
I get 401 - Authentication is possible but has failed or not yet been provided, which is the same error I'm getting from the local Swagger UI.
When I try to create it on the sandbox (https://www.eclipse.org/ditto/http-api-doc.html#/), I get:
Undocumented
TypeError: NetworkError when attempting to fetch resource.
What am I missing? I chose API version 2 and authorized myself as the ditto user to start working.
Is there any additional configuration required to start working with the local version?
And what am I doing wrong with the sandbox?
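For reference, the same request can be reproduced from Node.js, which can help rule out shell-quoting issues with the curl call. This is only a minimal sketch, assuming Node.js 18+ (built-in fetch), the docker deployment's nginx reachable on localhost:8080, and the default ditto:ditto basic-auth user; it is not a fix for the 401.

// A minimal sketch, assuming Node.js 18+ (built-in fetch), the docker deployment's
// nginx reachable on localhost:8080, and the default ditto:ditto basic-auth user.
const policyId = 'my.test:policy'; // same policy ID as in the curl call above
const basicAuth = Buffer.from('ditto:ditto').toString('base64');

const policy = {
  entries: {
    owner: {
      subjects: { 'nginx:ditto': { type: 'nginx basic auth user' } },
      resources: {
        'thing:/': { grant: ['READ', 'WRITE'], revoke: [] },
        'policy:/': { grant: ['READ', 'WRITE'], revoke: [] },
        'message:/': { grant: ['READ', 'WRITE'], revoke: [] },
      },
    },
  },
};

async function createPolicy() {
  const res = await fetch(`http://localhost:8080/api/2/policies/${policyId}`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${basicAuth}`,
    },
    body: JSON.stringify(policy),
  });
  // 201 = created, 204 = modified, 401 = the basic-auth credentials were rejected.
  console.log(res.status, await res.text());
}

createPolicy().catch(console.error);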

Related

Create GitHub organisation using API

I am trying to create an organisation using the GitHub Enterprise API. I am following this:
https://docs.github.com/en/enterprise-server#2.22/rest/reference/enterprise-admin#create-an-organization
Below is the API call I tried:
curl -u acme-admin:token -X POST -H "Accept: application/vnd.github.v3+json" https://acme.example.com/api/v3/admin/organizations -d '{"login":"acme-admin","admin":"acme-admin","profile_name":"exampleorg"}'
The response is shown below. Does anyone have any clue about this? The user "acme-admin" exists in the GitHub Enterprise instance and this user is an administrator.
{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "Organization",
      "code": "custom",
      "field": "login",
      "message": "login is not available"
    },
    {
      "resource": "Organization",
      "code": "missing_field",
      "field": "admins"
    }
  ],
  "documentation_url": "https://docs.github.com/enterprise/2.22/user/rest/reference/enterprise-admin#create-an-organization"
}
A similar script mentions:
curl -i -H "Authorization: token $github_api_token" ...
You can see that header defined in "REST API / Other authentication methods".
Check if this works better than -u acme-admin:token.
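As a hedged illustration of that suggestion (not a verified fix), here is the same call from Node.js with the token sent in an Authorization header instead of -u. The host and payload are kept exactly as in the question; GITHUB_API_TOKEN is a placeholder environment variable, and Node.js 18+ is assumed for the built-in fetch.

// Sketch only: same create-organization request, but authenticating with an
// "Authorization: token ..." header instead of -u basic auth.
// Assumes Node.js 18+ (built-in fetch); GITHUB_API_TOKEN is a placeholder.
const token = process.env.GITHUB_API_TOKEN;

async function createOrganization() {
  const res = await fetch('https://acme.example.com/api/v3/admin/organizations', {
    method: 'POST',
    headers: {
      Accept: 'application/vnd.github.v3+json',
      Authorization: `token ${token}`,
      'Content-Type': 'application/json',
    },
    // Payload kept exactly as in the question.
    body: JSON.stringify({
      login: 'acme-admin',
      admin: 'acme-admin',
      profile_name: 'exampleorg',
    }),
  });
  console.log(res.status, await res.json());
}

createOrganization().catch(console.error);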

Webhook is created, but callback is never hit

I'm trying to remove the polling and integrate webhooks into the file-conversion process. The problem is that the webhook is created, but the callback is never called.
I'm following the instructions from here: https://forge.autodesk.com/en/docs/webhooks/v1/tutorials/create-a-hook-model-derivative/
The ngrok tunnel is started with the following command: ngrok http -host-header=rewrite https://localhost:44366
The callback URL is http://f36a47b8.ngrok.io/derivative and it is up and running: POST requests from Postman (internal network) and POST requests from external networks (cellular data) reach the endpoint and are successfully redirected.
A hook is created:
"hookId": "51897b50-522a-11ea-b885-f34f23e3435e",
"tenant": "c0761189-32dd-4ca3-9e52-3ae400f91651",
"callbackUrl": "http://f36a47b8.ngrok.io/derivative",
"createdBy": "HUpqLPysSUmbFGlhQo0uG8XMqimfQnRG",
"event": "extraction.updated",
"createdDate": "2020-02-18T08:40:29.829+0000",
"system": "derivative",
"creatorType": "Application",
"status": "active",
"scope": {
"workflow": "c0761189-32dd-4ca3-9e52-3ae400f91651"
},
"urn": "urn:adsk.webhooks:events.hook:51897b50-522a-11ea-b885-f34f23e3435e",
"__self__": "/systems/derivative/events/extraction.updated/hooks/51897b50-522a-11ea-b885-f34f23e3435e"
}
Then a call to modelderivative/v2/designdata/job is issued with the following content:
var job = new JobRequest
{
    Input = new Input
    {
        Urn = urnBase64,
    },
    Output = new Output
    {
        Formats = new List<Format>
        {
            new Format
            {
                Type = "svf",
                Views = new List<string> { "2d", "3d" }
            }
        },
        Destination = new Destination { Region = "EMEA" }
    },
    Misc = new Misc
    {
        Workflow = workflowId
    }
};
The response is a success with a URN (like before).
From that point on, nothing comes from the webhook. The callback is never reached, even though the file gets converted after some time and can be loaded in the viewer as before.
I've viewed those topics:
Unable to receive Forge webhooks, or unable to get them to fire
Why is webhook workflow not taken into consideration when creating modelderivative job?
but they didn't help.
What am I missing?
It turns out that there is a problem with Model Derivative jobs in the 'EMEA' region: no callbacks are fired when a job finishes. Changing the region to 'us' fixes the issue, and the callback is hit when a job event occurs.
From the documentation example, change the region parameter:
curl -X 'POST' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -H 'Authorization: Bearer PtnrvrtSRpWwUi3407QhgvqdUVKL' \
  -H 'x-ads-force: false' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/job' \
  -d '{
    "input": {
      "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LnppcA",
      "compressedUrn": true,
      "rootFilename": "A5.iam"
    },
    "output": {
      "destination": {
        "region": "us"  <-- change the region from 'EMEA' to 'us'
      },
      "formats": [
        {
          "type": "svf",
          "views": ["2d", "3d"]
        }
      ]
    }
  }'

How to build a NodeJS variable to create a regexp query for ElasticSearch

I'm working at a company that used to have a monolithic PHP/MySQL CMS which controlled the website, but we are now trying to get the website to pull data from our API rather than directly from MySQL. The API is simply ElasticSearch on AWS. I wrote some code which now moves our data from MySQL to ElasticSearch. And now I can get the data I want with a curl call like this:
curl --verbose -d '{"from" : 0, "size" : 10000, "query": { "bool": { "should": [ { "regexp": { "string-of-words-associated-with-this-document": { "value": ".*steel.*" } } }, { "regexp": { "string-of-words-associated-with-this-document": { "value": ".*services.*" } } } ] } } }' -H 'Content-Type: application/json' -X GET "https://search-sameday01-ntsw7b7shy3wu.us-east-1.es.amazonaws.com/crawlers/_search?pretty=true"
This works great. Each document in ElasticSearch has a field that contains the words we want to query against, and we match against that field using regexp queries.
Now I'm writing a new app that checks data coming in from our web crawlers, and looks to see if we have certain names already in our database. The new app is a NodeJS app, so I decided to use this library:
https://github.com/elastic/elasticsearch-js
I need to build up what might be many regexp clauses, so I go into a loop and build up many clauses in an array:
array_of_elasticsearch_clauses_should_match.push( { "regexp": { "string-of-words-associated-with-this-document": { "value": ".*" + word_sanitized + ".*" } } } );
So I thought I could then pass in this variable like this:
es_client.search({
  index: 'crawlers',
  type: 'sameday',
  body: {
    query: {
      bool: {
        should: array_of_elasticsearch_clauses_should_match
      }
    }
  }
}).then(function (resp) {
But I get this error:
Trace: [parsing_exception] [array_of_elasticsearch_clauses_should_match] query malformed, no start_object after query name, with { line=1 & col=75 }
How could I build up the regexp clauses in a variable and then pass it in?
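The error message suggests the variable name itself ended up in the serialized query, which typically happens when the body is assembled as a string. Below is a minimal sketch, assuming the body is passed as a plain JavaScript object; the example words come from the curl call above, and the sanitizing step and es_client setup are assumptions carried over from the question.

// Build the should-clauses in a loop, then pass the body as a plain JS object.
// `words` and the sanitizing step are assumptions; the index, type and field
// name come from the question, and es_client is the elasticsearch-js client
// already created there.
const words = ['steel', 'services'];
const array_of_elasticsearch_clauses_should_match = [];

for (const word of words) {
  const word_sanitized = word.toLowerCase().replace(/[^a-z0-9]/g, '');
  array_of_elasticsearch_clauses_should_match.push({
    regexp: {
      'string-of-words-associated-with-this-document': {
        value: '.*' + word_sanitized + '.*',
      },
    },
  });
}

es_client.search({
  index: 'crawlers',
  type: 'sameday',
  body: {
    // A real object, not a JSON string containing the variable name.
    query: {
      bool: {
        should: array_of_elasticsearch_clauses_should_match,
      },
    },
  },
}).then(function (resp) {
  console.log(resp.hits.hits);
}, function (err) {
  console.error(err);
});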

API call to upload an image to an Object Storage container from a Bluemix Node.js app

I am trying to upload an image to an Object Storage container and get the URL of that image from a Node.js app deployed on Bluemix. To achieve this I need to use a POST or PUT API call. I am able to authenticate with Object Storage, but I have not been able to achieve this functionality through the API calls, so I need some help with them (I am using the object-storage npm package). Can someone who has worked with such API calls for images on Object Storage help me out, or share any kind of sample working API call? Any help appreciated.
Object Storage APIs are derived from the OpenStack Swift API spec. In order to add an object of any sort to a Bluemix Object Storage container, you'll need to do two things:
1. Authenticate to the Object Storage instance to obtain an authorization token.
2. Perform actions on the container using the token obtained.
I assume that you already have access to the JSON credentials provided by the Object Storage service, similar to:
{
  "auth_url": "https://identity.open.softlayer.com",
  "domainId": "nice_long_hex_value",
  "domainName": "some_number",
  "password": "not_gonna_tell_you",
  "project": "object_storage_hex_value",
  "projectId": "project_hex_value",
  "region": "dallas",
  "userId": "another_fine_hex_value",
  "username": "some_text_with_hex_values"
}
Step 1: Obtain an X-Auth-Token. Four items (userId, username, password and auth_url) should come from the credentials above.
curl -i -X POST -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "id": "another_fine_hex_value",
          "password": "not_gonna_tell_you"
        }
      }
    },
    "scope": {
      "project": {
        "id": "project_hex_value"
      }
    }
  }
}' "{auth_url}/v3/auth/tokens" | tee response.txt | grep X-Subject-Token | sed 's/.*X-Subject-Token: \([^ ]*\).*/\1/g' | tee >(awk '{printf("\nX-Auth-Token: %s\n\nJSON Response Body:\n", $0)}' > /dev/tty) | sed -n '/{/,$p' <response.txt | python -m json.tool && rm response.txt
This should result in a 500+ line JSON response body (take note of the public interface for the dallas region within the swift endpoints array), similar to:
{
  "token": {
    "methods": [
      "password"
    ],
    "roles": [
      {
        "id": "redacted",
        "name": "ObjectStorageOperator"
      }
    ],
    "expires_at": "2016-03-09T20:26:39.192753Z",
    "project": {
      "domain": {
        "id": "some_hex_value",
        "name": "some_int"
      },
      "id": "another_hex_value",
      "name": "one_more_hex_value"
    },
    "catalog": [
      ...
      {
        "endpoints": [
          {
            "region_id": "london",
            ...
          },
          {
            ...
          },
          {
            "region_id": "dallas",
            "url": "https://dal.objectstorage.open.softlayer.com/v1/AUTH_",
            "region": "dallas",
            "interface": "public",
            "id": "some_unique_id"
          },
          {
            ...
          },
          {
            ...
          },
          {
            ...
          }
        ],
        "type": "object-store",
        "id": "hex_values_rock",
        "name": "swift"
      },
      ...
    ],
    "extras": {},
    "user": {
      "domain": {
        "id": "hex_value",
        "name": "another_fine_int"
      },
      "id": "tired_of_hex_values_yet?",
      "name": "cheers_one_more_hex_value_for_the_road"
    },
    ...
  }
}
Specifically, we want to identify the Swift Object Storage API URL, which has the form:
https://dal.objectstorage.open.softlayer.com/v1/AUTH_some-hex-value
It is linked to your desired object storage region (dallas, london, …) and associated with a public interface; you will find it within the endpoints section of the catalog entry named "swift".
Even more importantly, the HTTP response headers of this /v3/auth/tokens call contain an authentication token that we also need to record for subsequent authenticated HTTP API calls.
Here is a sample of the HTTP response headers:
Connection: Keep-Alive
Content-Length: 12089
Content-Type: application/json
Date: Wed, 09 Mar 2016 19:26:39 GMT
Keep-Alive: timeout=5, max=21
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_wsgi/3.4 Python/2.7.5
Vary: X-Auth-Token
X-Subject-Token: gAAAAABW4Hjv5O8yQRwYbkV81s7KC0mTxlh_tXTFtzDEf3ejsP_CByfvvupOeVWWcWrB6pfVbUyG5THZ6qM1-BiQcBUo1WJOHWDzMMrEB5nru69XBd-J5f5GISOGFjIxPPnNmEDZT_pahnBwaBQiJ8vrg9p5obdtRJeuxk7ADVRQFcBcRhAL-PI
x-openstack-request-id: req-26a078fe-d0a7-4a75-b32d-89d3461c55f1
The X-Subject-Token is the important response header. Its value will be reused within all subsequent HTTP Request Headers using the header X-Auth-Token. Obvious, right?
Step 2: With this token, let's add an object to a container named "ibmjstart".
curl -s -X PUT -i -H "Content-Type: text/plain" \
  -H "X-Auth-Token: {X-Subject-Token from above}" \
  -H "Cache-Control: no-cache" \
  -d "Awesome sauce is best served warm" "{API AUTH URL obtained above}/ibmjstart/test.txt"
If all goes well, this should result in a new container named ibmjstart which contains a text file named test.txt with a single line of content.
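Since the question is specifically about a Node.js app, here is a rough sketch of the same two-step flow without curl and without the object-storage npm helper. It is only a sketch under assumptions: Node.js 18+ for the built-in fetch, the credentials JSON shown earlier saved locally as object-storage-credentials.json (a hypothetical filename), and the same container and object names as the curl example.

// Rough sketch of the two-step flow shown above, assuming Node.js 18+ (built-in fetch).
// creds is the service credentials JSON from Bluemix (fields as shown earlier);
// the filename is a placeholder.
const creds = require('./object-storage-credentials.json');

// Step 1: obtain the X-Subject-Token and the swift public endpoint for our region.
async function authenticate() {
  const res = await fetch(`${creds.auth_url}/v3/auth/tokens`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      auth: {
        identity: {
          methods: ['password'],
          password: { user: { id: creds.userId, password: creds.password } },
        },
        scope: { project: { id: creds.projectId } },
      },
    }),
  });

  const token = res.headers.get('x-subject-token'); // reused later as X-Auth-Token
  const body = await res.json();

  // Find the "swift" object-store entry and its public endpoint for our region.
  const swift = body.token.catalog.find(entry => entry.type === 'object-store');
  const endpoint = swift.endpoints.find(
    ep => ep.region === creds.region && ep.interface === 'public'
  );

  return { token, url: endpoint.url };
}

// Step 2: PUT an object into the "ibmjstart" container (same names as the curl example).
async function upload() {
  const { token, url } = await authenticate();

  const res = await fetch(`${url}/ibmjstart/test.txt`, {
    method: 'PUT',
    headers: { 'X-Auth-Token': token, 'Content-Type': 'text/plain' },
    body: 'Awesome sauce is best served warm',
  });

  console.log('Upload status:', res.status); // 201 on success
  console.log('Object URL:', `${url}/ibmjstart/test.txt`);
}

upload().catch(console.error);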

Chef, override attributes not applying to recipe

This is the cookbook I downloaded:
https://github.com/edelight/chef-mongodb
I installed Chef server, Chef workstation, and have a testnode ready to bootstrap.
A role I created:
$ knife role create mongodb_standalone_testproj
JSON format:
{
  "name": "mongodb_standalone_testproj",
  "description": "Deploy MongoDB standalone with override attributes",
  "json_class": "Chef::Role",
  "default_attributes": {
  },
  "override_attributes": {
    "mongodb::default": {
      "port": "27060",
      "dbpath": "/data/"
    }
  },
  "chef_type": "role",
  "run_list": [
    "recipe[mongodb::default]"
  ],
  "env_run_lists": {
  }
}
However, when I bootstrap the testnode with this role:
knife bootstrap testnode --sudo -x <omit> -P <omit> -N testnode -r 'role[mongodb_standalone_testproj]'
The log is here: http://pastebin.com/DWxY3vNV
The problem is that MongoDB installed and ran on the testnode, but the override attributes (port and dbpath) did not get applied. Any clues?
Those attributes are not correct:
"override_attributes": {
"mongodb::default": {
"port": "27060",
"dbpath": "/data/"
}
},
I'm willing to bet you want:
"override_attributes": {
"mongodb": {
"config": {
"port": "27060",
"dbpath": "/data/"
}
}
},
I think you're just overriding the wrong attribute (not the full attribute path). Looking at the defaults for the mongodb cookbook, they list:
default['mongodb']['config']['port'] = 27017
But you are using the equivalent of:
['mongodb']['port']
