I am trying to create a PutMongo processor in nipyapi, connected to a local NiFi instance. I have specified all the required configurations, but it doesn't seem to work.
from random import randrange
from nipyapi import canvas

PutMongoFile = canvas.create_processor(
    root_process_group,
    processor_PutMongo,
    (randrange(0, 4000), randrange(0, 4000)),
    name=None,
    config=processor_PutMongo_config)
I get the following error:
AttributeError                            Traceback (most recent call last)
<ipython-input-48-ef1b815cdbdb> in <module>
----> 1 PutMongoFile = canvas.create_processor(root_process_group,processor_PutMongo,(randrange(0,4000),randrange(0,4000)),name=None,config=processor_PutMongo_config)

~\AppData\Roaming\Python\Python38\site-packages\nipyapi\canvas.py in create_processor(parent_pg, processor, location, name, config)
    503         """
    504         if name is None:
--> 505             processor_name = processor.type.split('.')[-1]
    506         else:
    507             processor_name = name

AttributeError: 'list' object has no attribute 'type'
Appreciate any help!!!
We resolved this in an issue discussion on the repo here. The get_processor_type method is greedy by default and will return a list if more than one Processor Type matches; in this case it finds both PutMongo and PutMongoRecord. I have updated the method to allow exact matching only, and implemented better checks for this in the next release.
processor_PutMongo = canvas.get_processor_type('org.apache.nifi.processors.mongodb.PutMongo', identifier_type='name')
This returns a list of dictionaries containing the details of both matching processors.
This is the kind of JSON you would get when printing the result of such a greedy lookup (shown here for the analogous GetMongo / GetMongoRecord pair):
[
  {
    "bundle": {
      "artifact": "nifi-mongodb-nar",
      "group": "org.apache.nifi",
      "version": "1.13.2"
    },
    "controller_service_apis": "None",
    "deprecation_reason": "None",
    "description": "Creates FlowFiles from documents in MongoDB loaded by a user-specified query.",
    "explicit_restrictions": "None",
    "restricted": false,
    "tags": [
      "read",
      "get",
      "mongodb"
    ],
    "type": "org.apache.nifi.processors.mongodb.GetMongo",
    "usage_restriction": "None"
  },
  {
    "bundle": {
      "artifact": "nifi-mongodb-nar",
      "group": "org.apache.nifi",
      "version": "1.13.2"
    },
    "controller_service_apis": "None",
    "deprecation_reason": "None",
    "description": "A record-based version of GetMongo that uses the Record writers to write the MongoDB result set.",
    "explicit_restrictions": "None",
    "restricted": false,
    "tags": [
      "mongo",
      "get",
      "fetch",
      "record",
      "json",
      "mongodb"
    ],
    "type": "org.apache.nifi.processors.mongodb.GetMongoRecord",
    "usage_restriction": "None"
  }
]
The shortest solution is to just extract the element you need from the list by its index. For example:
PutMongo = canvas.create_processor(new_processor_group, processor_PutMongo[0],(randrange(0,20000),randrange(0,20000)),name=None,config=processor_PutMongo_config)
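If you would rather not rely on the ordering of the list, you can also select the exact type explicitly. A minimal sketch, assuming each returned item exposes the same type attribute that create_processor itself reads:

# The greedy lookup may return a single object or a list of matches,
# so normalise to a list and then pick the exact type required.
result = canvas.get_processor_type(
    'org.apache.nifi.processors.mongodb.PutMongo', identifier_type='name')
candidates = result if isinstance(result, list) else [result]
processor_PutMongo = next(
    c for c in candidates
    if c.type == 'org.apache.nifi.processors.mongodb.PutMongo')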
When I try to get the number of Python files in a public GitHub repo using the following query:
https://api.github.com/search/code?q=*%20extension:py%20repo:VismithaMona/codedeploytest2
I got the response:
{ "total_count": 0, "incomplete_results": false, "items": [] }
However, that repo has at least 2 .py files, as you can see at the following link:
https://github.com/VismithaMona/codedeploytest2
So why is the GitHub API not working here? Is it because the repo is too old?
I tried searching for related questions using Google, but no similar questions were ever asked.
This is explained by an announcement on the GitHub blog from December 2020:
Starting today, GitHub Code Search will only index repositories that have had recent activity within the last year. Recent activity for a repository means that it has had a commit or has shown up in a search result. If the repository does not have any activity for an entire year, the repository will be purged from the Code Search index. This change will enable the most relevant content for developers to surface in the code search index as well as keeping code search queries fast for all customers.
So you don't find results because the repository has not been updated recently enough.
However, the API endpoint https://api.github.com/repos/VismithaMona/codedeploytest2/contents/ is still available. It lists the files and folders in the root folder of the repository. A small excerpt of the JSON response, with various files and properties removed:
[
  {
    "name": "scripts",
    "url": "https://api.github.com/repos/VismithaMona/codedeploytest2/contents/scripts?ref=master",
    "type": "dir"
  },
  {
    "name": "templates",
    "url": "https://api.github.com/repos/VismithaMona/codedeploytest2/contents/templates?ref=master",
    "type": "dir"
  },
  {
    "name": "test_app.py",
    "type": "file"
  },
  {
    "name": "web.py",
    "type": "file"
  }
]
Using this response you can count the number of .py files in the root folder. If you also want to search in subfolders then you can write a small recursive function to also access the provided urls, for example https://api.github.com/repos/VismithaMona/codedeploytest2/contents/scripts?ref=master for the scripts subfolder.
For example in Python:
import requests

def checkfiles(url, extension):
    """Recursively count files whose name ends in `.extension` under `url`."""
    print(f"check {url}")
    files_found = 0
    response = requests.get(url)
    if response.status_code == 200:
        contents = response.json()
        for item in contents:
            # endswith('.py') avoids false matches such as 'happy'
            if item['type'] == 'file' and item['name'].endswith("." + extension):
                files_found += 1
                print(f"found: {item['name']}")
            if item['type'] == 'dir':
                # recursive call here
                files_found += checkfiles(item['url'], extension)
    return files_found

total_files = checkfiles("https://api.github.com/repos/VismithaMona/codedeploytest2/contents", "py")
print("number of files with extension .py:", total_files)
Output:
check https://api.github.com/repos/VismithaMona/codedeploytest2/contents
check https://api.github.com/repos/VismithaMona/codedeploytest2/contents/scripts?ref=master
found: stop_flask.py
check https://api.github.com/repos/VismithaMona/codedeploytest2/contents/templates?ref=master
found: test_app.py
found: web.py
number of files with extension .py: 3
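Note that every directory costs one extra request, and unauthenticated requests to the GitHub API are rate-limited (60 requests per hour at the time of writing), so for larger repositories you may want to authenticate the requests with a token.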
Is it somehow possible to get a file's history (all related changesets) via an API request if the file was branched and/or renamed?
For example, if I need to find the history of an object in the Azure DevOps UI, I can search for the object in the project under a certain path, like this: [first screenshot: the changeset history for the item]
And if I need to find the first appearance of the object in the repository, I can get it by expanding the "branch and rename" history: [second screenshot: the expanded "branch and rename" history]
I need to get this information via API requests.
I have tried to find API requests that can do this, but I only found requests that return the changesets shown in the first screenshot, where the object has the same name and is located under the path defined in the search parameter; there is no information about the renaming/branching operations.
GET https://dev.azure.com/Contoso/_apis/tfvc/changesets?api-version=6.0&searchCriteria.itemPath=$/Contoso/Trunk/Main/Metadata/Application_Ext_Contoso/Application_Ext_Contoso/AxSecurityPrivilege/Entity.xml
This returns only 3 changesets: 2162, 2161 and 391.
POST https://dev.azure.com/Contoso/Contoso/_api/_versioncontrol/history?api-version=6.0
with this request body:
{
    "repositoryId": "",
    "searchCriteria": "{\"itemPaths\":[\"$/Contoso/Trunk/Main/Metadata/Application_Ext_Contoso/Application_Ext_Contoso/AxSecurityPrivilege/Entity.xml\"], \"followRenames\": true, \"top\": 50}",
    "includeSourceRename": true
}
This also returns only 3 changesets; it only finds the specific item path. I tried to experiment with includeSourceRename and followRenames, but they do not work as I expected.
POST https://almsearch.dev.azure.com/Contoso/Contoso/_apis/search/codesearchresults?api-version=6.0-preview.1
with the body
{
    "searchText": "Entity.xml",
    "$skip": 0,
    "$top": 25,
    "filters": {
        "Project": [
            "Contoso"
        ],
        "Repository": [
            "$/Contoso"
        ],
        "Path": [
            "$/Contoso/"
        ]
    },
    "$orderBy": [
        {
            "field": "filename",
            "sortOrder": "ASC"
        }
    ],
    "includeFacets": true
}
This also returns information about only the same 3 changesets.
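For reference, I issue these requests from Python roughly like this; a minimal sketch, where pat holds a personal access token (the endpoint and parameters are the ones from the first request above):

import requests

# Minimal sketch: query the TFVC changesets endpoint for one item path.
# Assumes `pat` is a personal access token with Code (read) scope.
pat = "<personal-access-token>"
response = requests.get(
    "https://dev.azure.com/Contoso/_apis/tfvc/changesets",
    params={
        "api-version": "6.0",
        "searchCriteria.itemPath": "$/Contoso/Trunk/Main/Metadata/Application_Ext_Contoso/Application_Ext_Contoso/AxSecurityPrivilege/Entity.xml",
    },
    auth=("", pat),  # basic auth: empty user name plus the PAT
)
for changeset in response.json().get("value", []):
    print(changeset["changesetId"])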
Are there any other approaches to get this information via the API?
I am using the Perspective API (you can check it out at http://perspectiveapi.com/) for my Discord application. I am sending an analyze request and the API returns this:
{
  "attributeScores": {
    "TOXICITY": {
      "spanScores": [
        {
          "begin": 0,
          "end": 22,
          "score": {
            "value": 0.9345592,
            "type": "PROBABILITY"
          }
        }
      ],
      "summaryScore": {
        "value": 0.9345592,
        "type": "PROBABILITY"
      }
    }
  },
  "languages": [
    "en"
  ],
  "detectedLanguages": [
    "en"
  ]
}
I need to get the "value" in "summaryScore" as an integer. I searched on Google, but I only found examples of reading values from JSON that is flat or nested just one level deep. How can I do that?
Note: Sorry if I am asking something really easy or if my English is bad. English is not my primary language and I am not very experienced with Node.js.
First you must make sure the object you have received is parsed by Node.js into an object rather than kept as a raw JSON string (look at this answer for how). Once it has been parsed, you can do the following:
Reading from nested objects or arrays is as easy as doing this:
object.attributeScores.TOXICITY.summaryScore.value
If you look closely at the object and its structure, you can see that the root object (the outermost {}) contains 3 fields: "attributeScores", "languages" and "detectedLanguages".
The field you are looking for lives inside the "summaryScore" object, which itself lives inside the "TOXICITY" object, and so on. Thus you need to traverse the object structure until you get to the value you need. Also note that the value is a probability between 0 and 1; if you need it as an integer, you can scale and round it, for example Math.round(object.attributeScores.TOXICITY.summaryScore.value * 100) gives 93 for the response above.
I have the below Terraform configuration for a Cognito client:
data "aws_cognito_user_pools" "re_user_pool" {
name = "${var.cognito_user_pool_name}"
}
resource "aws_cognito_user_pool_client" "app_client" {
name = "re-app-client"
user_pool_id = data.aws_cognito_user_pools.re_user_pool.id
depends_on = [data.aws_cognito_user_pools.re_user_pool]
explicit_auth_flows = ["USER_PASSWORD_AUTH"]
prevent_user_existence_errors = "ENABLED"
allowed_oauth_flows_user_pool_client = true
allowed_oauth_flows = ["code"]
allowed_oauth_scopes = ["phone", "openid", "email", "profile", "aws.cognito.signin.user.admin"]
supported_identity_providers = ["COGNITO", "Google"]
callback_urls = ["https://scnothzsf0.execute-api.ap-southeast-2.amazonaws.com/staging/signup"]
}
It references the Cognito user pool which already exists on AWS. The error happens on the line user_pool_id = data.aws_cognito_user_pools.re_user_pool.id, where the user pool id is used in aws_cognito_user_pool_client.
I get the error:
Error: Error creating Cognito User Pool Client: InvalidParameterException: 1 validation error detected: Value 're-user' at 'userPoolId' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w-]+_[0-9a-zA-Z]+
on infra/cognito.tf line 5, in resource "aws_cognito_user_pool_client" "app_client":
5: resource "aws_cognito_user_pool_client" "app_client" {
It seems the format of the ID is not correct. I have read this document https://www.terraform.io/docs/providers/aws/d/cognito_user_pools.html and it has a reference attribute ids ("The list of cognito user pool ids"). I wonder why it gives a list of user pool ids. How can I reference a single ID?
I also tried to reference it as user_pool_id = data.aws_cognito_user_pools.re_user_pool.ids[0] but got an error:
Error: Invalid index
on infra/cognito.tf line 8, in resource "aws_cognito_user_pool_client" "app_client":
8: user_pool_id = data.aws_cognito_user_pools.re_user_pool.ids[0]
This value does not have any indices.
The re_user_pool referenced above is defined here:
resource "aws_cognito_user_pool" "re_user_pool" {
name = "re-user"
}
I came across your question while working through this same problem. I see the question is several months old, but I'm still going to add an answer for anyone else that ends up here like I did.
First, the solution is to convert the ids value from a set to a list via the tolist function and then access it as you would any terraform list.
Caveat: In my case, I have ensured I only have one user pool for a given name, but you could get multiple user pools if you haven't followed this convention. This solution will not be a complete solution for that situation, but perhaps it will still point in the right direction.
Example code:
data "aws_cognito_user_pools" "test" {
name = "a_name"
}
output "test" {
value = "${tolist(data.aws_cognito_user_pools.test.ids)[0]"
}
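Applied to the original question, the reference would then be user_pool_id = tolist(data.aws_cognito_user_pools.re_user_pool.ids)[0].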
Second, how I arrived at it:
I added an output block so I could see what I was working with, and I commented out the problematic lines in my Terraform file so that terraform apply would succeed. Next I ran terraform apply followed by terraform output --json (note: the apply must be successful for output to have the latest values).
Example temporary output block:
output "test" {
value = "${data.aws_cognito_user_pools.test}" // output top-level object for debugging
}
Relevant terraform apply output:
test = {
  "arns" = [
    "<redacted>",
  ]
  "id" = "a_name"
  "ids" = [
    "us-east-1_<redacted>",
  ]
  "name" = "a_name"
}
Relevant terraform output --json output:
"test": {
"sensitive": false,
"type": [
"object",
{
"arns": [
"set",
"string"
],
"id": "string",
"ids": [
"set",
"string"
],
"name": "string"
}
],
"value": {
"arns": [
"<redacted>"
],
"id": "a_name",
"ids": [
"us-east-1_<redacted>"
],
"name": "a_name"
}
}
As you can see, the ids portion is a set of type string. I decided to try converting ids to a list to see if I could then access the 0 index, and it worked. I feel like this could be a Terraform bug, but I haven't filed an issue yet.
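As a side note, on newer Terraform versions (0.15 and later) the built-in one() function is an alternative to the tolist trick: one(data.aws_cognito_user_pools.test.ids) returns the single element of the set and raises an error if the set contains more than one element, which also guards against the multiple-user-pools caveat mentioned above.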
I've been trying to find some documentation for jsonpatch==1.16 on how to make PATCH paths case-insensitive. The problem is that:
PATCH /users/123
[
{"op": "add", "path": "/firstname", "value": "Spammer"}
]
This seems to mandate that the DB (MySQL/MariaDB) column is also exactly firstname, and not for example Firstname or FirstName. When I change the path in the JSON to /FirstName, which is what the DB column is named, the patch works just fine. But I'm not sure you are supposed to use CamelCase in the JSON in this case; it seems a bit non-standard.
How can I make jsonpatch at least case-insensitive? Or alternatively, is there some way to insert some mapping in the middle, for example like this:
def users_mapping(self, path):
    select = {
        "/firstname": "FirstName",
        "/lastname": "last_name",  # Just an example
    }
    return select.get(path, None)
Using Python 3.5, SQLAlchemy 1.1.13 and Flask-SQLAlchemy 2.2
Well, the answer is: yes, you can add a mapping. Here's my implementation with some annotations:
The endpoint handler (eg. PATCH /news/123):
def patch(self, news_id):
    """Change an existing News item partially using an instruction-based JSON,
    as defined by: https://tools.ietf.org/html/rfc6902
    """
    news_item = News.query.get_or_404(news_id)
    self.patch_item(news_item, request.get_json())
    db.session.commit()
    # asdict() comes from dictalchemy method make_class_dictable(news)
    return make_response(jsonify(news_item.asdict()), 200)
The method it calls:
# Needs: from jsonpatch import JsonPatch
# news = the db.Model for News, from SQLAlchemy
# patchdata = the JSON from the request, like this:
# [{"op": "add", "path": "/title", "value": "Example"}]
def patch_item(self, news, patchdata, **kwargs):
    # Map the values to DB column names
    mapped_patchdata = []
    for p in patchdata:
        # Replace eg. /title with /Title
        p = self.patch_mapping(p)
        mapped_patchdata.append(p)
    # This follows the normal JsonPatch procedure
    data = news.asdict(exclude_pk=True, **kwargs)
    # The only difference is that I pass the mapped version of the list
    patch = JsonPatch(mapped_patchdata)
    data = patch.apply(data)
    news.fromdict(data)
And the mapping implementation:
# Needs: from copy import deepcopy
def patch_mapping(self, patch):
    """This is used to map a patch "path" or "from" to a custom value.
    Useful for when the patch path/from is not the same as the DB column name.
    Eg.
    PATCH /news/123
    [{ "op": "move", "from": "/title", "path": "/author" }]
    If the News column is "Title", having "/title" would fail to patch
    because the case does not match. So the mapping converts this:
    { "op": "move", "from": "/title", "path": "/author" }
    To this:
    { "op": "move", "from": "/Title", "path": "/Author" }
    """
    # You can define arbitrary column names here.
    # As long as the DB column is identical, the patch will work just fine.
    mapping = {
        "/title": "/Title",
        "/contents": "/Contents",
        "/author": "/Author"
    }
    mutable = deepcopy(patch)
    for prop in patch:
        if prop == "path" or prop == "from":
            # Fall back to the original value so unmapped paths pass through unchanged
            mutable[prop] = mapping.get(patch[prop], patch[prop])
    return mutable
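If you want a quick standalone check of the mapping idea outside the class, here is a minimal sketch (the mapping and document values are hypothetical):

from copy import deepcopy
from jsonpatch import JsonPatch

def map_paths(operation, mapping):
    # Rewrite the "path"/"from" values of one patch operation,
    # leaving unmapped paths untouched.
    mutable = deepcopy(operation)
    for prop in ("path", "from"):
        if prop in mutable:
            mutable[prop] = mapping.get(mutable[prop], mutable[prop])
    return mutable

mapping = {"/title": "/Title"}
raw = [{"op": "add", "path": "/title", "value": "Example"}]
mapped = [map_paths(op, mapping) for op in raw]
print(JsonPatch(mapped).apply({"Title": ""}))  # -> {'Title': 'Example'}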