AWS Boto3 SDK Access Analyzer list_findings Input Argument Error

I am trying to use the AWS boto3 Python SDK to work with the Access Analyzer API, specifically the list_findings action. The relevant API documentation is here:
https://docs.aws.amazon.com/access-analyzer/latest/APIReference/API_ListFindings.html
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer.html#AccessAnalyzer.Client.list_findings
According to the documentation, the input parameters to this action include:
"sort": {
"attributeName": "string",
"orderBy": "string"
}
The documentation, as far as I can tell, doesn't place any restrictions on what attributeName can be, so it seems that any valid field on the returned findings should be valid input. The structure of the returned findings is described as:
{
    "findings": [
        {
            "action": [ "string" ],
            "analyzedAt": number,
            "condition": {
                "string" : "string"
            },
            "createdAt": number,
            "error": "string",
            "id": "string",
            "isPublic": boolean,
            "principal": {
                "string" : "string"
            },
            "resource": "string",
            "resourceOwnerAccount": "string",
            "resourceType": "string",
            "sources": [
                {
                    "detail": {
                        "accessPointArn": "string"
                    },
                    "type": "string"
                }
            ],
            "status": "string",
            "updatedAt": number
        }
    ],
    "nextToken": "string"
}
I would like to sort by the createdAt attribute on the returned findings. Every field I have tried returns the same error:
Error has occured in AWS Access Analyzer Integration: An error occurred (ValidationException) when calling the ListFindings operation: Invalid sort.attributeName
An example of the code I am trying is as follows:
client = aws_session(
    region='region',
    roleArn='roleArn',
    roleSessionName='roleSessionName',
    roleSessionDuration='roleSessionDuration',
)
kwargs = {
    'analyzerArn': 'some_ARN_here'
}
kwargs['sort'] = {
    'attributeName': 'createdAt',
    'orderBy': 'ASC'
}
response = client.list_findings(**kwargs)
Without the sort argument the code behaves as expected and returns results. No matter what attribute I use in the sort field, it always returns the same error.
How do I know what the valid attributeName values are? What is the correct format for passing parameters in this case? I have not been able to find any other examples online that use this particular portion of the boto3 SDK. Any insight would be appreciated.
I am looking to sort the findings in order to retrieve only the most recent. By default the oldest is returned first when specifying maxResults as an argument.

The answer, and hopefully this saves someone a month's time, is that this is not documented correctly anywhere. I had to use my browser's inspector to see the params passed in the request when performing some actions in the AWS UI for Access Analyzer, and I noticed it was passing UpdatedAt as the attributeName for the sort field.
This subtle distinction was breaking the API call: the U needs to be capitalized, i.e. UpdatedAt rather than updatedAt, when passed as a parameter. Where is this documented? Nowhere, it seems. Not all of the fields on findings work as a sort attribute either; createdAt, for example, does not. So in order to use the sort field you need to format it as follows on list_findings:
kwargs['sort'] = {
    'attributeName': 'UpdatedAt',
    'orderBy': 'ASC'
}
This is not representative of how the fields are returned from the API and is not intuitive: the format in which parameters are passed to the API is not the same as the format in which they are received.
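Putting it together, here is a minimal sketch of the working call. The region and analyzer ARN are placeholders, and since the goal was to retrieve only the most recent finding, this version sorts descending and caps the results at one:

import boto3

# Minimal sketch of the working call. The region and analyzer ARN are
# placeholders; substitute your own values.
client = boto3.client('accessanalyzer', region_name='us-east-1')

response = client.list_findings(
    analyzerArn='arn:aws:access-analyzer:us-east-1:123456789012:analyzer/example',
    sort={
        'attributeName': 'UpdatedAt',  # capital U, unlike the 'updatedAt' response field
        'orderBy': 'DESC',             # most recently updated finding first
    },
    maxResults=1,
)
findings = response['findings']
most_recent = findings[0] if findings else None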

Related

Acumatica MoveEntry API not generating lot number

Has anyone here used the PUT /MoveEntry call successfully before? I can make the call to create the record, but I was expecting the API to populate the lot number and it is not. It does via the UI, but not via the API. Is there a trick that I'm missing?
Update 1:
PUT /MoveEntry
{
    "Hold": {
        "value": true
    },
    "Details": [
        {
            "OrderType": {
                "value": "RO"
            },
            "ProductionNbr": {
                "value": "RO0000001"
            },
            "Quantity": {
                "value": 1
            },
            "Location": {
                "value": "PRODRECPT"
            },
            "Warehouse": {
                "value": "ABBOTSFORD"
            }
        }
    ]
}
It always records the document successfully, but the lot number is never populated.
Could it be a lot class configuration issue?
Update 2:
Acu support agrees this looks like a defect and has passed the case on to Acu development.
I would take a look at the Numbering Sequence called AMBatch, as that is the one that seems to be used by default on the Move Entry screen.
After working with Acu Support, it came back that the API requires the "OperationNbr" field to be populated with the Bill of Materials operation number. Then the lot number is generated as expected.
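For illustration, here is the same request body with that field added. The operation number "0010" is a made-up value; use the operation defined on the production order's Bill of Materials:

PUT /MoveEntry
{
    "Hold": { "value": true },
    "Details": [
        {
            "OrderType": { "value": "RO" },
            "ProductionNbr": { "value": "RO0000001" },
            "OperationNbr": { "value": "0010" },
            "Quantity": { "value": 1 },
            "Location": { "value": "PRODRECPT" },
            "Warehouse": { "value": "ABBOTSFORD" }
        }
    ]
}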

Azure Data Factory Copy Activity error mapping JSON to SQL

I have an Azure Data Factory Copy Activity that uses a REST request to Elasticsearch as the Source and attempts to map the response to a SQL table as the Sink. Everything works fine except when it attempts to map the data field that contains dynamic JSON. I get the following error:
{
    "errorCode": "2200",
    "message": "ErrorCode=UserErrorUnsupportedHierarchicalComplexValue,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The retrieved type of data JObject with value {\"name\":\"department\"} is not supported yet, please either remove the targeted column or enable skip incompatible row to skip them.,Source=Microsoft.DataTransfer.Common,'",
    "failureType": "UserError",
    "target": "CopyContents_Paged",
    "details": []
}
Here's an example of my mapping configuration:
"type": "TabularTranslator",
"mappings": [
{
"source": {
"path": "['_source']['id']"
},
"sink": {
"name": "ContentItemId",
"type": "String"
}
},
{
"source": {
"path": "['_source']['status']"
},
"sink": {
"name": "Status",
"type": "Int32"
}
},
{
"source": {
"path": "['_source']['data']"
},
"sink": {
"name": "Data",
"type": "String"
}
}
],
"collectionReference": "$['hits']['hits']"
}
The JSON in the data object is dynamic so I'm unable to do an explicit mapping for the nested fields within it. That's why I'm trying to just store the entire JSON object under data in a column of a SQL table.
How can I adjust my mapping configuration to allow this to work properly?
I posted this question on the MSDN forums and was told that if you are using a tabular sink you can set the option "mapComplexValuesToString": true and it should allow complex JSON properties to get mapped correctly. This resolved my ADF copy activity issue.
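For illustration, here is where that flag sits in the translator from the question (a sketch showing only the data mapping; the other mappings are unchanged):

{
    "type": "TabularTranslator",
    "mapComplexValuesToString": true,
    "mappings": [
        {
            "source": { "path": "['_source']['data']" },
            "sink": { "name": "Data", "type": "String" }
        }
    ],
    "collectionReference": "$['hits']['hits']"
}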
I had the same problem a few days ago. You need to convert your JSON object to a JSON string; that will solve your mapping problem (UserErrorUnsupportedHierarchicalComplexValue).
Try it and tell me if it also resolves your error.

How to retrieve Work Item linked to specific commit - Azure Devops REST API

I need to be able to retrieve the linked work item of any given specific commit. I'm currently using the following API call
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits/{commitId}?api-version=5.0
with the following response
{
    "parents": [],
    "treeId": "7fa1a3523ffef51c525ea476bffff7d648b8cb3d",
    "push": {
        "pushedBy": {
            "id": "8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d",
            "displayName": "Chuck Reinhart",
            "uniqueName": "fabrikamfiber3@hotmail.com",
            "url": "https://vssps.dev.azure.com/fabrikam/_apis/Identities/8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d",
            "imageUrl": "https://dev.azure.com/fabrikam/_api/_common/identityImage?id=8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d"
        },
        "pushId": 1,
        "date": "2014-01-29T23:33:15.2434002Z"
    },
    "commitId": "be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4",
    "author": {
        "name": "Chuck Reinhart",
        "email": "fabrikamfiber3@hotmail.com",
        "date": "2014-01-29T23:32:09Z"
    },
    "committer": {
        "name": "Chuck Reinhart",
        "email": "fabrikamfiber3@hotmail.com",
        "date": "2014-01-29T23:32:09Z"
    },
    "comment": "First cut\n",
    "url": "https://dev.azure.com/fabrikam/_apis/git/repositories/278d5cd2-584d-4b63-824a-2ba458937249/commits/be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4",
    "remoteUrl": "https://dev.azure.com/fabrikam/_git/Fabrikam-Fiber-Git/commit/be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4",
    "_links": {
        "self": {
            "href": "https://dev.azure.com/fabrikam/_apis/git/repositories/278d5cd2-584d-4b63-824a-2ba458937249/commits/be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4"
        },
        "repository": {
            "href": "https://dev.azure.com/fabrikam/_apis/git/repositories/278d5cd2-584d-4b63-824a-2ba458937249"
        },
        "changes": {
            "href": "https://dev.azure.com/fabrikam/_apis/git/repositories/278d5cd2-584d-4b63-824a-2ba458937249/commits/be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4/changes"
        },
        "web": {
            "href": "https://dev.azure.com/fabrikam/_git/Fabrikam-Fiber-Git/commit/be67f8871a4d2c75f13a51c1d3c30ac0d74d4ef4"
        },
        "tree": {
            "href": "https://dev.azure.com/fabrikam/_apis/git/repositories/278d5cd2-584d-4b63-824a-2ba458937249/trees/7fa1a3523ffef51c525ea476bffff7d648b8cb3d"
        }
    }
}
from https://learn.microsoft.com/en-us/rest/api/azure/devops/git/commits/get?view=azure-devops-rest-5.0, and I am missing a way to see what work item it's linked to, or whether it is linked at all. Does anyone know of a way to get this information? Thanks
You could use the Get Commits API (docs here). The base request looks like:
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits?api-version=5.0
You could then add the following parameters:
fromCommitId - string - If provided, a lower bound for filtering commits alphabetically
toCommitId - string - If provided, an upper bound for filtering commits alphabetically
includeWorkItems - boolean - Whether to include linked work items
So your final query would look something like the following, with the toCommitId and fromCommitId parameters both set to the commit id you are after (the documentation doesn't specify whether these bounds are inclusive or exclusive, so you might have to tweak this slightly):
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits?includeWorkItems=true&toCommitId={searchCriteria.toCommitId}&fromCommitId={searchCriteria.fromCommitId}&api-version=5.0
The result should contain a workItems property inside each commit object of the response as per this documentation.
Note:
Parameters that use the searchCriteria prefix in their name can be specified without it as query parameters, e.g. searchCriteria.$top -> $top
There is also:
ids - array - If provided, specifies the exact commit ids of the commits to fetch. May not be combined with other parameters.
This could allow you to forgo passing the to and from commit ids, but the docs state that it "may not be combined with other parameters", even though the example request does combine it with other parameters. I haven't tried this myself, so please do comment when you find out whether you went with from/to ids or just ids.
OP's action
The OP ended up using the following request, as they didn't mind all commits being returned:
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits?includeWorkItems=true&api-version=5.0
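As a concrete sketch, the same request in Python with the requests library; the organization, project, repository, and personal access token are placeholders:

import requests

# Placeholders: substitute your organization, project, repository id, and PAT.
url = ('https://dev.azure.com/myorg/myproject/_apis/git/'
       'repositories/myrepo/commits')
params = {
    'searchCriteria.includeWorkItems': 'true',  # attach linked work items
    'api-version': '5.0',
}
# Azure DevOps accepts a personal access token as the basic-auth password.
resp = requests.get(url, params=params, auth=('', '<personal-access-token>'))
resp.raise_for_status()

for commit in resp.json()['value']:
    # With includeWorkItems=true, each commit carries a workItems array of
    # references ({id, url}) to its linked work items.
    for work_item in commit.get('workItems', []):
        print(commit['commitId'], '->', work_item['id'])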

Azure API Management: Discriminate operations by both path and query parameters

I have a backend API (one that implements ApiController) which I'd like to put behind an APIM API. ApiController allows us to discriminate between two different GET operations based on the query parameters that are passed in. When I attempt to define these endpoints in APIM, I get an error whose message suggests an endpoint is defined solely by the path and operation. But that seems to contradict documentation I found here, which suggests there's a way to differentiate between operations based on query parameters:
Required parameters across both path and query must have unique names. (In OpenAPI a parameter name only needs to be unique within a location, for example path, query, header. However, in API Management we allow operations to be discriminated by both path and query parameters (which OpenAPI doesn't support). That's why we require parameter names to be unique within the entire URL template.)
I have an ApiController that defines two different Get operations, differing only by the query parameters. How do I represent that in my APIM API?
The problem comes from multiple operation objects with the same OperationId, which is invalid Swagger. In my case the title in the Swagger file did not match the name of the selected API; after changing the title attribute of the doc tag to match the destination API, it worked.
Here is a similar SO thread you could refer to.
I got my answer from Azure support, sharing the info here:
APIM endpoints are defined by the path, method, and the name you assign to the operation. To differentiate between two GET endpoints to the same controller, differing only by query parameters, you need to hardcode required query parameters into the path. See the following two images:
In the latter image, the hardcoded query parameter is classified by the UI as a template parameter, but it still behaves like a regular query parameter. Query arguments defined in this way:
Are required
Can appear anywhere in a request's list of query arguments
Are not case-sensitive
Are listed as a "Request Parameter" alongside all other path parameters and query arguments in the developer portal
Edit:
There's a typo in the screenshots. The URLs are case-sensitive, and the casing of "blah" was different in each case. Here's what the OpenAPI specification looks like when the casing is consistent. The overloaded path (with the query parameter hardcoded into the path template) appears in a section called x-ms-paths:
{
    "swagger": "2.0",
    "info": {
        "title": "Echo API",
        "version": "1.0"
    },
    "host": "<hostUrl>",
    "basePath": "/echo",
    "schemes": ["https"],
    "securityDefinitions": {
        "apiKeyHeader": {
            "type": "apiKey",
            "name": "Ocp-Apim-Subscription-Key",
            "in": "header"
        },
        "apiKeyQuery": {
            "type": "apiKey",
            "name": "subscription-key",
            "in": "query"
        }
    },
    "security": [{
        "apiKeyHeader": []
    }, {
        "apiKeyQuery": []
    }],
    "paths": {
        "/Blah": {
            "get": {
                "operationId": "blah",
                "summary": "Blah",
                "responses": {}
            }
        }
    },
    "tags": [],
    "x-ms-paths": {
        "/Blah?alpha={alpha}": {
            "get": {
                "operationId": "blah2",
                "summary": "Blah2",
                "parameters": [{
                    "name": "alpha",
                    "in": "query",
                    "required": true,
                    "type": "string"
                }],
                "responses": {}
            }
        }
    }
}

Cannot query on a date range, get back no results each time

I'm having a hard time understanding why I keep getting 0 results back from a query I am trying to perform. Basically, I am trying to return only results within a date range. The table in question has a createdAt field, which is a DateTime scalar that gets filled in automatically (by Prisma or GraphQL, I'm not sure which). So every table has a createdAt holding a DateTime string for when the row was created.
Here is my schema for this given table:
type Audit {
  id: ID! @unique
  user: User!
  code: AuditCode!
  createdAt: DateTime!
  updatedAt: DateTime!
  message: String
}
I queried this table and got back some results, which I'll share here:
"getAuditLogsForUser": [
{
"id": "cjrgleyvtorqi0b67jnhod8ee",
"code": {
"action": "login"
},
"createdAt": "2019-01-28T17:14:30.047Z"
},
{
"id": "cjrgn99m9osjz0b67568u9415",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T18:06:03.254Z"
},
{
"id": "cjrgnhoddosnv0b67kqefm0sb",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T18:12:35.631Z"
},
{
"id": "cjrgnn6ufosqo0b67r2tlo1e2",
"code": {
"action": "login"
},
"createdAt": "2019-01-28T18:16:52.850Z"
},
{
"id": "cjrgq8wwdotwy0b67ydi6bg01",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T19:29:45.616Z"
},
{
"id": "cjrgqaoreoty50b67ksd04s2h",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T19:31:08.382Z"
}]
Here is my getAuditLogsForUser schema definition:
getAuditLogsForUser(userId: String!, before: DateTime, after: DateTime): [Audit!]!
So to test I would want to get all the results between the last and the first.
2019-01-28T19:31:08.382Z is the last;
2019-01-28T17:14:30.047Z is the first.
Here is the code that injects these arguments into the query's where clause:
if (args.after && args.before) {
where['createdAt_lte'] = args.after;
where['createdAt_gte'] = args.before;
}
console.log(where)
return await context.db.query.audits({ where }, info);
In the playground I execute this statement:
getAuditLogsForUser(before: "2019-01-28T19:31:08.382Z" after: "2019-01-28T17:14:30.047Z") { id code { action } createdAt }
So I want anything with createdAt_lte (less than or equal) set to 2019-01-28T17:14:30.047Z and createdAt_gte (greater than or equal) set to 2019-01-28T19:31:08.382Z.
However, I get literally no results back even though we KNOW there are results.
I tried to look up documentation on the DateTime scalar on the GraphQL website. I literally couldn't find anything on it, but I see it in my generated Prisma schema, where it's just defined as a scalar, with nothing else special about it. I don't think I'm defining it anywhere else either. I am using graphql-yoga if that makes any difference.
(generated prisma file)
scalar DateTime
I'm wondering if it's truly even handling this as a real datetime. It must be, though, because it gets generated as a DateTime ISO string in UTC.
I'm just having a hard time grasping what my issue could possibly be at this moment; maybe I need to define it in some other way? Any help is appreciated.
Sorry, I misread your example in my first reply. This is what you tried in the playground, correct?
getAuditLogsForUser(
    before: "2019-01-28T19:31:08.382Z",
    after: "2019-01-28T17:14:30.047Z"
) {
    id
    code { action }
    createdAt
}
This will not work, since before and after do not refer to time; they are cursors used for pagination and expect an id. Since ids are also strings, this query does not throw an error, but it will not find anything. Here is how pagination is used: https://www.prisma.io/docs/prisma-graphql-api/reference/queries-qwe1/#pagination
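For contrast, this is roughly what cursor pagination looks like with these arguments (a sketch against the generated audits query; the id is taken from the results in the question):

audits(
    first: 2,
    after: "cjrgleyvtorqi0b67jnhod8ee"
) {
    id
    createdAt
}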
What I think you want to do is use a filter in the query. For this you can use the where argument. The query would look like this:
getAuditLogsForUser(
    where: { AND: [
        { createdAt_lte: "2019-01-28T19:31:08.382Z" },
        { createdAt_gte: "2019-01-28T17:14:30.047Z" }
    ]}
) {
    id
    code { action }
    createdAt
}
Here are the docs for filtering: https://www.prisma.io/docs/prisma-graphql-api/reference/queries-qwe1/#filtering
OK, so I figured out it had to do with the fact that I used "after" and "before" as argument names. I have no clue why this completely screws everything up, but it just won't return ANY results if you have these as arguments. Very strange. They must be shadowing some other variables somehow; probably a bug on GraphQL's end.
As soon as I tried a new variable name, voila, it works.
This is also possible:
const fileData = await prismaClient.fileCuratedData.findFirst({
    where: {
        fileId: fileId,
        createdAt: {
            gte: fromdate, // a JS Date or ISO-8601 string for the lower bound
        },
    },
});
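To express the question's full date range in this style, both bounds can be combined under createdAt. A sketch reusing the model and field names from the snippet above; findMany and the orderBy are assumptions for returning the matches newest-first:

const inRange = await prismaClient.fileCuratedData.findMany({
    where: {
        fileId: fileId,
        createdAt: {
            gte: new Date('2019-01-28T17:14:30.047Z'), // lower bound (inclusive)
            lte: new Date('2019-01-28T19:31:08.382Z'), // upper bound (inclusive)
        },
    },
    orderBy: { createdAt: 'desc' }, // most recent first
});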
