I have 4 string variables like this:
a = 'a long string'
b = 'a longer string'
c = 'a path (with a single slash \)'
d = 'another path (with a single slash \)'
I am supposed to be adding this to a variable, which is a list of dictionaries. Something like this:
var = [
    {
        "op": "add",
        "path": "/fields/System.Title",
        "from": null,
        "value": a
    },
    {
        "op": "add",
        "path": "/fields/System.Description",
        "from": null,
        "value": b
    },
    {
        "op": "add",
        "path": "/fields/System.AreaPath",
        "from": null,
        "value": c
    },
    {
        "op": "add",
        "path": "/fields/System.State",
        "from": null,
        "value": "New"
    },
    {
        "op": "add",
        "path": "/fields/System.IterationPath",
        "from": null,
        "value": d
    }
]
FYI, var[3] does not take any of the variables I created; only var[0], var[1], var[2] and var[4] do.
All this works fine. As you may have guessed by now, this is my payload for a POST operation that I am supposed to be sending (it's actually to create a work item in my Azure DevOps organization with the above parameters). Please note, the from in var only accepts null.
When I use POSTMAN to send the above request, it works (except I am not passing the variables in the body, but the hard-coded values). When I do the same using Python in VS Code, I am always thrown a 203, which essentially means an incorrect/incomprehensible payload. I am not able to get this working.
This is essentially the code (please assume the variables):
url = f'https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/${workitemtype}?api-version=6.0'
header = {'Content-Type': 'application/json-patch+json', 'Authorization': f'Basic {PAT}'}
request = requests.post(url = url, json = var, headers = header)
I've tried everything I can think of:
request = requests.post(url = url, data = json.dumps(var), headers = header),
request = requests.post(url = url, data = json.loads(var), headers = header),
request = requests.post(url = url, data = json.loads(str(var)), headers = header) -> this because json.loads(var) was throwing TypeError: the JSON object must be str, bytes or bytearray, not list
I also tried to include the entire var variable as a triple-quoted string - but the problem with that is that I need to pass the variables (a, b, c and d) into it, and a plain string literal cannot accept them.
How may I overcome this?
I have a feeling that it's the null that's causing this issue, but I could be wrong.
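For reference, a quick check of how Python's json module serializes None (requests uses JSON encoding like this under the hood for the json= parameter):

import json
print(json.dumps({"from": None, "value": "a long string"}))
# prints: {"from": null, "value": "a long string"}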
I tested with your code, and I failed to pass authorization when I defined the Authorization in the request headers. I fixed it by passing the PAT to the auth parameter of the requests.post method (requests then builds the Basic credentials for you, whereas the raw PAT in the header is not base64-encoded the way Basic auth expects). I made a few small changes to your code, and it worked fine for me. See below:
import requests

if __name__ == '__main__':
    a = 'a longer string'
    b = 'a longer string'
    c = r'project\area'       # raw string: '\a' would otherwise be a bell character
    d = r'project\iteration'
    var = [
        {
            "op": "add",
            "path": "/fields/System.Title",
            "from": None,
            "value": a
        },
        {
            "op": "add",
            "path": "/fields/System.Description",
            "from": None,
            "value": b
        },
        {
            "op": "add",
            "path": "/fields/System.AreaPath",
            "from": None,
            "value": c
        },
        {
            "op": "add",
            "path": "/fields/System.State",
            "from": None,
            "value": "New"
        },
        {
            "op": "add",
            "path": "/fields/System.IterationPath",
            "from": None,
            "value": d
        }
    ]
    pat = "Personal access token"
    # fill in your organization and project
    url = 'https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/$Task?api-version=6.1-preview.3'
    header = {'Content-Type': 'application/json-patch+json'}
    request = requests.post(url = url, json = var, headers = header, auth=("", pat))
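As a side note, if you do want to keep the Authorization header instead of the auth parameter, Basic auth expects the base64-encoded pair ":PAT" rather than the raw PAT. A minimal sketch:

import base64

pat = "Personal access token"
encoded = base64.b64encode((':' + pat).encode('ascii')).decode('ascii')
header = {'Content-Type': 'application/json-patch+json',
          'Authorization': 'Basic ' + encoded}  # equivalent to auth=("", pat)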
However, you can also check out azure-devops-python-api. See the example code below to create a work item.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import JsonPatchOperation

def _create_patch_operation(op, path, value):
    patch_operation = JsonPatchOperation()
    patch_operation.op = op
    patch_operation.path = path
    patch_operation.value = value
    return patch_operation

def _create_work_item_field_patch_operation(op, field, value):
    path = '/fields/{field}'.format(field=field)
    return _create_patch_operation(op=op, path=path, value=value)

if __name__ == '__main__':
    a = 'a longer string'
    b = 'a longer string'
    c = r'project\area'       # raw strings, as above
    d = r'project\iteration'
    # Fill in with your personal access token and org URL
    personal_access_token = 'PAT'
    organization_url = 'https://dev.azure.com/{org}/'
    # Create a connection to the org
    credentials = BasicAuthentication('', personal_access_token)
    connection = Connection(base_url=organization_url, creds=credentials)
    # Get a client
    wit_client = connection.clients.get_work_item_tracking_client()
    patch_document = []
    patch_document.append(_create_work_item_field_patch_operation('add', 'System.Title', a))
    patch_document.append(_create_work_item_field_patch_operation('add', 'System.Description', b))
    patch_document.append(_create_work_item_field_patch_operation('add', 'System.AreaPath', c))
    wit_client.create_work_item(patch_document, "Project", 'Task')
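The client library is published on PyPI as azure-devops, so pip install azure-devops pulls in everything the snippet above needs (including the msrest dependency).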
I am using pytest and moto to test some code similar to this:
response = athena_client.start_query_execution(
    QueryString='SELECT * FROM xyz',
    QueryExecutionContext={'Database': myDb},
    ResultConfiguration={'OutputLocation': someLocation},
    WorkGroup=myWG
)
execution_id = response['QueryExecutionId']
if response['QueryExecution']['Status']['State'] == 'SUCCEEDED':
    response = athena_client.get_query_results(
        QueryExecutionId=execution_id
    )
    results = response['ResultSet']['Rows']
    ...etc
In my test I need the values from results = response['ResultSet']['Rows'] to be controlled by the test. I am using some code like this:
backend = athena_backends[DEFAULT_ACCOUNT_ID]["us-east-1"]
rows = [{"Data": [{"VarCharValue": "xyz"}]}, {"Data": [{"VarCharValue": ...}, etc]}]
column_info = [
    {
        "CatalogName": "string",
        "SchemaName": "string",
        "TableName": "xyz",
        "Name": "string",
        "Label": "string",
        "Type": "string",
        "Precision": 123,
        "Scale": 123,
        "Nullable": "NOT_NULL",
        "CaseSensitive": True,
    }
]
results = QueryResults(rows=rows, column_info=column_info)
backend.query_results[NEEDED_QUERY_EXECUTION_ID] = results
but that is not working, as I guess NEEDED_QUERY_EXECUTION_ID is not known in the test beforehand. How can I control it?
UPDATE
Based on a suggestion, I tried to use:
results = QueryResults(rows=rows, column_info=column_info)
d = defaultdict(lambda: results.to_dict())
backend.query_results = d
to force a return of values, but it does not seem to work: moto's models.AthenaBackend.get_query_results contains this code:
results = (
    self.query_results[exec_id]
    if exec_id in self.query_results
    else QueryResults(rows=[], column_info=[])
)
return results
which will fail, as the if condition won't be satisfied (the in check never triggers the defaultdict factory).
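A quick check confirms this: a defaultdict only invokes its factory on item access, never on membership tests:

from collections import defaultdict

d = defaultdict(list)
print(42 in d)  # False - 'in' goes through __contains__ and never calls the factory
d[42]           # item access triggers __missing__ and inserts the default
print(42 in d)  # True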
Extending the defaultdict solution, you could create a custom dictionary that claims to contain every execution id and always returns the same object:
# athena_backends, DEFAULT_ACCOUNT_ID and QueryResults come from moto's
# internals, as in the question (import paths may differ across moto versions)
class QueryDict(dict):
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        rows = [{"Data": [{"VarCharValue": "xyz"}]}, {"Data": [{"VarCharValue": "..."}]}]
        column_info = [
            {
                "CatalogName": "string",
                "SchemaName": "string",
                "TableName": "xyz",
                "Name": "string",
                "Label": "string",
                "Type": "string",
                "Precision": 123,
                "Scale": 123,
                "Nullable": "NOT_NULL",
                "CaseSensitive": True,
            }
        ]
        return QueryResults(rows=rows, column_info=column_info)

backend = athena_backends[DEFAULT_ACCOUNT_ID]["us-east-1"]
backend.query_results = QueryDict()
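With that in place every lookup succeeds, so the test no longer needs to know the execution id up front. As a sketch (assuming the mocked athena_client from the question):

# any execution id - real or invented - now returns the canned results
response = athena_client.get_query_results(QueryExecutionId="any-id-at-all")
rows = response['ResultSet']['Rows']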
An alternative solution to using custom dictionaries would be to seed Moto.
Seeding Moto ensures that it will always generate the same 'random' identifiers, which means you always know what the value of NEEDED_QUERY_EXECUTION_ID is going to be.
backend = athena_backends[DEFAULT_ACCOUNT_ID]["us-east-1"]
rows = [{"Data": [{"VarCharValue": "xyz"}]}, {"Data": [{"VarCharValue": "..."}]}]
column_info = [...]
results = QueryResults(rows=rows, column_info=column_info)
backend.query_results["bdd640fb-0667-4ad1-9c80-317fa3b1799d"] = results
import requests
requests.post("http://motoapi.amazonaws.com/moto-api/seed?a=42")
# Test - the execution id will always be the same because we just seeded Moto
execution_id = athena_client.start_query_execution(...)
Documentation on seeding Moto can be found here: http://docs.getmoto.org/en/latest/docs/configuration/recorder/index.html#deterministic-identifiers
(It only talks about seeding Moto in the context of recording/replaying requests, but the functionality can be used on its own.)
Currently, we have a patch endpoint for a JSON schema, and now we want to limit the operations that can be performed while applying the patch.
The patch endpoint looks like this:
def patch(self, name, version):
    ...
    schema = Schema.get(name, version)
    serialized_schema = schema.patch_serialize()
    ...
    data = request.get_json()
    patched_schema = apply_patch(serialized_schema, data)
    serialized_data, errors = update_schema_serializer.load(
        patched_schema, partial=True)
    ...
data example:
[
    { "op": "replace", "path": "/config/notifications/", "value": "foo" },
    { "op": "add", "path": "/config/", "value": {} },
    { "op": "remove", "path": "/config/notifications/" }
]
Here we want to allow only the add, remove and replace operations. Is there any way to apply the patch only for these operations and not for copy, move and test?
In your schema definition, define op as an enum with only the options that you want.
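For example, a minimal sketch with the jsonschema package (PATCH_SCHEMA and the abort call are illustrative additions, not part of the original endpoint):

from flask import abort, request
from jsonschema import ValidationError, validate

# hypothetical schema: only add/remove/replace survive validation
PATCH_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "op": {"enum": ["add", "remove", "replace"]},
            "path": {"type": "string"},
        },
        "required": ["op", "path"],
    },
}

data = request.get_json()
try:
    validate(instance=data, schema=PATCH_SCHEMA)
except ValidationError:
    abort(400, "only add, remove and replace operations are allowed")

Any patch document containing a copy, move or test operation is then rejected before apply_patch ever runs.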
I am currently working on a Python program that queries the public GitHub API to get a GitHub user's email address. The response is a huge list with a lot of dictionaries.
My code so far
import requests
import json

# username = ''
username = 'FamousBern'
base_url = 'https://api.github.com/users/{}/events/public'
url = base_url.format(username)

try:
    res = requests.get(url)
    r = json.loads(res.text)
    # print(r)  # List slicing
    print(type(r))  # List that has a lot of dictionaries
    for i in r:
        if 'payload' in i:
            print(i['payload'][6])
    # matches = []
    # for match in r:
    #     if 'author' in match:
    #         matches.append(match)
    # print(matches)
    # print(r[18:])
except Exception as e:
    print(e)

# data = res.json()
# print(data)
# print(type(data))
# email = data['author']
# print(email)
By manually accessing this URL in the Chrome browser, I get the following:
[
    {
        "id": "15069094667",
        "type": "PushEvent",
        "actor": {
            "id": 32365949,
            "login": "FamousBern",
            "display_login": "FamousBern",
            "gravatar_id": "",
            "url": "https://api.github.com/users/FamousBern",
            "avatar_url": "https://avatars.githubusercontent.com/u/32365949?"
        },
        "repo": {
            "id": 332684394,
            "name": "FamousBern/FamousBern",
            "url": "https://api.github.com/repos/FamousBern/FamousBern"
        },
        "payload": {
            "push_id": 6475329882,
            "size": 1,
            "distinct_size": 1,
            "ref": "refs/heads/main",
            "head": "f9c165226201c19fd6a6acd34f4ecb7a151f74b3",
            "before": "8b1a9ac283ba41391fbf1168937e70c2c8590a79",
            "commits": [
                {
                    "sha": "f9c165226201c19fd6a6acd34f4ecb7a151f74b3",
                    "author": {
                        "email": "bernardberbell#gmail.com",
                        "name": "FamousBern"
                    },
                    "message": "Changed input functionality",
                    "distinct": true,
                    "url": "https://api.github.com/repos/FamousBern/FamousBern/commits/f9c165226201c19fd6a6acd34f4ecb7a151f74b3"
                }
            ]
        },
The JSON object is huge as well; I've truncated it here. I am interested in getting the email address in the author dictionary.
You're attempting to index into a dict with i['payload'][6], which will raise a KeyError (payload is a dict keyed by strings, so there is no key 6).
My personal preferred way of checking for key membership in nested dicts is using the get method with a default of an empty dict.
import requests
import json

username = 'FamousBern'
base_url = 'https://api.github.com/users/{}/events/public'
url = base_url.format(username)

res = requests.get(url)
r = json.loads(res.text)

# for each dict in the list
for event in r:
    # using .get() means you can chain .get()s for nested dicts
    # and they won't fail even if the key doesn't exist
    commits = event.get('payload', dict()).get('commits', list())
    # also using .get() with an empty list default means
    # you can always iterate over commits
    for commit in commits:
        # email = commit.get('author', dict()).get('email', None)
        # is also an option if you're not sure if those keys will exist
        email = commit['author']['email']
        print(email)
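If you only want each distinct address once, a small variation (sketch) collects them in a set instead of printing as you go:

emails = set()
for event in r:
    for commit in event.get('payload', dict()).get('commits', list()):
        author = commit.get('author', dict())
        if 'email' in author:
            emails.add(author['email'])
print(emails)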
I have the issue that I'm not able to execute the following code. The syntax seems to be okay, but when I try to execute it, I get this response:
Expression.Error: We cannot convert a value of type Record to type "Text".
Details:
Value=[Record]
Type=[Type]
let
    body = "{
        ""page"": ""1"",
        ""pageSize"": ""100"",
        ""requestParams"": {
            ""deviceUids"": [
                ""xxx-yyy-xxx-yyyy-xxxx"",
                ""yyy-xxx-yyy-xxxx-yyyy"",
                ""aaa-bbb-aaa-bbbb-aaaa"",
                ""ccc-ddd-ccc-dddd-cccc""
            ],
            ""entityColumns"": [
                {
                    ""entityId"": ""144"",
                    ""joinColumnName"": ""device_uid"",
                    ""columnName"": ""device_random_date""
                }
            ],
            ""columnNames"": [
                ""ts"",
                ""device_uid"",
                ""1"",
                ""32"",
                ""55"",
                ""203"",
                ""204""
            ],
            ""startUnixTsMs"": ""1583413637000"",
            ""endUnixTsMs"": ""1583413640000"",
            ""columnFilters"": [
                {
                    ""filterType"": ""eq"",
                    ""columnName"": ""55"",
                    ""value"": ""1234""
                }
            ],
            ""sortOrder"": [
                {
                    ""column"": ""ts"",
                    ""order"": ""DESC""
                },
                {
                    ""column"": ""55"",
                    ""order"": ""ASC""
                }
            ],
            ""entityFilters"": [
                {
                    ""entityId"": ""144"",
                    ""entityEntryIds"": [
                        ""12345-221-232-1231-123456""
                    ]
                }
            ]
        }
    }",
    Parsed_JSON = Json.Document(body),
    BuildQueryString = Uri.BuildQueryString(Parsed_JSON),
    Quelle = Json.Document(Web.Contents("http://localhost:8101/device-data-reader-api/read-paginated/xxx-xxx-yyyy-yyyy", [Headers=[#"Content-Type"="application/json"], Content = Text.ToBinary(BuildQueryString)]))
in
    Quelle
I tried to remove the quotes around the numbers, but this leads to the same issue: the system complains it cannot convert numbers to text.
I need the body to be handed over with the request in order to do a POST request. What am I doing wrong?
Since you seem to want to send this as application/json, I think you would change this bit in your code:
Content = Text.ToBinary(BuildQueryString)
to:
Content = Text.ToBinary(body)
and then you'd also get rid of the lines below (since you don't need them):
Parsed_JSON = Json.Document(body),
BuildQueryString = Uri.BuildQueryString(Parsed_JSON),
I don't think you would need Uri.BuildQueryString unless you wanted to send as application/x-www-form-urlencoded (i.e. URL encoded key-value pairs).
Unrelated: If it helps, you can build the structure in M and then use Json.FromValue to turn the structure into bytes which can be put directly into the POST body. An untested example is below.
let
    body = [
        page = "1",
        pageSize = "100",
        requestParams = [
            deviceUids = {
                "xxx-yyy-xxx-yyyy-xxxx",
                "yyy-xxx-yyy-xxxx-yyyy",
                "aaa-bbb-aaa-bbbb-aaaa",
                "ccc-ddd-ccc-dddd-cccc"
            },
            entityColumns = {
                [
                    entityId = "144",
                    joinColumnName = "device_uid",
                    columnName = "device_random_date"
                ]
            },
            columnNames = {
                "ts",
                "device_uid",
                "1",
                "32",
                "55",
                "203",
                "204"
            },
            startUnixTsMs = "1583413637000",
            endUnixTsMs = "1583413640000",
            columnFilters = {
                [
                    filterType = "eq",
                    columnName = "55",
                    value = "1234"
                ]
            },
            sortOrder = {
                [
                    column = "ts",
                    order = "DESC"
                ],
                [
                    column = "55",
                    order = "ASC"
                ]
            },
            entityFilters = {
                [
                    entityId = "144",
                    entityEntryIds = {
                        "12345-221-232-1231-123456"
                    }
                ]
            }
        ]
    ],
    Quelle = Json.Document(
        Web.Contents(
            "http://localhost:8101/device-data-reader-api/read-paginated/xxx-xxx-yyyy-yyyy",
            [
                Headers = [#"Content-Type" = "application/json"],
                Content = Json.FromValue(body)
            ]
        )
    )
in
    Quelle
It might look a little weird (since M uses [] instead of {}, {} instead of [] and = instead of :), but just mentioning in case it helps.
I am new to Python. I am writing code to generate an Excel file from data sourced by calling APIs and correlating the results: basically, taking input from one database, searching for it in the others, and fetching related information.
The 4 databases contain data like this:

EEp
---------------------
{u'data': [{u'_id': u'5c30702c8ca9f51da8178df4',
            u'encap': u'vlan-24',
            u'ip': u'7.12.12.16',
            u'mac': u'5B:P9:01:9E:42:08'}]}

PathEp
-----------
{u'data': [{u'_id': u'5c54a81a8ca9f51da84ae08e',
            u'paths': u'paths-1507',
            u'endpoint': u'eth1/10',
            u'cep': u'5B:P9:01:9E:42:08',
            u'tenant': u'ESX'}]}

ip4_address
-----------------------
{u'data': [{u'Allocation': u'Build_Reserved',
            u'address': u'7.12.12.16',
            u'name': u'fecitrix-1',
            u'state': u'RESERVED'}]}

asset
---------------
{u'data': [{u'_id': u'57ccce8110dd54f02881fedc',
            u'client': u'CES',
            u'hostname': u'fecitrix-1',
            u'os_team': u'Window'}]}
Logic:
1. If the "mac" of EEp and the "cep" of PathEp are the same, take "encap", "ip", "mac", "paths", "endpoint", "cep" and "tenant" (these values need to be exported to Excel).
2. Take the "ip" of EEp, search for it in "ip4_address", and get the "name" from ip4_address ("name" needs to be exported to Excel).
3. If the "name" of ip4_address is equal to the "hostname" in the "asset" database, take "client" and "os_team" (export them to Excel).
I have written the script but am not getting the desired result.
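In plain Python terms, the correlation I am after looks roughly like this (a sketch using the sample records above, not my actual script):

# hypothetical flat records pulled out of the four 'data' lists above
eep = {u'encap': u'vlan-24', u'ip': u'7.12.12.16', u'mac': u'5B:P9:01:9E:42:08'}
pathep = {u'paths': u'paths-1507', u'endpoint': u'eth1/10',
          u'cep': u'5B:P9:01:9E:42:08', u'tenant': u'ESX'}
ip4 = {u'address': u'7.12.12.16', u'name': u'fecitrix-1'}
asset = {u'hostname': u'fecitrix-1', u'client': u'CES', u'os_team': u'Window'}

row = {}
if eep[u'mac'] == pathep[u'cep']:
    row.update({k: eep[k] for k in (u'encap', u'ip', u'mac')})
    row.update({k: pathep[k] for k in (u'paths', u'endpoint', u'cep', u'tenant')})
if ip4[u'address'] == eep[u'ip']:
    row[u'hostname'] = ip4[u'name']
if asset[u'hostname'] == row.get(u'hostname'):
    row.update({k: asset[k] for k in (u'client', u'os_team')})
# row now holds one line of what should end up in the Excel file

My actual script: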
def get_host_details(self):
    data = {
        "find": {
            "hostname": self.controller
        },
        "projection": {
            "tenant": 1,
            "paths": 1,
            "endpoint": 1
        }
    }
    host_details = self.post("https://database.app.com/api/data/devices/PathEp/find", data)
    #print host_details
    hosts = []
    for record in host_details:
        if "mig" not in record["endpoint"]:
            hosts.append(record)
    return hosts
def get_ipaddress(self, controller):
    host_record = {"tenant": "UNKNOWN",
                   "paths": "UNKNOWN",
                   "endpoint": "UNKNOWN",
                   "ip": "UNKNOWN",
                   "mac": "UNKNOWN",
                   "encap": "UNKNOWN"}
    data = {
        "find": {
            "hostname": controller,
            "ip": {
                "$ne": "0.0.0.0"
            }
        },
        "projection": {
            "ip": 1,
            "mac": 1,
            "encap": 1,
        }
    }
    endpoints = self.post("https://database.app.com/api/data/devices/EEp/find", data)
    IPAM = self.get_dns(endpoints)
    print endpoints
    host_details = self.get_host_details()
    host_details_record = []
    for record in endpoints:
        for host in host_details:
            if record["mac"] == host["cep"]:
                host_record = {"tenant": host["tenant"],
                               "paths": host["paths"],
                               "endpoint": host["endpoint"],
                               "ip": record["ip"],
                               "mac": record["mac"],
                               "encap": record["encap"]}
                host_details_record.append(host_record)
    self.get_excel(host_details_record)
def get_dns(self, endpoints):
    ip_dns_record = []
    for each_endpoint in endpoints:
        data = {
            "find": {
                "address": {
                    "$eq": each_endpoint["ip"]
                },
            },
            "projection": {
                "name": 1
            }
        }
        dns_record = {"client": "UNKNOWN",
                      "os_team": "UNKNOWN"}
        ipam_record = self.post("https://database.app.com/api/data/"
                                "internal/ip4_address/find", data)
        if ipam_record:
            dns_record["ip_address"] = each_endpoint["ip"]
            dns_record["hostname"] = ipam_record[0]["name"]
            dns_record = self.get_remedy_details(ipam_record[0]["name"],
                                                 dns_record)
            ip_dns_record.append(dns_record)
        else:
            dns_record["ip_address"] = each_endpoint["ip"]
            dns_record["hostname"] = "UNKNOWN"
            ip_dns_record.append(dns_record)
    self.get_excel(ip_dns_record)
def get_remedy_details(self, hostname, dns_record):
    data = {
        "find": {
            "hostname": hostname.upper(),
        }
    }
    remedy_data = self.post("https://database.app.com/api/data/internal/asset/find", data)
    print(remedy_data)
    #remedy_data = remedy_data["data"]
    if remedy_data:
        dns_record["client"] = remedy_data[0].get("client", "UNKNOWN")
        dns_record["os_team"] = remedy_data[0].get("os_team", "UNKNOWN")
    else:
        dns_record["client"] = "UNKNOWN"
        dns_record["os_team"] = "UNKNOWN"
    return dns_record
def get_excel(self, ip_dns_record):
    filename = self.controller + ".xls"
    excel_file = xlwt.Workbook()
    sheet = excel_file.add_sheet('HOSTLIST')
    sheet.write(0, 0, "IP Address")
    sheet.write(0, 1, "HostName")
    sheet.write(0, 2, "Client")
    sheet.write(0, 3, "OS Team")
    for count in xrange(1, len(ip_dns_record) + 1):
        sheet.write(count, 0, ip_dns_record[count - 1]["ip_address"])
        sheet.write(count, 1, ip_dns_record[count - 1]["hostname"])
        sheet.write(count, 2, ip_dns_record[count - 1]["client"])
        sheet.write(count, 3, ip_dns_record[count - 1]["os_team"])
    excel_file.save(filename)
if __name__ == "__main__":
    controller = sys.argv[1]
    OBJ = ACIHostList(controller)
    print "SCRIPT COMPLETED"
No idea where I am going wrong or what needs to be done.
Your question leaves too much out. You should include all the errors that you get, and you should also comment your code so we can understand what you are trying to achieve in each step.
This is not an answer but something to try:
Rather than trying to wrap your head around a module like xlwt, write your data to a simple CSV file. A CSV file can be opened in Excel and formats correctly, but it is a lot easier to create.
import csv

data = [["a", "b"], ["c", "d"]]
with open("file.csv", "w+") as csv_file:
    create_csv = csv.writer(csv_file)
    create_csv.writerows(data)
Simply grab all your data into a 2D list and, using the above code, dump it into a file so you can easily read it.
Check the output of the file and see if you are getting the data you expect.
If you are not getting the desired data into this CSV file, then there is an issue with your database queries.
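For the records built in get_dns, for example, the same idea looks like this (a sketch reusing the keys from the question):

import csv

ip_dns_record = [{"ip_address": "7.12.12.16", "hostname": "fecitrix-1",
                  "client": "CES", "os_team": "Window"}]
with open("hostlist.csv", "w+") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(["IP Address", "HostName", "Client", "OS Team"])
    for rec in ip_dns_record:
        writer.writerow([rec["ip_address"], rec["hostname"],
                         rec["client"], rec["os_team"]])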