Python: how to replace by blank or NA as exception in a list of properties? - python-3.x

I tried googling for some hours without finding a solution, so I created this account to ask myself.
I am using Python to generate a user list and export it as CSV.
I can print(Users['Id'],Users['Name']) without error, as all users have these two fields.
But if I print(Users['Id'],Users['Name'],Users['Email']), I get errors. If I wrap it in try/except and pass on the exception, I get only one user in the result (S3 alex), because only alex has the Email field.
Is there a way to put all six field names in the table header, and then, whenever a user has no value for a field, just put NA or leave it blank? Thanks.
My code looks like this:
import csv

fo = open('Userlist.csv', 'w', newline='')
data_obj = csv.writer(fo)
data_obj.writerow(['cnt', 'Name', 'Id'])
cnt = 1
result = get_users()
for user in result['Users']:
    print(user['Name'], user['Id'])                      # to see the result
    data_obj.writerow([cnt, user['Name'], user['Id']])   # to write data into csv rows
    cnt += 1
fo.close()
# If I print(result), I get the result below, with different fields per user:
"Users": [
{
"Id": "S-1xxxx",
"Name": "S1 Peter",
"State": "DISABLED",
"UserRole": "USER"
},
{
"Id": "S-2xxxx",
"Name": "S2 Mary",
"State": "DISABLED",
"UserRole": "USER"
},
{
"Id": "S-3xxxx",
"Email": "alex#domain.com",
"Name": "S3 alex",
"State": "ENABLED",
"UserRole": "USER",
"EnabledDate": "2020-1-5"
},
{
"Id": "S-4xxxx",
"Name": "S3 brand",
"State": "DELETED",
}]
Expected result: https://i.stack.imgur.com/fIMMB.png

import csv

with open("Userlist.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Id", "Name", "Email"])
    for user in users:
        w.writerow([user.get("Id"), user.get("Name"), user.get("Email")])
The with keyword causes Python to use a context manager, which means you don't need to worry about closing your file. It's not vital to this solution, but it's good practice.
Using get() looks up your key and returns None if it doesn't exist, so no error is thrown for missing fields.
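Building on that, here is a minimal sketch that writes all six field names from the sample output as the header and fills missing values with NA via get()'s default argument (it assumes result = get_users() as in your code; adjust the field list as needed):

import csv

fieldnames = ['Id', 'Name', 'Email', 'State', 'UserRole', 'EnabledDate']  # the six headers

result = get_users()
with open('Userlist.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(fieldnames)
    for user in result['Users']:
        # get() with a default writes "NA" whenever a user lacks a field
        w.writerow([user.get(name, 'NA') for name in fieldnames])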

Related

How to get the user information using the user lookup ID from the fields property when accessing items from a SharePoint list using the MS Graph API

I am accessing a SharePoint list using the MS Graph API endpoint:
https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items?expand=fields
I am getting the list items just fine, but I also want to get the user information attached in each field. The data item returned looks like this:
{
    ...other properties,
    "fields": {
        "@odata.etag": "\"eTag,1\"",
        "id": "1",
        "ContentType": "Item",
        "Title": "<Some Title>",
        "Modified": "<modified dateTime>",
        "Created": "<created dateTime>",
        "AuthorLookupId": "12",
        "EditorLookupId": "12",
        "_UIVersionString": "1.0",
        "Attachments": false,
        "Edit": "",
        "LinkTitleNoMenu": "<num>",
        "LinkTitle": "<num>",
        "ItemChildCount": "0",
        "FolderChildCount": "0",
        "_ComplianceFlags": "",
        "_ComplianceTag": "",
        "_ComplianceTagWrittenTime": "",
        "_ComplianceTagUserId": "",
        "Status_Name": "<status_name>",
        "Title0": "<some_title>",
        "Dept": "Dept A",
        "Emp_LeadLookupId": "200", // This is the user whose details I need (email id)
        "Quality_Approver": "<some_user>"
    }
}
How do I get the user's details as well, and not just a LookupId? Or how can I use the lookup ID to get that user's information?
I searched above and beyond but didn't find anything relevant. Any help is greatly appreciated!
Currently, Microsoft Graph does not support resolving users through the lookup column.
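A commonly reported workaround (not an official Graph feature, so treat this as an assumption to verify against your tenant) is that the numeric lookup ID refers to an item in the site's hidden "User Information List", which Graph can read like any other list. A rough Python sketch:

import requests

site_id = "<site-id>"          # same site id used in the items request
lookup_id = "200"              # e.g. Emp_LeadLookupId from the item above
headers = {"Authorization": "Bearer <access-token>"}

# ASSUMPTION: addressing the hidden list by its display name; some tenants
# may require resolving its list id first via /sites/{site-id}/lists.
url = (f"https://graph.microsoft.com/v1.0/sites/{site_id}"
       f"/lists/User Information List/items/{lookup_id}?expand=fields")

resp = requests.get(url, headers=headers)
resp.raise_for_status()
print(resp.json().get("fields", {}))   # usually contains the user's name/e-mail fields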

How to read nested API links within the JSON file

My JSON file is coming out like this. What needs to happen is to go to the API link within u_parent and populate the values from that API link (with sysparm_display_value=true) into the df. Is that possible? I need to do this because this endpoint is giving me the same name and parent, and only the link in u_parent will give me the correct parent details.
{
    "u_name": "******",
    "u_parent": {
        "display_value": "*****",
        "link": "https://*****.******.com/api/now/table/u_region_hierarchies/ed7f652f1b29341051380e93cc4bcbd7"
    },
    "sys_id": "159967df1b75601070bfdb9cbc4bcb35",
    "sys_updated_by": "mlarcheveque",
    "sys_created_on": "01/24/2021 17:31:26",
    "sys_mod_count": "1",
    "u_active": "true",
    "u_region_id": "**********",
    "sys_updated_on": "07/30/2021 14:13:33",
    "sys_tags": "",
    "sys_created_by": "admin"
},
The API link from that u_parent returns the following values, and I want the display value from u_parent:
{
    "result": {
        "u_name": "*****",
        "u_parent": {
            "display_value": "*****",
            "link": "https://*****.*****.com/api/now/table/u_region_hierarchies/6d7f252f1b29341051380e93cc4bcbd7"
        },
        "sys_id": "217f652f1b29341051380e93cc4bcbd4",
        "sys_updated_by": "mlarcheveque",
        "u_id": "*****",
        "sys_created_on": "07/30/2021 14:11:49",
        "sys_mod_count": "0",
        "sys_updated_on": "07/30/2021 14:11:49",
        "sys_tags": "",
        "sys_created_by": "mlarcheveque"
    }
}
So I am thinking this would involve a loop that goes through each row and gets the value from the nested API link.
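A rough sketch of that idea in Python, using requests and pandas (the credentials, the record list and the exact parent field to keep are placeholders based on the sample above, so adjust them to your instance):

import requests
import pandas as pd

session = requests.Session()
session.auth = ("<user>", "<password>")   # placeholder ServiceNow credentials

def parent_display_value(record):
    """Follow the nested u_parent link and return the parent's display value."""
    link = record.get("u_parent", {}).get("link")
    if not link:
        return None
    resp = session.get(link, params={"sysparm_display_value": "true"})
    resp.raise_for_status()
    parent = resp.json()["result"]          # the linked endpoint wraps data in "result"
    return parent.get("u_name")             # or whichever parent field you actually need

records = [...]  # the list of rows from your original JSON response
df = pd.DataFrame(records)
df["u_parent_name"] = [parent_display_value(r) for r in records]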

Azure Search match against two properties of the same object

I would like to do a query that matches against two properties of the same item in a sub-collection.
Example:
[
    {
        "name": "Person 1",
        "contacts": [
            { "type": "email", "value": "person.1#xpto.org" },
            { "type": "phone", "value": "555-12345" }
        ]
    }
]
I would like to be able to search for emails that contain xpto.org, but doing something like the following doesn't work:
search.ismatchscoring('email','contacts/type,','full','all') and search.ismatchscoring('/.*xpto.org/','contacts/value,','full','all')
Instead, it will evaluate each condition in the context of the main object, so documents like the following will also match:
[
    {
        "name": "Person 1",
        "contacts": [
            { "type": "email", "value": "555-12345" },
            { "type": "phone", "value": "person.1#xpto.org" }
        ]
    }
]
Is there any way around this without having an additional field that concatenates type and value?
Just saw the official doc. At this moment, there's no support for correlated search:
This happens because each clause applies to all values of its field in the entire document, so there's no concept of a "current sub-document".
https://learn.microsoft.com/en-us/azure/search/search-howto-complex-data-types
and https://learn.microsoft.com/en-us/azure/search/search-query-understand-collection-filters
The solution I've implemented was creating different collections per contact type.
This way I'm able to search directly in, let's say, the email collection without the need for correlated search. It might not be the solution for all cases, but it works well in this case.
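For illustration, the reshaped document and query might look roughly like this (emailContacts and phoneContacts are made-up field names for this sketch, not fields from the original index):

{
    "name": "Person 1",
    "emailContacts": [ "person.1#xpto.org" ],
    "phoneContacts": [ "555-12345" ]
}

search.ismatchscoring('/.*xpto.org/', 'emailContacts', 'full', 'all')

Because the email values now live in their own collection, the regex only has to match that one field, and no correlation between sub-fields is needed.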

How to get ALL likes for a given media-id

I am trying to iterate over the user ID of each like for a given {media_id}
https://api.instagram.com/v1/media/{media-id}/likes?access_token=ACCESS-TOKEN
is returning something like this (a data array of approx 300 likes)
{
    "data": [
        {
            "username": "jack",
            "first_name": "Jack",
            "last_name": "Dorsey",
            "type": "user",
            "id": "66"
        },
        {
            "username": "sammyjack",
            "first_name": "Sammy",
            "last_name": "Jack",
            "type": "user",
            "id": "29648"
        }
    ]
}
The problem is that it does not return ALL the likes, nor any pagination information.
Is there any workaround to get ALL likes for a given {media_ID}?
You're using the correct API endpoint to get media likes; however, this endpoint has a limitation: it only returns a maximum of 100-120 likes per media, with no pagination.
Unfortunately there is no workaround!
The same limitation applies for the comments endpoint.
Check this Python library out.
Then you can use this sample code I made; however, it will only get the 1000 most recent likes.
from InstagramAPI import InstagramAPI

# Placeholder credentials: the library needs a logged-in session
API = InstagramAPI("your_username", "your_password")
API.login()

likes_list = []

def get_likes_list(username):
    API.searchUsername(username)                  # look up the user's profile
    info = API.LastJson
    username_id = info['user']['pk']
    user_posts = API.getUserFeed(username_id)     # gets the user's feed
    info = API.LastJson
    media_id = info['items'][0]['id']             # most recent post from the user
    API.getMediaLikers(media_id)
    f = API.LastJson['users']
    for x in f:
        likes_list.append(x['username'])

get_likes_list("tailopez")
print(likes_list)

How to search through data with an arbitrary number of fields?

I have a web-form builder for science events. The event moderator creates a registration form with an arbitrary number of boolean, integer, enum and text fields.
The created form is used to:
register a new member for an event;
search through registered members.
What is the best search tool for the second task (searching the members of an event)? Is Elasticsearch well suited for this task?
I wrote a post about how to index arbitrary data into Elasticsearch and then to search it by specific fields and values. All this, without blowing up your index mapping.
The post is here: http://smnh.me/indexing-and-searching-arbitrary-json-data-using-elasticsearch/
In short, you will need to do the following steps to get what you want:
Create a special index described in the post.
Flatten the data you want to index using the flattenData function:
https://gist.github.com/smnh/30f96028511e1440b7b02ea559858af4.
Create a document with the original and flattened data and index it into Elasticsearch:
{
"data": { ... },
"flatData": [ ... ]
}
Optional: use Elasticsearch aggregations to find which fields and types have been indexed.
Execute queries on the flatData object to find what you need.
Example
Based on your original question, let's assume that the first event moderator created a form with the following fields to register members for the science event:
name string
age long
sex long - 0 for male, 1 for female
In addition to this data, the related event probably has some sort of id, let's call it eventId. So the final document could look like this:
{
    "eventId": "2T73ZT1R463DJNWE36IA8FEN",
    "name": "Bob",
    "age": 22,
    "sex": 0
}
Now, before we index this document, we will flatten it using the flattenData function:
flattenData(document);
This will produce the following array:
[
    {
        "key": "eventId",
        "type": "string",
        "key_type": "eventId.string",
        "value_string": "2T73ZT1R463DJNWE36IA8FEN"
    },
    {
        "key": "name",
        "type": "string",
        "key_type": "name.string",
        "value_string": "Bob"
    },
    {
        "key": "age",
        "type": "long",
        "key_type": "age.long",
        "value_long": 22
    },
    {
        "key": "sex",
        "type": "long",
        "key_type": "sex.long",
        "value_long": 0
    }
]
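(As an aside, here is a rough Python sketch of what such a flattening step could look like; the authoritative implementation is the flattenData function from the gist above, and this sketch only handles string/long/boolean values, matching the example output.)

def flatten_data(doc, prefix=""):
    """Flatten a JSON document into key/type/value entries like the array above."""
    entries = []
    for key, value in doc.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            entries.extend(flatten_data(value, prefix=f"{full_key}."))
        elif isinstance(value, bool):
            entries.append({"key": full_key, "type": "boolean",
                            "key_type": f"{full_key}.boolean", "value_boolean": value})
        elif isinstance(value, int):
            entries.append({"key": full_key, "type": "long",
                            "key_type": f"{full_key}.long", "value_long": value})
        else:
            entries.append({"key": full_key, "type": "string",
                            "key_type": f"{full_key}.string", "value_string": str(value)})
    return entries

flatten_data({"eventId": "2T73ZT1R463DJNWE36IA8FEN", "name": "Bob", "age": 22, "sex": 0})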
Then we will wrap this data in a document, as I showed before, and index it.
Then the second event moderator creates another form having a new field, a field with the same name and type, and also a field with the same name but a different type:
name string
city string
sex string - "male" or "female"
This event moderator decided that instead of having 0 and 1 for male and female, his form will allow choosing between two strings - "male" and "female".
Let's try to flatten the data submitted by this form:
flattenData({
    "eventId": "F1BU9GGK5IX3ZWOLGCE3I5ML",
    "name": "Alice",
    "city": "New York",
    "sex": "female"
});
This will produce the following data:
[
    {
        "key": "eventId",
        "type": "string",
        "key_type": "eventId.string",
        "value_string": "F1BU9GGK5IX3ZWOLGCE3I5ML"
    },
    {
        "key": "name",
        "type": "string",
        "key_type": "name.string",
        "value_string": "Alice"
    },
    {
        "key": "city",
        "type": "string",
        "key_type": "city.string",
        "value_string": "New York"
    },
    {
        "key": "sex",
        "type": "string",
        "key_type": "sex.string",
        "value_string": "female"
    }
]
Then, after wrapping the flattened data in a document and indexing it into Elasticsearch, we can execute complicated queries.
For example, to find members named "Bob" registered for the event with ID 2T73ZT1R463DJNWE36IA8FEN we can execute the following query:
{
    "query": {
        "bool": {
            "must": [
                {
                    "nested": {
                        "path": "flatData",
                        "query": {
                            "bool": {
                                "must": [
                                    {"term": {"flatData.key": "eventId"}},
                                    {"match": {"flatData.value_string.keyword": "2T73ZT1R463DJNWE36IA8FEN"}}
                                ]
                            }
                        }
                    }
                },
                {
                    "nested": {
                        "path": "flatData",
                        "query": {
                            "bool": {
                                "must": [
                                    {"term": {"flatData.key": "name"}},
                                    {"match": {"flatData.value_string": "bob"}}
                                ]
                            }
                        }
                    }
                }
            ]
        }
    }
}
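If you're driving this from Python, a minimal sketch with the official elasticsearch client might look like this (the index name "events" and the host are placeholders; elasticsearch-py 7.x takes the whole request as body=..., while 8.x prefers query=...):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder host

# Compact version of the query above: members named "Bob" for one event.
query = {
    "bool": {
        "must": [
            {"nested": {"path": "flatData", "query": {"bool": {"must": [
                {"term": {"flatData.key": "eventId"}},
                {"match": {"flatData.value_string.keyword": "2T73ZT1R463DJNWE36IA8FEN"}}
            ]}}}},
            {"nested": {"path": "flatData", "query": {"bool": {"must": [
                {"term": {"flatData.key": "name"}},
                {"match": {"flatData.value_string": "bob"}}
            ]}}}}
        ]
    }
}

response = es.search(index="events", query=query)   # index name is just an example
for hit in response["hits"]["hits"]:
    print(hit["_source"]["data"])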
Elasticsearch automatically detects the field content in order to index it correctly, even if the mapping hasn't been defined previously. So, yes: Elasticsearch suits these cases well.
However, you may want to fine-tune this behavior, or maybe the default mapping applied by Elasticsearch doesn't correspond to what you need. In that case, take a look at the default mapping or, for even further control, the dynamic templates feature.
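As a sketch of that last option, a dynamic template along these lines (the template name and the choice of keyword are just examples) would control how new string fields get mapped as moderators add form fields:

{
    "mappings": {
        "dynamic_templates": [
            {
                "strings_as_keywords": {
                    "match_mapping_type": "string",
                    "mapping": { "type": "keyword" }
                }
            }
        ]
    }
}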
If you let your end users decide the keys you store things in, you'll have an ever-growing mapping and cluster state, which is problematic.
This case and a suggested solution is covered in this article on common problems with Elasticsearch.
Essentially, you want to have everything that can possibly be user-defined as a value. Using nested documents, you can have a key-field and differently mapped value fields to achieve pretty much the same.
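For example, an index mapping for that key/value approach could look roughly like this (field names are taken from the flatData examples above; the exact mapping from the linked post may differ):

{
    "mappings": {
        "properties": {
            "data": { "type": "object", "enabled": false },
            "flatData": {
                "type": "nested",
                "properties": {
                    "key": { "type": "keyword" },
                    "type": { "type": "keyword" },
                    "key_type": { "type": "keyword" },
                    "value_string": {
                        "type": "text",
                        "fields": { "keyword": { "type": "keyword" } }
                    },
                    "value_long": { "type": "long" },
                    "value_boolean": { "type": "boolean" }
                }
            }
        }
    }
}

With this mapping, the nested queries shown earlier (a term on flatData.key plus a match on the corresponding value field) apply per key/value pair rather than across the whole document.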
