I am implementing an API that returns how many users are using a particular app.
For example, say I want to return data that says
10 people are using only App1, 8 are using only App2, 8 are using only App3, 15 are using both App1 and App2, and 20 are using all of App1, App2, and App3.
How do we design the response structure in JSON?
I thought of returning it in a comma-separated format:
{
"App1": 10,
"App2": 8,
"App3": 8,
"App1,App2": 15,
"App1,App2,App3": 20
}
Is this format right and semantically correct?
I thought of an array as well:
[
{"key": ["App1"], "count": 10},
{"key": ["App2"], "count": 8},
{"key": ["App3"], "count": 8},
{"key": ["App1", "App2"], "count": 15},
{"key": ["App1", "App2", "App3"], "count": 20}
]
but I was doubtful about whether it is semantically correct.
Is there any better way? What is the best way to represent this data?
Think of this problem as three overlapping populations in a Venn diagram. The best way to represent a Venn diagram is to represent each section as an ordered list of its constituents together with the number associated with it. This uniquely identifies each section of the whole.
Both of your solutions achieve this goal. But using comma-separated values as JSON keys has a couple of flaws:
The TypeScript definition will be difficult. In the first case, your TypeScript definition would be Record<string, number>, which is not as strongly typed (read: less specific) as the second one, which is
type App = 'App1' | 'App2' | 'App3';
type AppUsage = Array<{ key: Array<App>, count: number }>;
The first format is brittle. What if an app name contains a comma? Your clients have to know the format. How would they differentiate between, say, "App1,App2" and "App1, App2" (notice the space after the comma)? It is generally advisable not to use format-dependent identifiers. Arrays are easier to work with and more consistent.
If you use GraphQL, plain string-keyed JSON, which is what the first approach amounts to, is generally considered bad practice.
So, bottom line, this should be the better way:
type App = 'App1' | 'App2' | 'App3';
type AppUsage = Array<{ key: Array<App>, count: number }>;
const appUsage: AppUsage = [
{"key": ["App1"], "count": 10},
{"key": ["App2"], "count": 8},
{"key": ["App3"], "count": 8},
{"key": ["App1", "App2"], "count": 15},
{"key": ["App1", "App2", "App3"], "count": 20}
]
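For illustration, a consumer of this shape can look up a section's count without any string parsing; the countFor helper below is my own sketch, not part of the proposed API:

```typescript
type AppUsage = Array<{ key: string[]; count: number }>;

const usage: AppUsage = [
  { key: ["App1"], count: 10 },
  { key: ["App2"], count: 8 },
  { key: ["App1", "App2"], count: 15 },
];

// Look up the count for an exact set of apps, independent of element order.
// Serializing the sorted arrays sidesteps any separator-character ambiguity.
function countFor(data: AppUsage, apps: string[]): number | undefined {
  const wanted = JSON.stringify([...apps].sort());
  const entry = data.find(e => JSON.stringify([...e.key].sort()) === wanted);
  return entry?.count;
}

console.log(countFor(usage, ["App2", "App1"])); // 15
```

Because the comparison is array-based, an app name containing a comma would not break the lookup, which is the point made above.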
I would go with the following structure:
{
"usersCount": [
{
"apps": ["App1"],
"userCount": 10
},
{
"apps": ["App2"],
"userCount": 8
},
{
"apps": ["App3"],
"userCount": 8
},
{
"apps": ["App1", "App2"],
"userCount": 15
},
{
"apps": ["App1", "App2", "App3"],
"userCount": 20
}
]
}
This would make it easier to parse and to handle multiple apps (compared with string splitting, for example). To make it a little more strongly typed, I would make App1, App2, and App3 values of an App enumeration.
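One way the enumeration idea could look (a sketch; the type names are illustrative):

```typescript
// A closed set of known apps; adding an app means extending this enum.
enum App {
  App1 = "App1",
  App2 = "App2",
  App3 = "App3",
}

interface AppsUserCount {
  apps: App[];
  userCount: number;
}

interface UsageResponse {
  usersCount: AppsUserCount[];
}

// A value with an unknown app name would now be a compile-time error.
const response: UsageResponse = {
  usersCount: [
    { apps: [App.App1], userCount: 10 },
    { apps: [App.App1, App.App2], userCount: 15 },
  ],
};

console.log(response.usersCount.length); // 2
```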
I think the solution you have tried is very limited. It is fine for 3 apps, but say you later want to add a few more apps; then you have to add those return types as well.
Instead, I would suggest the structure below. It has low coupling and works for any number of apps:
{
"IndividualAppUsersCount":[10,8,8],
"MultipleAppUsersCount":[
{
"AppsInUse":"App1+App2",
"Count":15
},
{
"AppsInUse":"App1+App2+App3",
"Count":20
}
]
}
The benefit of this approach is that you are not limited to predefined apps; it easily works for 0 to n apps. Just loop through the array and get the corresponding result.
You can improve on this result further as below:
{
"AppUsersCount":[
{
"AppsInUse":"App1",
"Count":10
},
{
"AppsInUse":"App2",
"Count":8
},
{
"AppsInUse":"App3",
"Count":8
},
{
"AppsInUse":"App1, App2",
"Count":15
},
{
"AppsInUse":"App1, App2, App3",
"Count":20
}
]
}
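The "loop through the array" step might look like this sketch, which splits on the comma-and-space separator used in the example above (the helper name is my own):

```typescript
interface AppUsersCount {
  AppsInUse: string;
  Count: number;
}

const payload: { AppUsersCount: AppUsersCount[] } = {
  AppUsersCount: [
    { AppsInUse: "App1", Count: 10 },
    { AppsInUse: "App1, App2", Count: 15 },
  ],
};

// Total users that use a given app in any combination,
// by splitting each AppsInUse string on ", ".
function totalFor(app: string, rows: AppUsersCount[]): number {
  return rows
    .filter(r => r.AppsInUse.split(", ").includes(app))
    .reduce((sum, r) => sum + r.Count, 0);
}

console.log(totalFor("App1", payload.AppUsersCount)); // 25
```

Note that this still depends on the separator format, which is the trade-off of string-encoded keys discussed earlier in the thread.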
I hope this answers your question. If you have any other questions, let me know and I will be glad to answer.
I have two requirements:
Get the item path of a document in BIM360 Document Management.
Get all custom attributes for that item.
For requirement 1, an API exists to fetch the path, and for the custom attributes another API exists, so the data can be retrieved.
Is there a way to satisfy both requirements in a single API call instead of using two?
With a large number of records, the item-path API takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom-attribute API processes data in batches of 50 and completes in only 5 minutes.
Please suggest.
Batch-Get Custom Attributes covers the additional attributes specific to Document Management, while the path in project is general information from Data Management.
The Data Management API provides some endpoints in the form of commands, which ask the backend to process data for a batch of items.
https://forge.autodesk.com/en/docs/data/v2/reference/http/ListItems/
This command retrieves metadata for up to 50 specified items at a time. It also supports the flag includePathInProject, although the usage is tricky and the API documentation does not mention it. The response includes the pathInProject of these items, which may save more time than iterating.
{
"jsonapi": {
"version": "1.0"
},
"data": {
"type": "commands",
"attributes": {
"extension": {
"type": "commands:autodesk.core:ListItems",
"version": "1.0.0",
"data":{
"includePathInProject":true
}
}
},
"relationships": {
"resources": {
"data": [
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:vkLfPabPTealtEYoXU6m7w"
},
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:bcg7gqZ6RfG4BoipBe3VEQ"
}
]
}
}
}
}
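Driving this command for a large item set could be sketched as below. The request body mirrors the example above, and the chunking reflects the 50-item per-call limit; the helper names are my own, and the actual POST to the project's commands endpoint is omitted:

```typescript
// Build a ListItems command body for up to 50 item urns.
function buildListItemsCommand(itemUrns: string[]) {
  return {
    jsonapi: { version: "1.0" },
    data: {
      type: "commands",
      attributes: {
        extension: {
          type: "commands:autodesk.core:ListItems",
          version: "1.0.0",
          data: { includePathInProject: true },
        },
      },
      relationships: {
        resources: {
          data: itemUrns.map(id => ({ type: "items", id })),
        },
      },
    },
  };
}

// Split a long urn list into batches of 50, the command's per-call limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Each batch body would then be POSTed to the project's commands endpoint.
```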
Get item path of the document in a BIM360 document management.
Is this question about getting the hierarchy of the item, e.g. rootfolder >> subfolder >> item? With this endpoint, by specifying the query parameter includePathInProject=true, it returns the relative path of the item (pathInProject) in the folder structure.
https://forge.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-items-item_id-GET/
"data": {
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:xxx",
"attributes": {
"displayName": "my-issue-att.png",
"createTime": "2021-03-12T04:51:01.0000000Z",
"createUserId": "xxx",
"createUserName": "Xiaodong Liang",
"lastModifiedTime": "2021-03-12T04:51:02.0000000Z",
"lastModifiedUserId": "200902260532621",
"lastModifiedUserName": "Xiaodong Liang",
"hidden": false,
"reserved": false,
"extension": {
"type": "items:autodesk.bim360:File",
"version": "1.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/items:autodesk.bim360:File-1.0"
},
"data": {
"sourceFileName": "my-issue-att.png"
}
},
"pathInProject": "/Project Files"
  }
}
Or you may iterate via the parent data:
"parent": {
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.sdfedf8wef"
},
"links": {
"related": {
"href": "https://developer.api.autodesk.com/data/v1/projects/b.project.id.xyz/items/urn:adsk.wipprod:dm.lineage:hC6k4hndRWaeIVhIjvHu8w/parent"
}
}
},
Get all custom attributes for that item. For Req. 1, an API exists to fetch it, and for getting custom attributes another API exists and data can be retrieved. Is there a way to get both requirements in a single API call instead of using two? With a large number of records, the API to retrieve the item path takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom-attribute API processes data in batches of 50, which completes in only 5 minutes. Please suggest.
Let me try to understand the question better. There are two things: Custom Attribute Definitions, and Custom Attribute Values (attached to the documents). Could you clarify which of these the 19,000+ records are?
If Custom Attributes Definitions, the API to fetch them is
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-custom-attribute-definitions-GET/
It supports setting a limit per call; the maximum limit of one call is 200, which means you can fetch 19,000+ records in about 95 calls, and each call should be quick (in my experience, under 10 seconds). That is roughly 15 minutes in total, instead of more than an hour.
Or, on your side, does each call with 200 records take much longer?
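The batching arithmetic above can be sketched as follows (the 200-record limit and the roughly-10-second call time are the figures from this answer, not guarantees):

```typescript
// Number of calls needed to page through `total` records at `limit` per call.
function callsNeeded(total: number, limit: number): number {
  return Math.ceil(total / limit);
}

const calls = callsNeeded(19000, 200); // 95
// At roughly 10 seconds per call, the whole fetch is about 16 minutes.
const estimatedMinutes = (calls * 10) / 60;
console.log(calls, estimatedMinutes.toFixed(1));
```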
If Custom Attributes Values, the API to fetch them is
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-versionsbatch-get-POST/
As you know, that is 50 records per call. And it seems it is already pretty quick on your side, taking only 5 minutes to fetch the values for 19,000+ records?
I have stored a CSV file in a blob container and am trying to read the content from a Logic App in Azure, but I am facing an issue getting the content and iterating over it. Please help with the flow.
You could combine the Logic App with an Azure Function to implement it:
Use the Blob connector to get the file.
Pass the CSV content to the function and return JSON.
Iterate over the row values.
For the Azure Function, you could refer to this blog; its example has a complete Logic App flow to convert CSV into JSON.
Hope this helps; if you still have other questions, please let me know.
Update:
I tested the function from that blog and copied the result to get the complete output:
{
"fileName": "MyTestCSVFile.csv",
"rows": [
{
"ID": " 1",
"Name": "Aaron",
"Score": "99"
},
{
"ID": " 2",
"Name": "Dave",
"Score": "55"
},
{
"ID": " 3",
"Name": "Susy",
"Score": "77 "
}
]
}
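The conversion the Azure Function performs might look like this minimal sketch; it assumes a header row and no quoted fields, unlike a production CSV parser:

```typescript
// Convert simple CSV text (header row, no quoted or escaped fields)
// into an array of row objects keyed by the header names.
function csvToJson(csv: string): Array<Record<string, string>> {
  const lines = csv.trim().split(/\r?\n/);
  const headers = lines[0].split(",");
  return lines.slice(1).map(line => {
    const values = line.split(",");
    const row: Record<string, string> = {};
    headers.forEach((h, i) => { row[h] = values[i] ?? ""; });
    return row;
  });
}

const rows = csvToJson("ID,Name,Score\n1,Aaron,99\n2,Dave,55");
console.log(JSON.stringify(rows));
```

The output above shows one object per data row, matching the shape of the `rows` array in the function's result.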
Intermittently and unpredictably, the Firebase Realtime Database update() function seems to behave like the set() function. Anecdotally, it appears to happen on about 1% of update operations. We've performed extensive logging: a particular update is pushed out to a group of users inside a loop, the logs confirm that correct information is being sent to all of them, and update() is called on each record. Yet sometimes one of the users winds up with a record that contains only the fields we updated, with all other fields in the record deleted, while all other users receive the update properly. Running the exact same update() operation again results in everything updating as expected.
Is this a known issue? Are there any workarounds? We are running firebase-admin 6.0.0 on Node 8.14.0.
We attempted multiple repeated tests of the update() function. There is no surefire way to reproduce the issue, but it happens randomly in production.
const contactsRef = admin.database().ref().child('contacts');
...
//targetUID, contactID, contactObj get passed in via PubSub
...
contactsRef.child(targetUID).child(contactID).update(contactObj);
Expected: update() should only update the record fields being passed to it.
Actual: update() seems to work like set() randomly, about 1% of the time. Any fields that are not included in the object being passed to update() are deleted from the target record in the Realtime Database.
It seems very unlikely that the database server behaves differently for 1% of your users. Much more likely is that there is a slight difference in the calls that that 1% of your users make. It's hard to be certain what that difference is from the code you shared, so below is an educated guess in hopes of unblocking you quickly.
You say you do:
contactsRef.child(targetUID).child(contactID).update(contactObj);
Expected: update() should only update the record fields being passed to it.
It's a bit subtle, and unfortunately you don't show how you construct contactObj, so I'll give an example. Say that you start with this JSON:
"uid1": {
"name": "unknown",
"id": -1,
"full_name": "unknown",
"metadata": {
"last_seen": "20 minutes ago",
"reputation": 56
}
}
And you run this on that location:
ref.update({
  "name": "miles_b",
  "id": 2687721
});
In this case only the name and id properties under ref are updated. The other properties are unmodified, so you end up with:
"uid1": {
"name": "miles_b",
"id": 2687721,
"full_name": "unknown",
"metadata": {
"last_seen": "20 minutes ago",
"reputation": 56
}
}
But now say that you also want to update the metadata/reputation. You might think that this works:
ref.update({
  "name": "miles_b",
  "id": 2687721,
  "metadata": {
    "reputation": 61
  }
});
But here you are telling the database to replace metadata with the object you provided. So the result is:
"uid1": {
"name": "miles_b",
"id": 2687721,
"full_name": "unknown",
"metadata": {
"reputation": 61
}
}
And this means that last_seen is now gone from the database.
To update a nested property, include its full path in the key. So:
ref.update({
  "name": "miles_b",
  "id": 2687721,
  "metadata/reputation": 61
});
And with that, you'll keep metadata/last_seen, while updating metadata/reputation:
"uid1": {
"name": "miles_b",
"id": 2687721,
"full_name": "unknown",
"metadata": {
"last_seen": "20 minutes ago",
"reputation": 61
}
}
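A small helper (my own sketch, not part of the Firebase SDK) can build those path-style keys from a nested object automatically, so only the listed leaves are written:

```typescript
// Flatten a nested object into { "a/b/c": value } path keys suitable
// for update(), so sibling properties are never replaced wholesale.
function toUpdatePaths(
  obj: Record<string, unknown>,
  prefix = ""
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}/${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(out, toUpdatePaths(value as Record<string, unknown>, path));
    } else {
      out[path] = value;
    }
  }
  return out;
}

const update = toUpdatePaths({ name: "miles_b", metadata: { reputation: 61 } });
// update is { name: "miles_b", "metadata/reputation": 61 }
console.log(update);
// ref.update(update) would then leave metadata/last_seen untouched.
```

Note that deliberately writing a whole object (e.g. replacing metadata entirely) would need to bypass such a helper, so treat it as a convenience, not a universal rule.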
I have created a role (commonrole) and applied it to multiple nodes.
Now I want to override one of the attributes on one particular node to change it to a different value.
So I created one more role (noderole) and applied it after "commonrole" to this node, but the node does not pick up the new value (-Xmx2048m, as shown below).
Sample common role-
{
"name": "commonrole",
"description": "Manages all nodes",
"run_list": [
"recipe[abc]"
],
"default_attributes": {
"catalina_opts": [
"-Dfile.encoding=UTF-8"
]
  }
}
Sample noderole-
{
"name": "noderole",
"description": "Manages particular node",
"run_list": [
"role[commonrole]"
],
"default_attributes": {
"catalina_opts": [
"-Dfile.encoding=UTF-8",
"-Xmx2048m"
]
}
}
Am I missing something?
Arrays in node attributes are kind of weird. I've got a full write-up on my site, but basically this should result in the merged value being:
[
"-Dfile.encoding=UTF-8",
"-Dfile.encoding=UTF-8",
"-Xmx2048m"
]
or something similar. Also remember you won't see the attribute change immediately in the knife node show output, only after a successful converge.
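To make that concrete, here is a sketch that mimics the append-style deep merge described above (an illustration of the described behavior, not Chef's actual implementation):

```typescript
// Mimic the described default-attribute merge: arrays at the same
// precedence level are appended rather than replaced.
function mergeAttrArrays(base: string[], override: string[]): string[] {
  return [...base, ...override];
}

const commonrole = ["-Dfile.encoding=UTF-8"];
const noderole = ["-Dfile.encoding=UTF-8", "-Xmx2048m"];

// The shared element appears twice because nothing deduplicates it.
console.log(mergeAttrArrays(commonrole, noderole));
```

This is why a node role that repeats the common values produces the duplicated list shown above, instead of cleanly overriding the attribute.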