FHIR Patient-level bulk export not working

I'm trying to perform a Patient level bulk export:
"Endpoint - All Patients Export a detailed set of FHIR resources of diverse resource types pertaining to all patients. [fhir base]/Patient/$export
"
I have a FHIR server running on a Smile CDR instance, loaded with some basic data generated with the Synthea tool. I've generated 11 patients plus related data.
Resources loaded in the database are:
"AllergyIntolerance", "Bundle", "CarePlan", "CareTeam", "Claim", "Condition", "Coverage", "DiagnosticReport","DocumentReference", "Encounter", "ExplanationOfBenefit", "ImagingStudy", "Immunization", "Location", "Medication","MedicationAdministration", "MedicationRequest", "Observation", "Organization","Patient", "Practitioner","PractitionerRole","Procedure", "Provenance", "ServiceRequest"
When I request an export of specific resource types (Patient, Practitioner, Organization), the bulk export works:
http://localhost:8000/$export?_type=Organization
{
"resourceType": "Organization",
"id": "1633",
"meta": {
"versionId": "1",
"lastUpdated": "2021-11-12T20:42:45.627+00:00",
"source": "#HJck1YaOzVjNjBTA",
"profile": [
"http://hl7.org/fhir/us/core/StructureDefinition/us-core-organization"
]
},....
}
The Patient-level export, however, generates a status job with no results at all. First I launch the bulk job with:
http://localhost:8000/Patient/$export
and then I ask for the job status with the provided URL:
http://localhost:8000/$export-poll-status?_jobId=4aaadbc9-fbe8-44e1-b631-9335fc1c2712
And the response is always the same, with no results at all (I can see in the logs that the job is completed).
{
"transactionTime": "2021-12-01T19:37:46.341+00:00",
"request": "/Patient/$export?_outputFormat=application%2Ffhir%2Bndjson"
}
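For comparison, my understanding of the FHIR Bulk Data Access spec is that a completed export should also carry an output array pointing at the generated NDJSON files, something like the following (the Binary URLs are purely illustrative):
{
"transactionTime": "2021-12-01T19:37:46.341+00:00",
"request": "/Patient/$export?_outputFormat=application%2Ffhir%2Bndjson",
"requiresAccessToken": true,
"output": [
{ "type": "Patient", "url": "http://localhost:8000/Binary/123" },
{ "type": "Observation", "url": "http://localhost:8000/Binary/124" }
],
"error": []
}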
From reading the documentation I think the issue is related to the bulk export permissions. In FHIR_OP_INITIATE_BULK_DATA_EXPORT I've configured "Patient" as the permission argument, but no matter what value I put there the behavior is the same (the resource-type export works, but the Patient-level export does not).
I would like to understand what I should configure on the FHIR_OP_INITIATE_BULK_DATA_EXPORT permission and on the other ones (FHIR_OP_INITIATE_BULK_DATA_EXPORT_GROUP, FHIR_OP_INITIATE_BULK_DATA_EXPORT_PATIENT, FHIR_OP_INITIATE_BULK_DATA_EXPORT_SYSTEM) to allow a user to download everything, like a super user.
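For completeness, a kick-off call following the spec's asynchronous request pattern would look roughly like this (the headers come from the Bulk Data spec; the 202 response carries a Content-Location header with the poll-status URL):
# Patient-level export kick-off (async pattern from the Bulk Data spec)
curl -i \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async" \
  "http://localhost:8000/Patient/\$export"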

Related

Custom field for test plan in Jira Xray

I'm trying to import results to Jira Xray using the REST API cucumber/multipart endpoint with the following curl command:
curl -H "Authorization: Bearer $token" -F info=#Exec.json -F result=#file.json https://server/rest/raven/2.0/import/execution/cucumber/multipart
This command creates a new test execution, and we cannot report results to an existing one, as mentioned in this bug: https://jira.getxray.app/browse/XRAYCLOUD-2375
So I tried to add the custom field for a test plan that was already created.
The problem is that I cannot find the exact custom field number; I always get this error:
{"error":"Error assembling issue data: Field \u0027customfield_11218\u0027 cannot be set. It is not on the appropriate screen, or unknown."
Here is my Exec.json:
{
"fields": {
"project": {
"key": "project"
},
"summary": "Test Execution for cucumber Execution",
"issuetype": {
"name": "Test Execution"
},
"customfield_11218" : "ODI-1103"
}
}
I got the custom field from the XML file exported from a test related to the test plan:
<customfield id="customfield_11218" key="com.xpandit.plugins.xray:test-plans-associated-with-test-custom-field">
<customfieldname>Test Plans associated with a Test</customfieldname>
<customfieldvalues>
<customfieldvalue>[ODI-1103]</customfieldvalue>
</customfieldvalues>
</customfield>
In the case of Cucumber JSON reports, it's currently kind of an exception. If we want to link the results to a Test Plan, then we need to use the multipart endpoint that you mentioned, which in turn always creates a new Test Execution.
The syntax for the JSON content used to customize the Test Execution fields should be something like:
{
"fields": {
"project": {
"key": "CALC"
},
"summary": "Test Execution for Cucumber execution",
"description": "This contains test automation results",
"fixVersions": [ {"name": "v1.0"}],
"customfield_11805": ["chrome"],
"customfield_11807": ["CALC-8895"]
}
}
(you can see a code example here; that repository contains examples in other languages)
In the previous example, the Test Plan custom field is "customfield_11807". Note that the value is not a string but an array of strings with the issue keys of the linked Test Plans (usually just one).
From what you shared, it seems that you are referring to another custom field which has a similar name.
You should look for a custom field named "Test Plan" that has the description "Associate Test Plans with this Test Execution" (unless someone changed it).
To find the custom field id, you can ask your Jira admin to go to Custom Fields and then edit the field named "Test Plan"... Or you can also use Jira's REST API for that :)
https://yourjiraserver/rest/api/2/customFields?search=test%20plan
This will return JSON content listing some custom fields, and you'll be able to identify the one you're looking for.
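If you prefer the command line, here is a rough sketch (assuming a Jira Server/Data Center instance; the generic /rest/api/2/field endpoint lists every field, and jq is only used to filter the output):
# List all fields and keep only the ones whose name contains "Test Plan"
curl -s -H "Authorization: Bearer $token" \
  "https://yourjiraserver/rest/api/2/field" \
  | jq '.[] | select(.name | test("Test Plan"; "i")) | {id, name}'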

How do I save lineage info in Apache Atlas when using Apache Cassandra and Elasticsearch

I am planning to deploy Apache Atlas using Apache Cassandra as the storage backend and Elasticsearch as the index backend. I am wondering how I can save lineage info with this setup. Atlas provides a GET API for retrieving lineage info but seems to have no way to save it.
In Atlas, lineage is created when entities are linked through processes using inputs and outputs.
Example:
If you want to see lineage between two hive_table entities, it would look like:
T1(hive_table)--->P1(hive_process)--->T2(hive_table)
So, basically, the entities need to be linked through a process type.
In Atlas, processes are entities themselves and can be created using the API POST /v2/entity, with inputs and outputs defined in them, as for the hive_process above:
POST: /api/atlas/v2/entity
{
"entity": {
"typeName": "hive_process",
"attributes": {
"outputs": [
{
"guid": "2",
"typeName": "hive_table",
"uniqueAttributes": {
"qualifiedName": "t2#primary"
}
}
],
"qualifiedName": "p1#primary",
"inputs": [
{
"guid": "1",
"typeName": "hive_table",
"uniqueAttributes": {
"qualifiedName": "t1#primary"
}
}
],
"name": "P1-Process"
}
}
}
An important thing to note before creating the process is that the referenced entities (inputs, outputs) must already exist, else process creation will fail.
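Once the process entity is created, the lineage can be read back through the lineage REST endpoint, for example (the host, credentials and GUID are placeholders; depth and direction are optional parameters):
# Read the lineage graph for one of the table entities
curl -s -u admin:admin \
  "http://atlas-host:21000/api/atlas/v2/lineage/<table-guid>?depth=3&direction=BOTH"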
If your requirement isn't covered by the pre-existing types, you can of course go ahead and define your own types for Atlas Entity and Process.
More about the Atlas type system can be found on the Apache site.

Virtual Assistant throwing 'Sorry, it looks like something went wrong'

I have created a virtual assistant using the Microsoft Virtual Assistant template. When testing in the emulator, whatever message I send I get a 'Sorry, it looks like something went wrong' reply.
I am new to the entire Bot Framework ecosystem and it is becoming very difficult to proceed.
In the log, what I can see is:
[11:26:32] Emulator listening on http://localhost:65233
[11:26:32] ngrok not configured (only needed when connecting to remotely hosted bots)
[11:26:32] Connecting to bots hosted remotely
[11:26:32] Edit ngrok settings
[11:26:32] POST 201 directline.startConversation
[11:26:39] <- message application/vnd.microsoft.card.adaptive
[11:26:39] POST 200 conversations.replyToActivity
[11:26:54] -> message hi
[11:26:55] <- trace The given key 'en' was not present in the dictiona...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- message Sorry, it looks like something went wrong.
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] POST 200 directline.postActivity
[11:27:48] -> message hello
[11:27:48] <- trace The given key 'en' was not present in the dictiona...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- message Sorry, it looks like something went wrong.
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] POST 200 directline.postActivity
From what I understood, the key 'en' is not present in a dictionary, and I am not sure what that means. I checked in the Responses folder and could not see an 'en' file; I'm not sure if that is the issue.
My emulator screenshot is attached:
Any help would be useful.
I believe the issue you are experiencing is a problem on the following lines inside MainDialog.cs:
var locale = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;
var cognitiveModels = _services.CognitiveModelSets[locale];
This tries to use the locale (retrieved from the current thread as per this documentation) as the key to access the cognitive models in your cognitivemodels.json file.
Inside your cognitivemodels.json file it should look like:
{
"cognitiveModels": {
// This line below here is what could be missing/incorrect in yours
"en": {
"dispatchModel": {
"type": "dispatch",
"region": "westus",
...
},
"knowledgebases": [
{
"id": "chitchat",
"name": "chitchat",
...
},
{
"id": "faq",
"name": "faq",
...
},
],
"languageModels": [
{
"id": "general",
"name": "msag-test-va-boten_general",
"region": "westus",
...
}
]
}
},
"defaultLocale": "en-us"
}
The en key inside the cognitiveModels object is what the code is trying to use to retrieve your cognitive models; thus, if the locale pulled out in the code doesn't match the locale keys in your cognitivemodels.json, you will get the dictionary key error.
EDIT
The issue the OP had was a failed deploy. The steps we took were:
Checked the deploy_log.txt inside the Deployment folder for errors.
In this case it was empty - not a good sign.
Checked the deploy_cognitive_models_log.txt inside the Deployment folder for errors.
There was an error present Error: Cannot find module 'C:\Users\dip_chatterjee\AppData\Roaming\npm\node_modules\botdispatch\bin\dispatch.js.
To fix this error we reinstalled all of the required npm packages as per step 5 of this guide, then ran the deploy script as per this guide.

RESTful API design - naming an "activity" resource

When designing the endpoints for an activity resource that provides information on the activity of other resources, such as users and organisations, we are struggling with naming conventions.
What would be more semantic:
/organisations/activity
/organisations/activity/${activityId}
/users/activity
/users/activity/${activityId}
OR
/activity/users/${activityId}
/activity/users
/activity/organisations/${activityId}
/activity/organisations
There's no generic answer for this, especially since the mechanisms doing the lookup/retrieval at the other end and the associated back-ends vary so drastically, not to mention the use case, purpose, and intended application.
That said, assuming for all intents and purposes that the "schema" (or ... endpoint convention, from the point of view of the end user) was just going to be flat, I have seen many more of the latter activity convention, as activity is the actual resource, which is what many applications and APIs are developed around.
I've come to expect the following style of representation from APIs today (how they achieve the referencing and mappings is a different story, but from the point of view of the API reference):
{
"Activity": [
{
"date": "1970-01-01 08:00:00",
"some_other_resource_reference_uuid": "f1c4a41e-1639-4e35-ba98-e7b169d1c92d",
"user": "b3ababc4-461b-404a-a1a2-83b4ca8c097f",
"uuid": "0ccf1b41-aecf-45f9-a963-178128096c97"
}
],
"Users": [
{
"email": "johnanderson#mycompany.net",
"first": "John",
"last": "Anderson",
"user_preference_1": "somevalue",
"user_property_1": "somevalue",
"uuid": "b3ababc4-461b-404a-a1a2-83b4ca8c097f"
}
]
}
The StackExchange API allows retrieving objects through multiple methods also:
For example, the User type looks like this:
{
"view_count": 1000,
"user_type": "registered",
"user_id": 9999,
"link": "http://example.stackexchange.com/users/1/example-user",
"profile_image": "https://www.gravatar.com/avatar/a007be5a61f6aa8f3e85ae2fc18dd66e?d=identicon&r=PG",
"display_name": "Example User"
}
And on the Question type, the same user is shown underneath the owner object:
{
"owner": {
"user_id": 9999,
"user_type": "registered",
"profile_image": "https://www.gravatar.com/avatar/a007be5a61f6aa8f3e85ae2fc18dd66e?d=identicon&r=PG",
"display_name": "Example User",
"link": "https://example.stackexchange.com/users/1/example-user"
},
"is_answered": false,
"view_count": 31415,
"favorite_count": 1,
"down_vote_count": 2,
"up_vote_count": 3,
"answer_count": 0,
"score": 1,
"last_activity_date": 1494871135,
"creation_date": 1494827935,
"last_edit_date": 1494896335,
"question_id": 1234,
"link": "https://example.stackexchange.com/questions/1234/an-example-post-title",
"title": "An example post title",
"body": "An example post body"
}
On the Posts type reference (using this as a separate example because there are only a handful of methods that return this type), you'll see an example down the bottom:
Methods That Return This Type
  posts
  posts/{ids}
  users/{ids}/posts 2.2
  me/posts 2.2
So whilst you can access resources (or "types", as they are on StackExchange) in a number of ways, including filters and complex queries, there still exists the ability to reach the desired resource through a number of more direct, transparent URI conventions.
Different applications will clearly have different requirements. For example, the Gmail API is user-based all the way - this makes sense from a user's point of view, given that in the context of the authenticated credential you're separating one user's objects from another's.
This doesn't mean Google uses the same convention for all of their APIs; their Activities API resource is all about the activity.
Even looking at the Twitter API, there is a Direct Messages endpoint resource that has sender and receiver objects within.
I've not seen many APIs at all that are limited to accessing resources purely via a user endpoint, unless the situation obviously calls for it, e.g. the Gmail example above.
Regardless of how flexible a REST API can be, the minimum I have come to expect is that some kind of activity, location, physical object, or other entity is usually its own resource, and the user association is plugged in and referenced with varying degrees of flexibility (at a minimum, the example given at the top of this post).
It should be pointed out that in a true REST API the URI holds no meaning. It's the link relationships from your organisations and users resources that matter.
Clients should just discover those URLs, and should also adapt to the new situation if you decide that you want a different URL structure after all.
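As a rough sketch of what those discoverable links could look like (HAL-style; the field names are illustrative only, not a prescribed format):
{
"uuid": "b3ababc4-461b-404a-a1a2-83b4ca8c097f",
"first": "John",
"last": "Anderson",
"_links": {
"self": { "href": "/users/b3ababc4-461b-404a-a1a2-83b4ca8c097f" },
"activity": { "href": "/users/b3ababc4-461b-404a-a1a2-83b4ca8c097f/activity" }
}
}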
That being said, it's nice to have a logical structure for this type of thing. However, either is fine. You're asking for an opinion; there is not really a standard or best practice. That said, I would choose option #1.

Azure DSC error on initial sync generating MOF

I have a custom DSC module which is class based. During the initial sync process the target machine tries to generate a MOF in C:\Windows\System32\dsc, which results in an error - this causes the initial sync to report as failed, even though all the individual configuration resource tasks show as succeeded. The ones that are based on the resource whose MOF was not generated report as succeeded, but in fact have not executed at all.
This is the error:
{
"JobId": "4deeaf52-aa56-11e6-a940-000d3ad04eaa",
"OperationType": "Initial",
"ReportFormatVersion": "2.0",
"ConfigurationVersion": "2.0.0",
"StartTime": "2016-11-14T21:37:14.2770000+11:00",
"Errors": [{
"ErrorSource": "DSCPowershellResource",
"Locale": "en-US",
"Errors": {
"Exception": {
"Message": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"Data": {
},
"InnerException": null,
"TargetSite": null,
"StackTrace": null,
"HelpLink": null,
"Source": null,
"HResult": -2146233079
},
"TargetObject": null,
"CategoryInfo": {
"Category": 6,
"Activity": "",
"Reason": "InvalidOperationException",
"TargetName": "",
"TargetType": ""
},
"FullyQualifiedErrorId": "ProviderSchemaNotFound",
"ErrorDetails": null,
"InvocationInfo": null,
"ScriptStackTrace": null,
"PipelineIterationInfo": []
},
"ErrorCode": "6",
"ErrorMessage": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"ResourceId": "[tAzureStorageFileSync]CDrive"
}],
"StatusData": [],
"AdditionalData": [{
"Key": "OSVersion",
"Value": {
"VersionString": "MicrosoftWindowsNT10.0.14393.0",
"ServicePack": "",
"Platform": "Win32NT"
}
},
{
"Key": "PSVersion",
"Value": {
"CLRVersion": "4.0.30319.42000",
"PSVersion": "5.1.14393.206",
"BuildVersion": "10.0.14393.206"
}
}]
}
I have tried manually generating the MOF and including it in the module, but that didn't help (or perhaps I did it wrong). Even though this is a class-based resource, I added the MOF, named after the class, as a \DSCResources\<className>\<className>.schema.mof file. I note that the one generated in the C:\Windows\System32\dsc folder includes the version number, which mine does not. Perhaps that's the problem.
After the failed initial sync, the subsequent consistency check does pass, and the MOF is created at the location mentioned in the error message.
The class itself contains a function that calls Import-Module Azure.Storage, which is installed on the machine by a different DSC resource; it has been installed by the time of the consistency check, but (obviously) not at the point the initial sync starts. The resource that installs the module is marked as a dependency of the class-based resource in the configuration, but I think MOF generation must happen at the point the modules are deployed, which is logically before the initial sync has run.
At least that's what I think is happening.
I would be grateful if anyone could advise on what can be done in this instance, and whether my assumptions (above) are correct. I can't seem to get any additional errors or telemetry from the MOF compilation process itself to see why it is failing.
Posting an answer as nobody really had anything to contribute here and I appear to have solved it on my own. I believe the issue is a matter of timing. The DSC dependent modules are delivered from the pull server and compiled before any of them are executed. The dependency of my class module on Azure.Storage meant that the .psm1 file couldn't be compiled (since that module didn't exist on the machine yet - it would be delivered via a DSC resource at a later time).
Perhaps there is some mechanism that accounts for these dependencies in PS-based modules, or there is some leniency applied that isn't the case for class-based resources. That's still not clear.
After some experimentation I have begun creating and shipping the MOF files alongside the .psm1 and .psd1 files, rather than in the DSCResources... child folder as outlined in my question, and this appears to have resolved the issue.
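For reference, the layout that ended up working looks roughly like this (the module and class names are taken from the error message above, so treat them as placeholders):
tBlobSync\
    tBlobSync.psd1
    tBlobSync.psm1
    tAzureStorageFileSync.schema.mof    <- shipped next to the psm1/psd1, not under DSCResources\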
Hopefully this helps someone.
