Azure API for FHIR: Create New Resource with a Generated Id

I am using the Azure API for FHIR service, and I would like to generate a resource with an Id I create on my side. Looking at the docs, it looks like you are supposed to be able to do a PUT request to /, but it doesn't seem to be working. If I do a POST to / and specify my Id, it gets ignored.
In the future, I want it to be able to generate the Ids, but for the "Importing Legacy Data" phase, I want to be able to specify my own Ids to make the linking side of things easier.
Any ideas?
Update
The problem I am trying to avoid is having to push all the data to the FHIR Endpoint and then go over everything again to create the links.

If the server allows an upsert, you would do the PUT to '/[ResourceType]/[id]', not to just '/'.
For example, if you have a Patient with technical id '123', you could do:
PUT [base]/Patient/123
Make sure the Patient resource has its 'id' field set to '123' inside the resource as well. It has to match the id you put in the URL.
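For illustration, a minimal upsert call might look like this (a sketch in TypeScript using the Node 18+ global fetch; the base URL and token are placeholders, and the exact auth setup depends on your server's configuration):

const base = "https://my-fhir-server.example.com";   // placeholder endpoint

const patient = {
  resourceType: "Patient",
  id: "123",                                // must match the id in the URL
  name: [{ family: "Doe", given: ["Jane"] }],
};

const res = await fetch(`${base}/Patient/123`, {
  method: "PUT",
  headers: {
    "Content-Type": "application/fhir+json",
    Authorization: "Bearer <token>",        // whatever token your server expects
  },
  body: JSON.stringify(patient),
});

// Typically 201 Created when the resource was created, 200 OK when an existing one was updated.
console.log(res.status);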
Another option would be to construct a transaction Bundle, where you can make entries for each resource you want to upsert. You would use the PUT verb and '[ResourceType]/[id]' again for the entry.request details. After constructing the Bundle, you can send it with a POST to the base, so the server knows to process this as a transaction.
Transaction Bundles are also very useful if you have a bunch of resources that are related and reference each other, since the server is required to update all references for new resources inside the Bundle. See http://hl7.org/fhir/http.html#trules for more information about that. Note that not all servers support transactions, just like not all servers allow upserts as Lloyd mentions in his answer.
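As a rough sketch (same TypeScript/fetch assumptions as above; the resources themselves are made up), a transaction Bundle with upsert entries could be built and sent like this:

const bundle = {
  resourceType: "Bundle",
  type: "transaction",
  entry: [
    {
      resource: { resourceType: "Patient", id: "123", name: [{ family: "Doe" }] },
      request: { method: "PUT", url: "Patient/123" },
    },
    {
      resource: {
        resourceType: "Observation",
        id: "obs-1",
        status: "final",
        code: { text: "Body weight" },
        subject: { reference: "Patient/123" },   // reference resolved within the Bundle
      },
      request: { method: "PUT", url: "Observation/obs-1" },
    },
  ],
};

// POST the whole Bundle to the base so the server processes it as a single transaction.
await fetch(base, {
  method: "POST",
  headers: { "Content-Type": "application/fhir+json" },
  body: JSON.stringify(bundle),
});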

Servers are not required to allow external systems to assign ids. Many servers won't support 'upsert' because it risks id collisions. Those that do will typically only enable it in certain circumstances: e.g. all ids are required to be UUIDs, so collisions aren't possible; or there's only one source system synchronizing with the target, so it 'owns' the ids and there can't be collisions.
Some servers may have a configuration ability to turn on 'upsert' capability if it's been determined to be safe to do.

Related

REST URIs and caching for GET requests when the list to be returned depends on the rights of the user

It is a multi-tenant serverless system.
The system has groups with permissions.
Users derive permissions based on the groups they are in.
If it makes a difference, we are using Cognito for authentication and it is a stateless application.
For example:
GET endpoint for sites (so sites that the logged-in user has access to based on the groups they are in)
GET endpoint for devices (so devices that the logged-in user has access to based on the groups they are in)
In REST APIs, "the idea is that the data returned by an endpoint should depend solely on the parameters passed, meaning two different users should receive the same result for the identical request."
What should the REST URI look like to ensure the above-stated idea? Since the deciding factor for the list here is "groups" and thus effective permissions, I was thinking we could pass the groups a user is in via the URI, in sorted order, to leverage caching on GET endpoints as well. Is there a better way to do it?
In REST APIs, "the idea is that the data returned by an endpoint should depend solely on the parameters passed, meaning two different users should receive the same result for the identical request."
No, this is not strictly true. It can be a desirable property, but it is absolutely not required. In fact, if you build a proper hypermedia REST API, you would likely want to hide links/actions that the current user is not allowed to use.
Furthermore, a shared cache will not store a response and serve it to different users if an Authorization header is present on the request.
Anyway, there could be other reasons to want this: maybe it's a simpler design for your case, and there is a pretty reasonable solution.
What I'm inferring from your question is that you might have two endpoints:
/sites
/devices
They return different things depending on who's accessing. Instead of using those kinds of routes, you could just do:
/user/1234/sites
/user/1234/devices
Now every user has their own separate 'sites' and 'devices' collection. The additional benefit is that if you ever want to let a user find the list of sites or devices from another user, the API is ready to support that.
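A rough Express sketch of those per-user routes (TypeScript; the lookup helpers are hypothetical and stand in for whatever datastore and permission model you use):

import express from "express";

const app = express();

// Hypothetical helpers: resolve the groups for a user, then the resources those groups can see.
async function findSitesForUser(userId: string): Promise<unknown[]> { return []; }
async function findDevicesForUser(userId: string): Promise<unknown[]> { return []; }

// Each user gets their own sub-collection, so the URI itself says whose list this is.
app.get("/user/:userId/sites", async (req, res) => {
  res.json(await findSitesForUser(req.params.userId));
});

app.get("/user/:userId/devices", async (req, res) => {
  res.json(await findDevicesForUser(req.params.userId));
});

app.listen(3000);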
The idea is that the data returned by an endpoint should depend solely on the parameters passed
This relates to the statelessness constraint: every request must contain all the information needed to process it, and that includes the authentication parameters. The idea is to keep session data on the client side, because managing server-side sessions becomes a problem when you have several million users and multiple servers all around the world. Since the parameters include auth data, the response can depend on that data, so you can use the exact same endpoints for users with different permissions.
As for the responses, you might want to send back hyperlinks, which represent the available operations. The concept is the same here: if the user does not have permission for an operation, they won't get a hyperlink for it, and in theory they should never get a 403 status either, because you follow the hyperlinks you got from the service instead of hardcoding URI templates into your client. So you have to handle fewer errors and junk requests, and you can also change your URI templates without breaking the clients. This is called hypermedia as the engine of application state; it is part of the uniform interface constraint.
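For example (purely illustrative; the link format below follows no particular standard), a response could include only the operations the caller is actually allowed to perform:

const site = {
  id: "site-42",
  name: "Warehouse A",
  _links: {
    self: { href: "/sites/site-42" },
    devices: { href: "/sites/site-42/devices" },
    // only present when the caller's groups grant edit permission:
    edit: { href: "/sites/site-42", method: "PUT" },
  },
};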

CQRS with REST APIs

I am building a REST service over CQRS using EventSourcing to distribute changes to my domain across services. I have the REST service up and running, with a POST endpoint for creating the initial model and then a series of PATCH endpoints to change the model. Each endpoint has a command associated with it that the client sends as a Content-Type parameter. For example, Content-Type=application/json;domain-command=create-project. I have the following endpoints for creating a Project record on my task/project management service.
api.foo.com/project
Verb: POST
Command: create-project
What it does: Inserts a new model in the event store with some default values set
api.foo.com/project/{projectId}
Verb: PATCH
Command: rename-project
What it does: Inserts a project-renamed event into the event store with the new project name.
api.foo.com/project/{projectId}
Verb: PATCH
Command: reschedule-project
What it does: Inserts a project-rescheduled event into the event store with the new project due date.
api.foo.com/project/{projectId}
Verb: PATCH
Command: set-project-status
What it does: Inserts a project-status-changed event into the event store with the new project status (Active, Planning, Archived etc).
api.foo.com/project/{projectId}
Verb: DELETE
Command: delete-project
What it does: Inserts a project-deleted event into the event store
Traditionally in a REST service you would offer a PUT endpoint so the record could be replaced. I'm not sure how that works in the event-sourcing + CQRS pattern. Would I only ever use POST and PATCH verbs?
I was concerned I was too granular and that not every field needed a command associated with it. A PUT endpoint could be used to replace pieces. My concern, though, was that the event store would get out of sync, so I just stuck with PATCH endpoints. Is this level of granularity typical? For a model with 6 properties on it, I have 5 commands to adjust the properties of the model.
This is a common question that we get a lot of the time when helping developers get started with CQRS/ES. We need to acknowledge that applying REST in a pure way is a really bad match for DDD/CQRS, since the intentions of the commands are not explicitly expressed in the verbs GET/POST/PUT/PATCH/DELETE (even though you can use the content-type like you did). Also, the command and read sides of the system are different resources in a CQRS system, which does not match up with REST.
However, to use HTTP to provide an API for a CQRS/ES system is very practical.
We usually only use POST for sending commands, either to a /commands endpoint or to endpoints named after the command, e.g. /commands/create-project. It's all about how strict you want to be. In this case we embed the command type in the payload or as a content-type.
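For illustration, sending a command that way might look like this (TypeScript/fetch; the URL and payload shape are assumptions, not a fixed convention):

const command = { projectId: "proj-123", name: "Website redesign" };

// One endpoint per command:
await fetch("https://api.foo.com/commands/create-project", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(command),
});

// Or a single /commands endpoint with the command type embedded in the payload:
await fetch("https://api.foo.com/commands", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ type: "create-project", payload: command }),
});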
However, it is all a matter of what matches the tech stack better and what you choose here usually does not make or break the solution. The more important part is usually to create a good domain model and get the whole team onboard with this way of thinking.
Good luck!
One question that comes to mind is, is REST the right paradigm for CQRS at all?
One completely different way to structure this is to not have action-focused endpoints, but instead structure your REST API as a series of events that you add new events to (with POST).
Events should be immutable and append-only, so maybe a DELETE method doesn't make that much sense for mutations.
If you're going all in with CQRS (good luck, I've heard the war stories) I would be inclined to build an API that reflects that model well.
Would I only ever use POST and PATCH verbs?
Most of the time, you would use POST.
PUT and PATCH are defined with remote authoring semantics - they are methods used to copy new representations of a resource from the client to the server. For example, the client GETs a representation of /project/12345, makes local edits, and then uses PUT to request that the server accept the client's new representation of the resource as its own.
PATCH, semantically, is a similar exchange of messages - the difference being that instead of sending the full representation of the resource, the client sends a "patch document" that the server can apply to its copy to make the changes.
Now, technically, the PATCH specification does not put any restrictions on what a "patch document" is. In order for PATCH to be more useful than POST, however, we need patch document formats that are general purpose and widely recognized (for instance, application/merge-patch+json or application/json-patch+json).
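For reference, a general-purpose exchange with one of those formats might look like this (an RFC 6902 json-patch document; the resource and field names here are made up):

const patch = [
  { op: "replace", path: "/name", value: "New project name" },
  { op: "replace", path: "/dueDate", value: "2024-06-30" },
];

await fetch("https://api.foo.com/project/proj-123", {
  method: "PATCH",
  headers: { "Content-Type": "application/json-patch+json" },
  body: JSON.stringify(patch),
});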
And that's not really the use case you have here, where you are defining command messages that are specific to your domain.
Furthermore, remote authoring semantics don't align very well with "domain modeling" (which is part of the heritage of CQRS). When we're modeling a domain, we normally give the domain model the authority to decide how to integrate new information with what the server already knows. PUT and PATCH semantics are more like what you would use to write information into an anemic data store.
On the other hand, it is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding, 2009
It may help to recall that REST is the architectural style of the world wide web, and the only unsafe method supported by HTML is POST.
So replace your PATCH commands with POST, and you're on the right path.
Fielding, 2008
I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways. I could even go further and define these relationships as a standard, much like Atom has standardized a normal set of HTTP relationships with expected semantics
The same holds here - we aren't yet at "REST", but we have improved things by choosing standardized methods that are better aligned with our intended semantics.
One final note -- you should probably replace your use of DELETE with POST as well. DELETE is potentially a problem for two reasons -- the semantics aren't what you want, and a payload on a DELETE request has no defined semantics.
Expressed another way: DELETE is from the transferring documents over a network domain, not from your domain. A DELETE message sent to your resources should be understood to mean the same thing as a DELETE message sent to any other resource is understood. That's the uniform interface constraint at work: we all agree that the HTTP method tokens mean the same thing everywhere.
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect -- RFC 7231
As before: remote authoring semantics are not obviously a good fit for sending messages to a domain model.
This Google Cloud article, "API design: Understanding gRPC, OpenAPI and REST and when to use them", clarifies the REST vs RPC debate. REST is more relevant for entity-centric APIs whereas RPC is more relevant for action-centric APIs (and CQRS). The most mature REST level 3, with hypermedia controls, works well only for entities with simple state models.
Understand and evaluate the benefits of REST for your case first. Many APIs are REST-ish rather than RESTful. OpenAPI is actually RPC mapped over HTTP endpoints, but that hasn't prevented it from being widely adopted.

How to restrict user to access only his group elements in Loopback?

I was trying to find it in docs or anywhere on the web but I did not find.
What I am asking about?
I am building a website for multiple users. The frontend is not important; the backend API is being built in Loopback.
Every user will be assigned to some group, let's name it GROUP.
Group content will then be exposed on a subdomain, but that is not important now.
Users will be kind of admins of their group.
I will have plenty of different models, but I will always have to prevent users from accessing elements which do not belong to their group.
How should I do it? I think it will be some middleware but I do not know how to do it properly.
Of course, every user and every element have field "group_id".
I am also trying to find a good solution... I did find this npm package that looks worth a try: https://www.npmjs.com/package/loopback-component-access-groups
Here is a short description of what the package is used for:
"This loopback component enables you to add multi-tenant style access controls to a loopback application. It enables you to restrict access to model data based on a user's roles within a specific context."
I'm struggling with the same problem, and I have not yet found a satisfactory response.
My workaround is explained in this question. I've got my user ID and, with this, I retrieve the data I need to restrict the access. Then I alter the query in accordance with the fetched data.
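In LoopBack 3, that kind of query rewriting can be done with an 'access' operation hook. A rough sketch (the group lookup is hypothetical, and how you resolve the caller from the access token depends on your setup):

module.exports = function (Site: any) {
  // Hypothetical helper: map the caller (e.g. ctx.options.accessToken.userId) to their group_id.
  function resolveGroupId(options: any): Promise<string | null> {
    return Promise.resolve(null);   // look up the caller's group in your own datastore here
  }

  Site.observe("access", function (ctx: any, next: (err?: Error) => void) {
    resolveGroupId(ctx.options)
      .then(function (groupId) {
        // Force every query on this model to be scoped to the caller's group.
        ctx.query = ctx.query || {};
        ctx.query.where = Object.assign({}, ctx.query.where, { group_id: groupId });
        next();
      })
      .catch(next);
  });
};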

Protect remote resources when served with nodejs

This is more of an architecture question involving nodejs as implementation.
I have files, in a folder not exposed by the webserver, that I want to offer to the user.
The way nodejs should expose the resource to the end user is via a one-shot link that, once consumed, is no longer available.
The user through the entire experience should never know the real location of the file.
I'm sure this is a common architecture pattern, but I have never implemented something similar.
Looking at scalability, the resource shouldn't be copied to either disk or RAM, and if possible the solution should not rely on a DB token-tracking system.
I don't necessarily need a code implementation, but a detailed explanation of how I should implement it.
Thank you so much
Give user a cookie
Create a temporary association (in db) between cookie and a generated ID for the user (or the hash of it, if you want to be fancy)
Give user the ID
When user requests resource by ID:
Test to see if the ID (or its hash, if you want to be fancy) is in the DB
If it is, give the user the resource and destroy the association between the user and the resource ID
Yes, that's a DB token tracking system. Hey, that's the only way.
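A very rough sketch of that flow (TypeScript with Express; an in-memory Map stands in for the DB here, and the file paths are placeholders):

import express from "express";
import { randomUUID } from "node:crypto";
import { createReadStream } from "node:fs";

const app = express();
const tokens = new Map<string, string>();   // one-shot token -> path of the protected file

// Issue a one-shot link for a file that the webserver never exposes directly.
app.post("/files/:name/link", (req, res) => {
  const token = randomUUID();
  tokens.set(token, `/srv/protected/${req.params.name}`);
  res.json({ url: `/download/${token}` });
});

// Consume the link: stream the file (no full copy in RAM) and forget the token so it only works once.
app.get("/download/:token", (req, res) => {
  const filePath = tokens.get(req.params.token);
  if (!filePath) {
    res.sendStatus(404);
    return;
  }
  tokens.delete(req.params.token);
  createReadStream(filePath).pipe(res);
});

app.listen(3000);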
One way to avoid depending on a DB, would be to maybe create a symbolic link in the filesystem (based on the token), that would be removed after a request for it. Would not work satisfactory on windows though.
Example (pseudo):
Create token (guid, or similar)
symlink guid -> actual file
once request is completed, remove symlink
However, I don't think there is a reliable way of knowing if the file was successfully downloaded, so you better prepare for that. Some sort of pingback when the file was completely downloaded is probably the most reliable way that I can think of right now.
For scalability, make sure that the symlink is on a shared file system. Clustered node.js instances on the same server, will be fine though.
If this needs to be restricted to an authenticated user, you could combine the guid with your auth token, and prepend/append it before looking for a file.
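A sketch of the symlink variant (TypeScript; it assumes the link directory is served statically, e.g. by express.static or nginx, while the real files live outside the web root, and all paths here are placeholders):

import { randomUUID } from "node:crypto";
import { symlink, unlink } from "node:fs/promises";
import path from "node:path";

const PROTECTED_DIR = "/srv/protected";        // never exposed by the webserver
const PUBLIC_LINK_DIR = "/srv/public/links";   // exposed, e.g. via express.static

// Create a one-shot link; the client only ever sees /links/<guid>, never the real location.
export async function createOneShotLink(fileName: string): Promise<string> {
  const guid = randomUUID();
  await symlink(path.join(PROTECTED_DIR, fileName), path.join(PUBLIC_LINK_DIR, guid));
  return `/links/${guid}`;
}

// Call this from the "download finished" pingback mentioned above, and the link is gone.
export async function consumeOneShotLink(guid: string): Promise<void> {
  await unlink(path.join(PUBLIC_LINK_DIR, guid));
}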

parse.com security

Recently I discovered how useful and easy parse.com is.
It really speeds up the development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
But how secure is it? From what I understand, you have to embed your app private key in the code, thus granting access to the data.
But what if someone is able to recover the key from your app? I tried it myself. It took me 5 minutes to find the private key from a standard APK, and there is also the possibility to build a web app with the private key hard-coded in your javascript source where pretty much anyone can see it.
The only way I've found to secure the data is ACLs (https://www.parse.com/docs/data), but this still means that anyone may be able to tamper with writable data.
Can anyone enlighten me, please?
As with any backend server, you have to guard against potentially malicious clients.
Parse has several levels of security to help you with that.
The first step is ACLs, as you said. You can also change permissions in the Data Browser to disable unauthorized clients from making new classes or adding rows or columns to existing classes.
If that level of security doesn't satisfy you, you can proxy your data access through Cloud Functions. This is like creating a virtual application server to provide a layer of access control between your clients and your backend data store.
I've taken the following approach in the case where I just needed to expose a small view of the user data to a web app.
a. Create a secondary object which contains a subset of the secure object's fields.
b. Using ACLs, make the secure object only accessible from an appropriate login
c. Make the secondary object public read
d. Write a trigger to keep the secondary object synchronised with updates to the primary.
I also use cloud functions most of the time but this technique is useful when you need some flexibility and may be simpler than cloud functions if the secondary object is a view over multiple secure objects.
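A hedged Cloud Code sketch of the trigger in step d (newer Parse Server syntax; the class and field names are invented for illustration):

Parse.Cloud.afterSave("SecureProfile", async (request) => {
  const secure = request.object;

  // Find or create the public "view" row that mirrors a subset of the secure object's fields.
  const query = new Parse.Query("PublicProfile");
  query.equalTo("source", secure);
  const pub = (await query.first({ useMasterKey: true })) || new Parse.Object("PublicProfile");

  pub.set("source", secure);
  pub.set("displayName", secure.get("displayName"));   // the exposed subset

  // Public read, no client writes; only this trigger (master key) maintains the row.
  const acl = new Parse.ACL();
  acl.setPublicReadAccess(true);
  pub.setACL(acl);

  await pub.save(null, { useMasterKey: true });
});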
What I did was the following.
Restrict public read/write for all classes. The only way to access the class data would then be through cloud code.
Verify that the user is a logged-in user using the request.user parameter, that the user session is not null, and that the object id is legit.
Once the user is verified, allow the data to be retrieved using the master key.
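A rough Cloud Code sketch of that flow (newer Parse Server syntax; the class and field names are made up):

Parse.Cloud.define("getMyDocuments", async (request) => {
  // Reject calls that don't carry a valid user session.
  if (!request.user) {
    throw new Parse.Error(Parse.Error.INVALID_SESSION_TOKEN, "Not logged in");
  }
  // The class itself is locked down for clients, so the query runs with the master key.
  const query = new Parse.Query("Document");
  query.equalTo("owner", request.user);
  return query.find({ useMasterKey: true });
});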
Just keep tight control of your Global Level Security options (client class creation, etc.), your Class Level Security options (you can, for instance, disable clients deleting _Installation entries; it's also common to disable user field creation for all classes), and, most important of all, look out for the ACLs.
Usually I use beforeSave triggers to make sure the ACLs are always correct. So, for instance, _User objects are where the recovery email is located. We don't want other users to be able to see each other's recovery emails, so all objects in the _User class must have read and write set to the user only (with public read false and public write false).
This way only the user itself can tamper with their own row. Other users won't even notice this row exists in your database.
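A hedged beforeSave sketch along those lines (note that a brand-new user has no id yet at this point, so in practice you may need to set the ACL in an afterSave or at sign-up instead):

Parse.Cloud.beforeSave(Parse.User, (request) => {
  const user = request.object;
  if (!user.id) return;                    // skip brand-new users (no id to build the ACL from yet)
  const acl = new Parse.ACL(user);         // read/write for this user only
  acl.setPublicReadAccess(false);
  acl.setPublicWriteAccess(false);
  user.setACL(acl);
});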
One way to limit this further in some situations is to use cloud functions. Let's say one user can send a message to another user. You may implement this as a new class Message, with the content of the message, and pointers to the user who sent the message and to the user who will receive the message.
Since the user who sent the message must be able to cancel it, and since the user who received the message must be able to receive it, both need to be able to read this row (so the ACL must have read permissions for both of them). However, we don't want either of them to tamper with the contents of the message.
So you have two alternatives: either you create a beforeSave trigger that checks whether the modifications the users are trying to make to this row are valid before committing them, or you set the ACL of the message so that nobody has write permissions, and you create cloud functions that validate the user and then modify the message using the master key.
Point is, you have to make these considerations for every part of your application. As far as I know, there's no way around this.
