How to perform some tasks with an incoming request before saving it or responding with Django REST Framework?

I'm quite new to Django. I'm trying to migrate an old API built in Express to Django REST Framework.
Brief story:
The API is meant to receive different kinds of payloads from different devices, for example:
{
  "device": "device001",
  "deviceType": "temperature_device",
  "deviceTs": timestamp,
  "payload": {
    "airTemp": X,
    "airHum": Y
  }
}
The payload won't always be the same; other devices (of different types) will send different key-value pairs in the "payload" field.
I'm using Django REST Framework, alongside model serializers and a GenericViewSet.
The problem is that before storing the data in the DB and returning the HTTP response, I need to perform data validation (minimum and maximum values). In some cases the device sends "corrupted" data (for example, negative numbers arrive with the syntax 1.-5 instead of -1.5), and I need to fix these values. Finally, I need to perform two HTTP requests to an external API with the fixed payload and an API key (which should be stored in the device details model in my database).
So, in short: how can I perform any kind of pre-processing on a request BEFORE storing the data in the DB and returning the HTTP response?

You will receive your payload in request.data; then you will have to serialize it and validate your payload according to your requirements.
Here is the DRF serialization document, which will help you understand how exactly serialization works.
And here is the DRF validators document, to understand how validators work.
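As a minimal sketch of where that pre-processing can live (the Reading model, its fields, and the external URL below are assumptions for illustration, not part of the question), the value fix-ups and range checks fit naturally in the serializer's validate() method, and the external calls in the viewset's perform_create() hook, which DRF runs after validation but before the response is returned:

# Illustrative sketch only -- Reading, its fields, and the URL are hypothetical.
import requests
from rest_framework import serializers, viewsets

class ReadingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Reading  # hypothetical model storing device payloads
        fields = "__all__"

    def validate(self, data):
        payload = data["payload"]
        for key, value in payload.items():
            # Repair corrupted negatives: "1.-5" was meant to be -1.5
            if isinstance(value, str) and ".-" in value:
                whole, frac = value.split(".-")
                payload[key] = -float(f"{whole}.{frac}")
        # Range check with hypothetical bounds
        if not (-40.0 <= payload.get("airTemp", 0.0) <= 85.0):
            raise serializers.ValidationError("airTemp out of range")
        return data

class ReadingViewSet(viewsets.ModelViewSet):
    queryset = Reading.objects.all()
    serializer_class = ReadingSerializer

    def perform_create(self, serializer):
        instance = serializer.save()  # data is stored only after validation
        # Forward the fixed payload using the API key kept on the device record
        requests.post(
            "https://external.example/api",  # placeholder URL
            json=instance.payload,
            headers={"Authorization": instance.device.api_key},  # hypothetical FK
        )

If validation fails, DRF raises a 400 response on its own, so nothing is stored and no external call is made.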

Related

Is it okay if the command model (microservice) knows the resource URL of the query model in the 201 Created response? (CQRS)

I divided my application into a command model and a query model.
When a command is executed on the command model, an event is published, and then the query model creates its own data and persists it (this occurs in the same transaction).
When the user sends data with the POST method, the command model has to return 201 Created.
My question is: is it okay for the command model to know about the query model's resource URL?
(Is it okay for the command model's controller to be coupled with the query model?)
For example:
Request:
POST /articles
body: { title: "the title", body: "the body" }
Response:
201 Created
Location: /subscription/news
The UI only reads data from the query model; the query model has some different URL patterns compared to the command model, and it only provides news as a collection.
Does the above example make sense? What do you think?
Putting a reference to the query service's (GET) resource in the HTTP response of the command service's (POST) request does not imply both services are coupled. Only information about where to find the freshly created resource is stored in the header; the services and their functionality remain separated.
If you want to know more about automatically generating the URLs instead of hardcoding them (I assume you mean hard-coded URLs by coupling between services), you could also take a look at HATEOAS.
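For illustration, the command side's 201 response might carry the query-side link in the Location header alone, or additionally HATEOAS-style in the body (the link relation below is made up for the example):

201 Created
Location: /subscription/news

{
  "title": "the title",
  "body": "the body",
  "_links": {
    "self": { "href": "/subscription/news" }
  }
}

Either way, the client only follows a link it was handed; it never needs to know how the query model's URL space is laid out.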

How to retrieve all user tokens and send FCM to all

I am trying to send an FCM message to multiple users at a time. I can't seem to find a clear answer other than using a topic, but I have several conditions which make this quite difficult on my side. Is it at all possible to extract all tokens from a particular document (similar to setting it up as a topic, but based on my logic) and pass that array to the Cloud Function?
My current code is as follows and it works for a single user (I am only posting the part where the token is extracted):
return admin.firestore().doc('Seller_tokens/tokens/' + brand + '/wc').get().then(usertokensdoc => {
  const sellertokenID = usertokensdoc.get("dXufWMvOTLXUOyj8XNv9NFmsQ4x1");
  const payload = {
    data: {
      title: 'FCM - Test',
      content: 'FCM - This is a test message',
    },
    token: sellertokenID
  };
For the line:
const sellertokenID = usertokensdoc.get("dXufWMvOTLXUOyj8XNv9NFmsQ4x1");
I do not want to specify a single user ID (the user ID is the field name in the document). Hence I would like to pass an array, if possible.
As you mentioned, and as clarified in the official documentation Send messages to multiple devices, there are only two ways of sending messages to multiple tokens:
Firebase Cloud Messaging provides these two ways to target a message to multiple devices:
Topic messaging, which allows you to send a message to multiple devices that have opted in to a particular topic.
Device group messaging, which allows you to send a message to multiple devices that belong to a group you define.
In addition to this, as mentioned in this other question here, there is no API for you to get all tokens at once, so you will need to get them individually for usage.
So, to summarize, there is no automatic way of getting the tokens, but once you have them, I believe using device group messaging would be better for you, since you can define specific groups to receive the FCM, per the tokens that you retrieved.
Let me know if the information helped you!
You can retrieve the token documents based on your conditions in a Cloud Function, loop through them, create an array of tokens, and pass that array to Firebase messaging with admin.messaging().sendToDevice(tokens, payload), as sketched below. The URL below has a very good example.
FCM - Cloud Functions Example
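A hedged sketch of that loop (the collection path and field name are assumptions carried over from the question, not a verified schema):

const admin = require('firebase-admin');
admin.initializeApp(); // assumes default Cloud Functions credentials

function notifyAllSellers(brand, payload) {
  // Query whatever collection holds one token per document
  return admin.firestore().collection('Seller_tokens/tokens/' + brand) // hypothetical path
    .get()
    .then(snapshot => {
      const tokens = [];
      snapshot.forEach(doc => {
        const token = doc.get('token'); // hypothetical field name
        if (token) tokens.push(token);
      });
      // sendToDevice accepts an array of registration tokens
      return admin.messaging().sendToDevice(tokens, payload);
    });
}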

What is the proper architecture for requesting a document vs. document details in a REST API?

I'm designing a REST API using NodeJS and Express for managing different types of documents (txt, pdf, doc, etc). The document model is something like:
{
  id,
  category,
  name,
  path,
  tags,
  etc..
}
I currently have a route to GET a single document.
/documents/:id
This route serves up the actual document. I would love some guidance on the proper way to serve up the document details (i.e. the name, category, etc). Should I use a different URL? Send details in headers? Use response.format? Make the client specify via query?
Edit: I should clarify that the documents will be stored on the server filesystem and will not be directly accessible by any client.
If the document details are coming from a different source, the most straightforward way is to set up a different endpoint, /document-details/:id.
Otherwise, if a document can be stringified into JSON, you could nest both in the same response:
{
  document: {},
  details: {
    category,
    name,
    path,
    tags
  }
}
I would recommend against sending any details about the document in the HTTP headers; those are usually used to send details about the response itself.
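A minimal Express sketch of the two-endpoint approach (findDocumentById and the metadata shape are hypothetical stand-ins for your own lookup and model):

const express = require('express');
const app = express();

app.get('/documents/:id', async (req, res) => {
  const doc = await findDocumentById(req.params.id); // hypothetical lookup
  if (!doc) return res.sendStatus(404);
  res.sendFile(doc.path); // stream the file itself from the server filesystem
});

app.get('/document-details/:id', async (req, res) => {
  const doc = await findDocumentById(req.params.id);
  if (!doc) return res.sendStatus(404);
  const { id, category, name, tags } = doc;
  res.json({ id, category, name, tags }); // metadata only, no file contents
});

app.listen(3000);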

How do I save and retrieve information across invocations of my agent in Dialogflow?

I would like my Actions on Google agent to store and retrieve certain pieces of information across invocations - like a cookie. How do I do this?
You have a lot of options on how you want to do this, depending on exactly what you're trying to do. It isn't exactly like a web cookie, although there are similarities.
If you want the equivalent of a session cookie, information that is retained during a single conversation, then your options are:
Using the Session ID provided as part of the information sent to you on each invocation and tracking this in your fulfillment.
Storing information you want retained using a Dialogflow context.
If you are using the actions-on-google JavaScript library, storing this in the app.data object created for you.
If you want the equivalent of a long-lasting cookie to retain information between conversations, then your options are:
Using the anonymous User ID provided as part of the information sent to you on each invocation and tracking this in your fulfillment.
If you are using the actions-on-google JavaScript library, storing this in the app.userStorage object created for you.
Storing it as part of the string in the JSON response under data.google.userStorage.
Some more information about each of these
Session ID
A different Session ID is created for each conversation you have. You can get this Session ID by examining the JSON sent to your webhook in the sessionId parameter.
You can then look this up in a data store of some sort that you manage.
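A hedged sketch of that lookup, with an Express webhook and an in-memory Map standing in for a real data store (the sessionId field follows the description above; everything else is illustrative):

const express = require('express');
const app = express();
app.use(express.json());

const sessions = new Map(); // stand-in for a real database

app.post('/webhook', (req, res) => {
  const sessionId = req.body.sessionId; // provided on every invocation
  const state = sessions.get(sessionId) || { turns: 0 };
  state.turns += 1; // anything you want retained for this conversation
  sessions.set(sessionId, state);
  res.json({ speech: 'Turn ' + state.turns, displayText: 'Turn ' + state.turns });
});

app.listen(3000);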
Dialogflow context
Contexts are powerful tools that are available with Dialogflow. You return a context as part of your fulfillment webhook and indicate the name of the context, its lifetime (how many more rounds of the conversation it will be passed back to your webhook), and any parameters associated with the context (string key/value pairs).
Contexts are especially useful in helping determine what intents may be called. You can indicate what contexts must be active for an Intent to be recognized by Dialogflow.
If you're using the actions-on-google node.js library, you can set a context using something like this:
var contextParameters = {
  foo: "Something foothy",
  bar: "Your local bar."
};
app.setContext( "remember_this", 5, contextParameters );
You need to do this before you call app.ask() or app.tell().
Or you can do the equivalent in the JSON as part of the contextOut block of the response
"contextOut": [
{
"name": "remember_this",
"lifespan": 5,
"parameters": {
"foo": "Something foothy",
"bar": "Your local bar."
}
}
]
The next time your webhook is called, you can fetch this context either by looking at the result.contexts array or by using the app.getContext() or app.getContextArgument() methods in the library.
Using app.data
If you're using the library, Google has done some of the work for you. The app.data object is created for you. Any values you set in the object are available for the lifetime of the session - you just read them in later calls to your webhook.
(Under the covers, Google uses a context for this, so there is no magic. The two work together and you're free to do both.)
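A minimal sketch of app.data with the v1 actions-on-google library (the action names and the 'color' argument are hypothetical):

const App = require('actions-on-google').DialogflowApp;

exports.webhook = (request, response) => {
  const app = new App({ request: request, response: response });

  function rememberColor(app) {
    app.data.favoriteColor = app.getArgument('color'); // kept for the session
    app.ask('Got it. What else?');
  }

  function recallColor(app) {
    app.ask('You said ' + app.data.favoriteColor + '.');
  }

  const actionMap = new Map();
  actionMap.set('remember_color', rememberColor); // hypothetical actions
  actionMap.set('recall_color', recallColor);
  app.handleRequest(actionMap);
};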
Anonymous UserID
When a user first uses your action, a user ID is generated. This ID doesn't give you access to any specific information about them, and isn't used for any other action, but every time you see it, you can be assured that it was the same user that used it on a previous occurrence. Just like a cookie, however, the user can reset it and a new ID will be generated for them for your action.
You get this from the JSON at originalRequest.user.userId or by using app.getUser().userId. Once you have it, you'd use a data store of some sort to store and retrieve information about this user.
Using app.userStorage
Similar to app.data, there is also an app.userStorage object that is created for you for each user. Any changes you make to this object are saved in between conversations you have with this user.
Unlike app.data, however, this doesn't get stored in a context. It has its own storage method. Which leads to...
Storing it in JSON
If you're not using the actions-on-google library, you still have access to userStorage through the response and request JSON directly. You need to store this as a string, but if you need to store a more complex object, a common method is to stringify it as JSON.
You'll store this value under data.google.userStorage in the response and can retrieve it under originalRequest.data.user.userStorage in the request your webhook receives.
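For example, a hedged sketch of doing that by hand (the remembered object is arbitrary; the JSON paths are the ones named above):

// In the response you send back:
const remembered = { favoriteColor: 'blue', visits: 3 };
const response = {
  speech: 'OK, I will remember that.',
  data: {
    google: {
      userStorage: JSON.stringify(remembered) // must be a string
    }
  }
};

// In the next request your webhook receives:
const raw = req.body.originalRequest.data.user.userStorage;
const state = raw ? JSON.parse(raw) : {};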
You can save the information in a context with a key-value parameter.
SAVING VALUES IN CONTEXT:
agent.setContext({
  name: 'context-name',
  lifespan: 5,
  parameters: {
    'parameter-name': 'parameter-value'
  }
});
GETTING VALUES FROM CONTEXT
agent.getContext('context-name');
For more Details : https://dialogflow.com/docs/contexts/contexts-fulfillment
You could also use a Google Cloud database like BigQuery or Firestore
Sounds like you may want to check out Account Linking: https://developers.google.com/actions/identity/account-linking. With account linking you can collect end-user information, which you exchange with Google by providing a unique key. This unique key becomes part of every request you receive from Google, so when you get that unique key you look up the information you collected from the end-user. In your case, you would store credentials or whatever key is required to access the end-user information. After the initial linking, any new data you obtain could be stored along with the original information collected, based on the unique key obtained during account linking.
For this purpose, I just made a Node module that does exactly that: with an external JSON file from an API call, I needed to store and add additional information to retrieve later. I think you can do a lot with this module: store objects, arrays, JSON, plain values, navigation history, going back to the previous page.
It works like localStorage or cookies.
There's no limit; you can create multiple storages by name (key) and value. It's new and I'm testing it for bugs right now on my own project.
Test on Runkit
On npm
const vStorage = require('virtual-storage');
vStorage.set('name', { title: 'Title 1', description: 'Description 1' });
let getStorage_name = vStorage.get('name');
console.log(getStorage_name.title);

CouchDB: Restricting users to only replicating their own documents

I'm having trouble finding documentation on the request object argument used in replication filters ('req' in the sample below):
function(doc, req) {
  // what is inside req???
  return false;
}
This old CouchBase blog post has a little code snippet that shows the userCtx variable being a part of the request object:
What is this userCtx? When you make an authenticated request against CouchDB, either using HTTP basic auth, secure cookie auth or OAuth, CouchDB will verify the user's credentials. If they match a CouchDB user, it populates the req.userCtx object with information about the user.
This userCtx object is extremely useful for restricting replication of documents to the owner of the document. Check out this example:
function(doc, req) {
  // require a valid request user that owns the current doc
  if (!req.userCtx.name) {
    throw("Unauthorized!");
  }
  if (req.userCtx.name == doc.owner) {
    return true;
  }
  return false;
}
But the problem now is that CouchDB requires the filter method to be explicitly chosen by the initiator of the replication (in this case, the initiator is a mobile user of my web app):
curl -X POST http://127.0.0.1:5984/_replicate \
  -d '{"source": "database",
       "target": "http://example.com:5984/database",
       "filter": "example/filtername"}'
The Question
Is there a way to enforce a specific filter by default so that users are restricted to replicating only their own data? I'm thinking the best way to do this is to use a front end to CouchDB, like Nginx, and restrict all replication requests to ones that include that filter. Thoughts? Would love a way to do this without another layer in front of CouchDB.
Replication rights go hand in hand with a user's ability to read data: if your users share data within a single database, all of them have the right to replicate all of it to their local couches. So you can't apply any per-document read restriction unless you split the single shared database into several personal ones; this is the common pattern for such situations.
There is no way to enforce a changes-feed filter or other parameters the way views can. However, you can use rewrites to wrap requests to some resources with predefined query parameters, or even dynamic ones. This is not quite the solution you expected, but it is still better than nginx and logic on its side: presumably you'd want to allow users to specify custom filters with custom query parameters and enforce your own only if nothing is specified, right?
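As a hedged sketch of the rewrite approach (the design document and filter names are illustrative), a rule that pins the filter onto the changes feed might look roughly like:

{
  "_id": "_design/example",
  "rewrites": [
    {
      "from": "/changes",
      "to": "../../_changes",
      "query": {
        "filter": "example/filtername"
      }
    }
  ]
}

Replication clients would then be pointed at .../database/_design/example/_rewrite/changes instead of the raw _changes feed.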
P.S. The req object contains a lot of useful information about the current request. It was partially described on the wiki, but that's a little out of date. However, it's easy to view it with a simple show function:
function(doc, req) {
  return {json: req};
}
