What are all the ways CouchDB responses fail? - node.js

I'm building a Node.js application on the Express.js framework with CouchDB as a database. I'm using CouchDB's session API for maintaining session state, and various databases for different sections of data.
On essentially every request my application code makes a request to Couch, and if there's an error (with Node) I can respond appropriately by logging the error and redirecting to a 404 page or something like that. But if I get a CouchDB error, Node wouldn't consider it an error; it would consider it data. That's totally fine with me, as long as CouchDB can only return this format:
{
"error": "illegal_database_name",
"reason": "Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."
}
A JSON doc with two properties, error and reason. That's fine; I can parse it and return the appropriate message quite gracefully.
BUT! Is that all I can expect from CouchDB, or is there another way Couch might fail that wouldn't yield a JSON doc with those two fields (properties)?

dscape's advice to rely on the response codes is correct, and in most situations you will get an object with error and reason. Bulk-document errors are the only place I can think of where neither of these will be true: if just one document fails, you'll still get a 200, but you'll get the error/reason within the array element corresponding to the document that failed. See the docs for more info on that.
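For example, a bulk insert can succeed overall while individual documents still come back with error/reason. A minimal sketch of checking for that (not from the answer; it assumes Node 18+'s built-in fetch and a local CouchDB at 127.0.0.1:5984):

// Post several docs at once and surface per-document failures.
async function bulkInsert(docs) {
  const res = await fetch('http://127.0.0.1:5984/mydb/_bulk_docs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ docs }),
  });

  // The overall response can be a success even when some documents failed.
  const results = await res.json();

  // Each element either has ok/id/rev, or error/reason for the document
  // at the same position in the request.
  const failures = results.filter((r) => r.error);
  for (const f of failures) {
    console.error(`doc ${f.id || '(no id)'} failed: ${f.error} - ${f.reason}`);
  }
  return { results, failures };
}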

Related

Consecutive calls to updateOne of mongodb: 3rd one does not work

I receive 3 POST calls from the client, let's say within a second, and with the Node.js MongoDB driver I immediately (without any pause, sleep, etc.) try to insert the posted data into the database using updateOne. All the data is new, so every call should result in an insert.
Here is the code (js):
const myCollection = mydb.collection("mydata")
// Upsert by name; the callback only logs the error and ignores the result.
myCollection.updateOne(
  { name: req.data.name },
  { $set: { name: req.data.name, data: req.data.data } },
  { upsert: true },
  function (err, result) { console.log("UPDATEONE err: " + err) }
)
When I call this updateOne just once, it works; twice in succession, it works. But if I call it three or more times in succession, only the first two are correctly inserted into the database and the rest are not.
The error I get after updateOne is MongoWriteConcernError: No write concern mode named 'majority;' found in replica set configuration. However, I always get this error, even when the insertion is done correctly, so I don't think it is related to my problem.
You will probably suggest using updateMany, bulkWrite, etc., and you would be right, but I want to know why the insertion stops working after the second call.
Keep in mind that .updateOne() returns a Promise (when no callback is passed), so it should be handled properly in order to avoid concurrency issues. More info about it here.
The MongoWriteConcernError might be related to the connection string you are using: note the stray semicolon in the reported mode name 'majority;'. Check whether the string contains &w=majority and remove or fix it as recommended here.
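A rough sketch (not the asker's code) of handling the Promise by awaiting each call, assuming the official mongodb Node.js driver, so every write either resolves with a result or throws where it can be logged:

async function upsertAll(mydb, items) {
  const myCollection = mydb.collection("mydata");
  for (const item of items) {
    try {
      // Awaiting serializes the writes and surfaces errors per call.
      const result = await myCollection.updateOne(
        { name: item.name },
        { $set: { name: item.name, data: item.data } },
        { upsert: true }
      );
      console.log("matched:", result.matchedCount, "upsertedId:", result.upsertedId);
    } catch (err) {
      console.error("UPDATEONE err:", err);
    }
  }
}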

GraphQL implement "not permitted"

I have a GraphQL API that is governed by a permission system that I implemented.
I tried going with Graphql-shield but I didn't want to error out on the whole request if the client requests a forbidden field, so instead I implemented my own permission system.
Now, I need to solve a problem:
The way I implemented the permission system, every field is checked for permission, and if it is not permitted then null is returned. However, I would like to return some indication that the field was not actually null but rather "not permitted".
I thought about doing it in two ways:
During each check I append all fields that are not accessible to some query-wide variable and return it along with the query (probably in some middleware)
I extend all of the objects in my schema with a "permitted" field in which I return the value of the permission
Any suggestions?
IMHO it's not worth the effort ... the API FAQ or docs (available in GraphiQL/Playground) can contain a notice about 'unexpected null', ACL reasons, etc. That's enough for the majority of use cases.
If you still want to include some [debug] info in the response, extensions are there for that (see e.g. https://github.com/apollographql/apollo-tracing). In this case:
attach a list of 'field access denied' [structured] notices;
collect them in/from the resolvers in some context object and attach them (in middleware?) before the overall response is returned, as sketched below;
make it configurable (debug mode), too.
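A minimal sketch of that idea, assuming the plain graphql package rather than a specific server framework, a hypothetical permitted() helper, and a per-request deniedFields array on the context:

const { graphql, buildSchema } = require('graphql');

const schema = buildSchema('type Query { name: String, salary: Int }');

// Hypothetical permission check; replace with your own ACL logic.
const permitted = (context, field) => context.user.permissions.includes(field);

const rootValue = {
  name: () => 'Alice',
  salary: (args, context) => {
    if (!permitted(context, 'salary')) {
      context.deniedFields.push('salary'); // record instead of throwing
      return null;
    }
    return 100000;
  },
};

async function run(source, user, debug = true) {
  const contextValue = { user, deniedFields: [] };
  const result = await graphql({ schema, source, rootValue, contextValue });
  if (debug && contextValue.deniedFields.length > 0) {
    // extensions travel alongside data/errors without changing the schema
    result.extensions = { deniedFields: contextValue.deniedFields };
  }
  return result;
}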

Loopback 3 discards error information on multiple validation errors, turning 422 to 500, how can I solve that?

I'm migrating from Loopback 2 to 3.
I currently have an issue with validation errors and strong-error-handler.
When I post a bulk create which results in multiple validation errors, those get returned as an array of ValidationErrors.
Those errors get grouped by strong-error-handler into a 500 Internal Server Error, which is how it was before, but the details of the errors get discarded when debug is set to false.
In my example I upload an array of tags, but for each tag a uniqueness validation is executed. When 2 or more tags are already in the database, I get an array of errors instead of a single validation error.
I need a way to determine why the validation failed on the client side, but the details of the errors are discarded now.
Am I doing something wrong here, or should this be considered as a bug?
From the strong-error-handler documentation in LoopBack:
In production mode, strong-error-handler omits details from error responses to prevent leaking sensitive information:
For 5xx errors, the output contains only the status code and the status name from the HTTP specification.
For 4xx errors, the output contains the full error message (error.message) and the contents of the details property (error.details) that ValidationError typically uses to provide machine-readable details about validation problems. It also includes error.code to allow a machine-readable error code to be passed through which could be used, for example, for translation.
Am I doing something wrong here, or should this be considered as a bug?
No, this is the intended behaviour.
Safe error fields
You can mark the stack trace as a safe error field so that it will be displayed in production.
For example, the stack field is not displayed by default if you run LoopBack in production mode.
If you still want to display the stack field, change the config JSON in server/middleware.json:
"final:after": {
"strong-error-handler": {
"params": {
"safeFields": ["stack"]
}
}
}

StackExchange Redis Client StringSet returns false?

We have been using the StackExchange.Redis .NET client for several months without issue. Our logs indicate that StringSet returned false thousands of times over the course of an hour recently, but it is working as expected again.
I can't find what FALSE means anywhere. I assume this means that the value was not put in cache, but if that is correct, how do I tell why? The client is not throwing an exception. Can someone show me the API Specification that describes the return value and how to troubleshoot?
We are running against Redis in Azure if that matters.
result = cache.StringSet(fullKey, value, GetCacheTime(cacheType));
if (!result)
{
    if (_logger != null)
    {
        _logger.LogError("Failed to Set Cache");
    }
}
http://redis.io/commands/set
Simple string reply: OK if SET was executed correctly. Null reply: a Null Bulk Reply is returned if the SET operation was not performed because the user specified the NX or XX option but the condition was not met.
Though it looks like you are using SETEX (http://redis.io/commands/setex); are you setting a valid timespan as the third argument?
SETEX is atomic, and can be reproduced by using the previous two commands inside an MULTI / EXEC block. It is provided as a faster alternative to the given sequence of operations, because this operation is very common when Redis is used as a cache. An error is returned when seconds is invalid.
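For comparison (an illustrative sketch only, using Node's redis client rather than the asker's C# one), a conditional SET that is not performed comes back as a null reply rather than an error, which StackExchange.Redis surfaces as a false return instead of an exception:

const { createClient } = require('redis'); // node-redis v4, local Redis assumed

async function demo() {
  const client = createClient();
  await client.connect();

  await client.set('greeting', 'hello');

  // NX = "only set if the key does not exist"; the condition fails here,
  // so Redis returns a null reply instead of 'OK' -- no exception is thrown.
  const reply = await client.set('greeting', 'bye', { NX: true, EX: 60 });
  console.log(reply); // null

  await client.quit();
}

demo().catch(console.error);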

Best practice to pass query conditions in ajax request

I'm writing a REST API in Node.js that will execute a SQL query and send the results;
in the request I need to send the WHERE conditions, e.g.:
GET 127.0.0.1:5007/users           // gets the list of users
GET 127.0.0.1:5007/users
    id = 1                         // gets the user with id 1
Right now the conditions are passed from the client to the REST API in the request's headers.
In the API I'm using Sequelize, an ORM that needs to receive WHERE conditions in a particular form (an object). For example, given the condition:
(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)
this needs to be formatted as a nested object:
OR
├─ AND
│  ├─ x=1
│  └─ OR
│     ├─ y=2
│     └─ z=3
└─ AND
   ├─ x=3
   └─ y=1
so the object would be:
Sequelize.or(
  Sequelize.and(
    { x: 1 },
    Sequelize.or(
      { y: 2 },
      { z: 3 }
    )
  ),
  Sequelize.and(
    { x: 3 },
    { y: 1 }
  )
)
Now I'm trying to pass a simple string (like "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)"), but then I need a function on the server that can convert the string into the needed object. In my opinion this method has the advantage that the developer writing the client can pass the WHERE conditions in a simple, SQL-like way, and it is also independent of the ORM used, with no need to change the client if we change the server or use a different ORM.
The function to read and convert the conditions string into an object is giving me a headache (I'm trying to write one without success, so if you have some examples of how to do something like this I'd appreciate it).
What I would like is a route capable of executing almost any kind of SQL query and returning the results.
Right now I have a different route for everything:
127.0.0.1:5007/users     //to get all users
127.0.0.1:5007/users/1   //to get a single user
127.0.0.1:5007/lastusers //to get users registered in the last month
and so on for the other tables I need to query (one route for every kind of request I need in the client);
instead I would like to have only one route, something like:
127.0.0.1:5007/request
(when calling this route I would pass the table name and the conditions string)
Do you think this would be a good solution, or do you generally use other ways to handle this kind of thing?
Do you have any idea how to write a function to convert the conditions string into the desired object?
Any suggestion would be appreciated ;)
I would strongly advise you not to expose any part of your database model to your clients. Doing so means you can't change anything you expose without the risk of breaking the clients. One suggestion, given what you've shown, is that you can and should use query parameters to cut down on the number of endpoints you've got.
GET /users //to get all users
GET /users?registeredInPastDays=30 //to get users registered in the last month
GET /users/1 //to get a single user
Obviously "registeredInPastDays" should be renamed to something less clumsy .. it's just an example.
As far as the conditions string, there ought to be plenty of parsers available online. The grammar looks very straightforward.
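As an illustration, a rough sketch of such a parser: a small recursive-descent function that turns a string like "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)" into the nested Sequelize.and/Sequelize.or structure from the question. Only "=" comparisons are handled; quoting, escaping, and whitelisting of column names are deliberately left out.

const Sequelize = require('sequelize');

function parseConditions(input) {
  // Tokens: parentheses, AND, OR, and simple column=value comparisons.
  const tokens = input.match(/\(|\)|\bAND\b|\bOR\b|\w+\s*=\s*\w+/gi) || [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  // expr := term (OR term)*
  function expr() {
    const parts = [term()];
    while (peek() && peek().toUpperCase() === 'OR') {
      next();
      parts.push(term());
    }
    return parts.length === 1 ? parts[0] : Sequelize.or(...parts);
  }

  // term := factor (AND factor)*
  function term() {
    const parts = [factor()];
    while (peek() && peek().toUpperCase() === 'AND') {
      next();
      parts.push(factor());
    }
    return parts.length === 1 ? parts[0] : Sequelize.and(...parts);
  }

  // factor := '(' expr ')' | column=value
  function factor() {
    if (peek() === '(') {
      next();
      const inner = expr();
      if (next() !== ')') throw new Error('missing closing parenthesis');
      return inner;
    }
    const [column, value] = next().split('=').map((s) => s.trim());
    return { [column]: value }; // values stay strings in this sketch
  }

  return expr();
}

// parseConditions('(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)') yields the
// Sequelize.or(Sequelize.and(...), Sequelize.and(...)) structure shown above.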
IMHO the main disadvantage of your solution is that you are creating just another API for querying data. Why create something from scratch if it has already been created? You should use an existing mature query API and focus on your business logic rather than inventing something new.
For example, you can take the query syntax from OData. Many people have been developing that standard for a long time, and they have already considered different use cases and obstacles for a query API.
Resources are located with a URI. You can use or mix three ways to address them:
Hierarchically, with a sequence of path segments:
/users/john/posts/4711
Non-hierarchically, with query parameters:
/users/john/posts?minVotes=10&minViews=1000&tags=java
With matrix parameters, which affect only one path segment:
/users;country=ukraine/posts
This is normally sufficient, but it has limitations like the maximum URI length. In your case the problem is that you can't easily describe AND and OR conjunctions with query parameters. But you can use a custom or standard query syntax. For instance, if you want to find all cars or vehicles from Ford except the Capri with a price between $10000 and $20000, Google uses the search parameter
q=cars+OR+vehicles+%22ford%22+-capri+%2410000..%2420000
(the %22 is an escaped ", the %24 an escaped $).
If this does not work for your case and you want to pass data outside of the URI, the format is just a matter of taste. Adding a custom header like X-Filter may be a valid approach. I would tend to use a POST. Although you just want to query data, this is still RESTful if you treat your request as the creation of a search-result resource:
POST /search HTTP/1.1
your query-data
Your server should return the newly created resource in the Location header:
HTTP/1.1 201 Created
Location: /search/3
The result can still be cached and you can bookmark it or send the link. The downside is that you need an additional POST.
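A rough sketch of that pattern in Express (illustrative only; the in-memory store, the route names, and the parseConditions helper from the earlier sketch are assumptions):

const express = require('express');
const app = express();
app.use(express.json());

// In-memory store of search-result resources; a real app would persist them.
const searches = new Map();
let nextId = 1;

// POST /search creates a search-result resource from the posted query data.
app.post('/search', async (req, res) => {
  const id = String(nextId++);
  const where = parseConditions(req.body.conditions); // e.g. "(x=1 AND y=2)"
  const results = await User.findAll({ where });      // Sequelize model assumed
  searches.set(id, results);
  res.status(201).location(`/search/${id}`).json(results);
});

// GET /search/:id returns the stored result, so it can be cached or bookmarked.
app.get('/search/:id', (req, res) => {
  const results = searches.get(req.params.id);
  if (!results) return res.status(404).end();
  res.json(results);
});

app.listen(5007);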
