How to get and submit characters like letters with accents and 'ñ' in a Loopback 4 API - node.js

I have an API created in Loopback 4 which retrieves data from a PostgreSQL 13 database encoded with UTF8. Visiting the API explorer (localhost:3000/explorer) and executing GET requests, I noticed that even when the database fields contain characters like letters with accents and ñ's, the retrieved JSON only shows blanks in the position where the character should appear. For example, if the database has a field with a word like 'piña', the JSON returns 'pi a'.
When I try a POST request, inserting a field like 'ramírez' (note the í) into the database, the field is shown as 'ramφrez', and when I execute a GET of that entry, the JSON now has the correct value, 'ramírez'.
How can I fix that?

I'd recommend using the Buffer class:
var encodedString = Buffer.from('string', 'utf-8');
This way you will be able to return anything you want. The Buffer class is built into Node.js, so you don't need to install any dependencies.
If you don't get what you need, you can change the 'utf-8' part.
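For illustration, here is a minimal sketch of that idea, assuming the real culprit is a charset mismatch (e.g. UTF-8 bytes being decoded as Latin-1 somewhere between PostgreSQL and Node, which would match symptoms like 'ramφrez'). The 'latin1' source encoding below is an assumption; match it to whatever encoding your connection actually uses:

// Sketch: recover text whose UTF-8 bytes were decoded with the wrong charset.
// 'piña' read as Latin-1 shows up as 'piÃ±a'; re-encode it and decode as UTF-8.
const garbled = 'piÃ±a';
const fixed = Buffer.from(garbled, 'latin1').toString('utf-8');
console.log(fixed); // 'piña'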

Related

Passing query string in a GET request, express converts '+' character in query string into space. How to avoid this?

I am passing certain parameters in the query string of a GET request, let's say
HOST?email=john+1#gmail.com. When I try to access these in the node express server through req.query.email, I get the value 'john 1#gmail.com'. Express is converting the '+' character into a space character. Is there a way that I can stop this encoding?
If you want to use a literal + sign, you need to URL-encode it as %2b, e.g.
HOST?email=john%2b1#gmail.com.
That will be decoded correctly by express, e.g.
req.query.email: john+1#gmail.com.
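As a small sketch (HOST stands in for your actual host), the encoding can be done on the client with encodeURIComponent, which also takes care of other reserved characters like '#':

// Percent-encode the query value before it goes into the URL.
const email = 'john+1#gmail.com';
const url = 'HOST?email=' + encodeURIComponent(email);
// url === 'HOST?email=john%2B1%23gmail.com'
// Express then decodes req.query.email back to 'john+1#gmail.com'.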

How to copy S3 object with special character in key

I have objects in an S3 bucket, and I do not have control over the names of the keys. Some of these keys have special characters, and the AWS SDK does not like them.
For example, one object key is: folder/‍Johnson, Scott to JKL-Discovery.pdf, it might look fine at first glance, but if I URL encode it: folder%2F%E2%80%8DJohnson%2C+Scott+to+JKL-Discovery.pdf, you can see that after folder/ (or folder%2F when encoded) there is a random sequence of characters %E2%80%8D before Johnson.
It is unclear where these characters come from; however, I need to be able to handle this use case. When I try to make a copy of this object using the Node.js AWS SDK:
const copyParams = {
  Bucket,
  CopySource,
  Key: `folder/‍Johnson, Scott to JKL-Discovery.pdf`
};
let metadata = await s3.copyObject(copyParams).promise();
It fails and can't find the object; if I encodeURI() the key, it also fails.
How can I deal with this?
DO NOT SUGGEST I CHANGE THE ALLOWED CHARACTERS IN THE KEY NAME. I DO NOT HAVE CONTROL OVER THIS
Faced the same problem, but with PHP. The copyObject() method automatically encodes the destination parameters (Bucket and Key), but not the source parameter (CopySource), so that one has to be encoded manually. In PHP it looks like this:
$s3->copyObject([
    'Bucket' => $targetBucket,
    'Key' => $targetFilePath,
    'CopySource' => $s3::encodeKey("{$sourceBucket}/{$sourceFilePath}"),
]);
I'm not familiar with node.js, but a similar encodeKey() method may exist there that can be used?
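For what it's worth, here is a hedged sketch of the same idea with the AWS SDK for JavaScript (v2). As far as I know there is no encodeKey() helper in that SDK, so the CopySource is encoded by hand; the bucket names and destination key below are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const sourceBucket = 'my-bucket'; // placeholder
const sourceKey = 'folder/\u200DJohnson, Scott to JKL-Discovery.pdf'; // note the U+200D

const copyParams = {
  Bucket: sourceBucket, // destination bucket (placeholder)
  Key: 'folder/Johnson, Scott to JKL-Discovery (copy).pdf',
  // Encode each path segment so the '/' separators survive the encoding:
  CopySource: [sourceBucket, ...sourceKey.split('/')]
    .map(encodeURIComponent)
    .join('/'),
};

s3.copyObject(copyParams).promise()
  .then((metadata) => console.log('copied', metadata))
  .catch(console.error);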
Trying your string, there's a tricky zero-width Unicode character in it...
http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=342+200+213&mode=obytes
I would sanitize the string, removing the non-ASCII Unicode characters, and then proceed with URL encoding as requested by the official docs:
encodeURI('folder/‍johnson, Scott to JKL-Discovery.pdf'.replace(/[^\x00-\x7F]/g, ""))

Best practice to pass query conditions in ajax request

I'm writing a REST API in Node.js that will execute a SQL query and send the results;
in the request I need to send the WHERE conditions, e.g.:
GET 127.0.0.1:5007/users //gets the list of users
GET 127.0.0.1:5007/users
    id = 1 //gets the user with id 1
Right now the conditions are passed from the client to the rest api in the request's headers.
In the API I'm using Sequelize, an ORM that needs to receive the WHERE conditions in a particular form (an object). For example, the condition:
(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)
this needs to be formatted as a nested object, with a condition tree like:
OR -+- AND -+- x=1
    |       +- OR -+- y=2
    |              +- z=3
    +- AND -+- x=3
            +- y=1
so the object would be:
Sequelize.or(
  Sequelize.and(
    { x: 1 },
    Sequelize.or(
      { y: 2 },
      { z: 3 }
    )
  ),
  Sequelize.and(
    { x: 3 },
    { y: 1 }
  )
)
Now I'm trying to pass a simple string (like "(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)"), but then I need a function on the server that can convert the string into the needed object. This method, in my opinion, has the advantage that the developer writing the client can express the WHERE conditions in a simple way, like SQL, and it is also independent of the ORM used, with no need to change the client if we change the server or use a different ORM.
The function to read and convert the conditions string into an object is giving me a headache (I'm trying to write one without success, so if you have some examples of how to do something like this...).
What I would like is a route capable of executing almost any kind of SQL query and returning the results.
Right now I have a different route for everything:
127.0.0.1:5007/users //to get all users
127.0.0.1:5007/users/1 //to get a single user
127.0.0.1:5007/lastusers //to get users registered in the last month
and so on for the other tables I need to query (one route for every kind of request the client needs);
instead, I would like to have only one route, something like:
127.0.0.1:5007/request
(when calling this route I would pass the table name and the conditions string)
Do you think this would be a good solution, or do you generally use other ways to handle this kind of thing?
Do you have any idea on how to write a function to convert the conditions' string into the desired object?
Any suggestion would be appreciated ;)
I would strongly advise you not to expose any part of your database model to your clients. Doing so means you can't change anything you expose without the risk of breaking the clients. One suggestion, based on what you've supplied: you can and should use query parameters to cut down on the number of endpoints you've got.
GET /users //to get all users
GET /users?registeredInPastDays=30 //to get users registered in the last month
GET /users/1 //to get a single user
Obviously "registeredInPastDays" should be renamed to something less clumsy; it's just an example.
As far as the conditions string goes, there ought to be plenty of parsers available online; the grammar looks very straightforward.
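If you do want to roll your own, here is a minimal sketch of a recursive-descent parser for that grammar. It is illustrative only: it supports just '=', AND, OR and parentheses, does no validation, and produces plain nested objects that you would still map onto Sequelize.and / Sequelize.or yourself:

function parseCondition(input) {
  // Tokenize into '(', ')', 'AND', 'OR' and field=value pairs.
  const tokens = input.match(/\(|\)|AND|OR|[^\s()=]+=[^\s()]+/g) || [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function parseExpr() { // expr := term ('OR' term)*
    let node = parseTerm();
    while (peek() === 'OR') {
      next();
      node = { or: [node, parseTerm()] };
    }
    return node;
  }

  function parseTerm() { // term := factor ('AND' factor)*
    let node = parseFactor();
    while (peek() === 'AND') {
      next();
      node = { and: [node, parseFactor()] };
    }
    return node;
  }

  function parseFactor() { // factor := '(' expr ')' | field=value
    if (peek() === '(') {
      next(); // consume '('
      const node = parseExpr();
      next(); // consume ')'
      return node;
    }
    const [field, value] = next().split('=');
    return { [field]: value };
  }

  return parseExpr();
}

// parseCondition('(x=1 AND (y=2 OR z=3)) OR (x=3 AND y=1)')
// -> { or: [ { and: [ { x: '1' }, { or: [ { y: '2' }, { z: '3' } ] } ] },
//            { and: [ { x: '3' }, { y: '1' } ] } ] }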
IMHO the main disadvantage of your solution is that you are creating just another API for querying data. Why create something from scratch if it already exists? You should use an existing, mature query API and focus on your business logic rather than inventing something new.
For example, you can take the query syntax from OData. Many people have been developing that standard for a long time, and they have already considered the different use cases and obstacles for a query API.
Resources are located with a URI. You can use or mix three ways to address them:
Hierarchically with a sequence of path segments:
/users/john/posts/4711
Non-hierarchically with query parameters:
/users/john/posts?minVotes=10&minViews=1000&tags=java
With matrix parameters which affect only one path segment:
/users;country=ukraine/posts
This is normally sufficient enough but it has limitations like the maximum length. In your case a problem is that you can't easily describe and and or conjunctions with query parameters. But you can use a custom or standard query syntax. For instance if you want to find all cars or vehicles from Ford except the Capri with a price between $10000 and $20000 Google uses the search parameter
q=cars+OR+vehicles+%22ford%22+-capri+%2410000..%2420000
(the %22 is a escaped ", the %24 a escaped $).
If this does not work for your case and you want to pass data outside of the URI the format is just a matter of your taste. Adding a custom header like X-Filter may be a valid approach. I would tend to use a POST. Although you just want to query data this is still RESTful if you treat your request as the creation of a search result resource:
POST /search HTTP/1.1
your query-data
Your server should return the newly created resource in the Location header:
HTTP/1.1 201 Created
Location: /search/3
The result can still be cached and you can bookmark it or send the link. The downside is that you need an additional POST.
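As a minimal sketch of that pattern in Express (the route names and the in-memory store are assumptions, not part of any standard):

const express = require('express');
const app = express();
app.use(express.json());

const searches = []; // in-memory store of created search resources

// POST /search creates a search resource from the query conditions in the body.
app.post('/search', (req, res) => {
  const id = searches.push(req.body); // push() returns the new length: a 1-based id
  res.status(201).location('/search/' + id).end();
});

// GET /search/:id returns the stored search; a real API would run the
// stored query here and return its results.
app.get('/search/:id', (req, res) => {
  const search = searches[req.params.id - 1];
  if (!search) return res.sendStatus(404);
  res.json(search);
});

app.listen(5007);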

Wrong time format fetched

I am using Node to fetch data from MySQL. In the database, I have a record like 2013-08-13 15:44:53, but when Node fetches it from the database, it assigns a value like 2013-08-19T07:54:33.000Z.
I just need to get the time format as it is in the MySQL table. By the way, my column type is DATETIME in MySQL.
In Node:
connection.query(post, function (error, results, fields) {
  userSocket.emit('history :', {
    'dataMode': 'history',
    msg: results,
  });
});
When retrieving it from the database, you most likely get a Date object, which is exactly what you should work with (strings are only good for displaying dates; working on a string representation of a date is nothing you want to do).
If you need a certain string representation, create it based on the data stored in the Date object, or even better, get a library that adds a proper strftime-like method to its prototype.
The best choice for such a library is moment.js which allows you to do this to get the string format you want:
moment('2013-08-19T07:54:33.000Z').format('YYYY-MM-DD hh:mm:ss')
// output (in my case on a system using UTC+2 as its local timezone):
// "2013-08-19 09:54:33"
However, when sending it through a socket (which requires a string representation), it's a good idea to use the default one, since you can pass it to the new Date(..) constructor on the client side and get a proper Date object again.
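Putting the two halves together, a small sketch (assuming moment is installed via npm install moment) of sending the default ISO string and formatting only at display time:

const moment = require('moment');

// Server side: emit the default ISO representation.
const iso = new Date('2013-08-19T07:54:33.000Z').toISOString();

// Client side: rebuild a real Date, and format only for display.
const d = new Date(iso);
console.log(moment(d).format('YYYY-MM-DD HH:mm:ss')); // local-time display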

Should I Parse JSON Data Before Inserting To MongoDB?

So, I am receiving some JSON data from a client on my Node.js server. I want to insert that JSON into my MongoDB instance using Mongoose.
I can insert the JSON as-is, and it works great, because it's just text. However, I want to parse it before insertion so that when I extract it later it will be all nice and neat.
So, this works:
wordStream.words.push(wordData);
And this doesn't:
wordStream.words.push(JSON.parse(wordData));
So, should I even want to parse the JSON before insertion?
And if I should parse the JSON, how do I do it without throwing an error? I believe I need to put everything in double quotes "" before it will parse, but for some reason whenever I make a string with double quotes and parse it, everything turns out all wrong.
Here is the JSON:
{ word: 'bundle',
  definitions:
    [ { definition: 'A group of objects held together by wrapping or tying.',
        partOfSpeech: 'noun' } ],
  urlSource: 'testurl',
  otherSource: '' }
And the error when I try to parse it:
/Users/spence450/Documents/development/wordly-dev/wordly-server/node_modules/mongoose/lib/utils.js:409
throw err;
^
SyntaxError: Unexpected token o
Ideas?
So, should I even want to parse the JSON before insertion?
Converting the strings to objects will benefit you later, when you need to make queries against your MongoDB database.
And if I should parse the JSON, how do I do it without throwing an error? I need to put everything in double quotes "", I believe, before it will parse, but for some reason whenever I make a string with double quotes and parse it, it turns everything all wrong.
You aren't receiving valid JSON documents. In JSON, the keys must be quoted.
You can:
Use a library that tolerates invalid JSON objects (please don't)
Use eval (this is a security issue, so don't do it)
Fix the source of the problem, creating real JSON objects. This isn't difficult; you can see the JSON features here
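As an aside, that "Unexpected token o" error is the telltale sign that the value was already a JavaScript object: JSON.parse coerces its argument to a string, an object becomes "[object Object]", and the parser chokes on the 'o'. A minimal sketch of a guard, using the wordData / wordStream names from the question:

// If a JSON body parser already turned the payload into an object,
// parsing it again throws; only parse when it is actually a string.
const word = typeof wordData === 'string' ? JSON.parse(wordData) : wordData;
wordStream.words.push(word);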
