I want to define the length of a datatype in Sequelize.
Here is my source code:
var Profile = sequelize.define('profile', {
  public_id: Sequelize.STRING,
  label: Sequelize.STRING
})
It creates a table profiles with a public_id column of type varchar(255).
I would like to define public_id as varchar(32).
I searched the docs and Stack Overflow but couldn't find an answer...
How can I do that, please?
As mentioned in the documentation, use:
Sequelize.STRING(32)
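Applied to the model from the question, that looks like this (a minimal sketch):
var Profile = sequelize.define('profile', {
  public_id: Sequelize.STRING(32), // creates VARCHAR(32)
  label: Sequelize.STRING          // stays at the default VARCHAR(255)
});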
First, I think you need to rethink your design just a little bit. The basic point is that length constraints should be meaningful, not just there to save space. PostgreSQL does not store 'A'::varchar(10) any differently than it does 'A'::text (both are stored as variable length text strings, only as long as the value stored, along with a length specifier and some other metadata), so you should use the longest size that can work for you, and use the lengths for substantive enforcement rather than to save space. When in doubt, don't constrain. When you need to make sure it fits on a mailing label, constrain appropriately.
Secondly, Dankohn's answer above:
var Profile = sequelize.define('PublicID', {
  public_id: {
    type: Sequelize.STRING,
    validate: { len: [0, 32] }
  }
});
is how you would then add such enforcement to the front-end. Again, such enforcement should be based on what you know you need, not just what seems like a good idea at the time, and while it is generally easier to relax constraints than tighten them, for string length, it's really a no-brainer to do things the other way.
As for using such in other applications, you'd probably want to look up the constraint info in the system catalogs, which gets you into sort of advanced territory.
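For instance, a rough sketch of that kind of lookup against PostgreSQL's information schema, using Sequelize's raw-query API (the table and column names come from the question above; treat the exact call as an assumption):
sequelize.query(
  "SELECT character_maximum_length " +
  "FROM information_schema.columns " +
  "WHERE table_name = 'profiles' AND column_name = 'public_id'",
  { type: sequelize.QueryTypes.SELECT }
).then(function (rows) {
  // e.g. 32 once the column is defined as VARCHAR(32)
  console.log(rows[0].character_maximum_length);
});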
I have no idea if this is a good idea or really bad practice, so that's what I'd like to know.
Say I have a GraphQL server which sends back stuff like this:
[
  {
    name: "John"
    age: 55
    isSingle: false
    hasChildren: false
    lovesMusic: false # imagine more fields like these
  },
  # ...loads of records of the same format here
]
Say the file is large and there are lots of records: wouldn't it be a good idea to just strip out the values that are false and only send something back whenever isSingle / hasChildren / lovesMusic is true, in order to save bandwidth? Or is this a bad idea, making everything prone to error?
So in this example, I'd just send back:
[
  {
    name: "John"
    age: 55
  },
  ...
]
I am worried because I have large GraphQL objects coming back from my queries, and a lot of it is just fields that aren't filled out and that I thus won't need in my frontend.
To my understanding, the whole point of GraphQL is to receive exactly the specified set of types and fields - not more, not less.
Altering the results with some middleware or deleting fields containing 0, null, false or undefined would be a violation of the GraphQL specification. There's no point in using some kind of convention if you are going to violate it anyway.
Nevertheless, there are some options to consider for performance and bandwidth optimization:
using shorter names
You could use shorter field names in your GraphQL schema. You could even come up with some kind of compression algorithm of your own. For example, you could replace logical names with single characters and then map them back to regular names on the front-end.
{
  name: "John"
  age: 55
  isSingle: false
  hasChildren: false
  lovesMusic: false
}
would become something like this:
{
  n: "John"
  a: 55
  s: f
  hC: f
  lM: f
}
Then you could map the field names and values back on the front-end, sacrificing some performance of course. You just need a dictionary object in JavaScript which stores the shorter field names as keys and the longer ones as values:
const adict = {
  "hC": "hasChildren",
  "lM": "lovesMusic",
  "n": "name",
  ...
}
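To illustrate, a minimal sketch of mapping a compressed record back on the front-end (the expandRecord helper is made up for this example, and it assumes "f" stands in for false as in the compressed object above):
function expandRecord(record, dict) {
  const expanded = {};
  for (const shortKey of Object.keys(record)) {
    const value = record[shortKey];
    const longKey = dict[shortKey] || shortKey; // fall back to the short name
    expanded[longKey] = value === "f" ? false : value;
  }
  return expanded;
}

// expandRecord({ n: "John", a: 55, hC: "f" }, adict)
// -> { name: "John", age: 55, hasChildren: false }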
Although this approach will shrink your data and save you some bandwidth, it is not the best practice application-wise. Somehow I think it will make your code less flexible and less readable.
external libraries
There are quite a few external libraries that could help you reduce the data load, the most popular ones being MessagePack and Protocol Buffers. These are very efficient and cleverly built technologies, but researching them and integrating them into your solution might take some time.
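For example, a rough sketch of what MessagePack could look like on the Node.js side (assuming the @msgpack/msgpack package; treat the exact package and API as an assumption):
const { encode, decode } = require("@msgpack/msgpack");

const records = [
  { name: "John", age: 55, isSingle: false, hasChildren: false, lovesMusic: false }
];

// Encode to a compact binary representation before sending it over the wire...
const packed = encode(records);

// ...and decode it back into plain objects on the receiving end.
const unpacked = decode(packed);
console.log(unpacked[0].name); // "John"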
Is it a bad practice to use GraphQL input types for query arguments, like is commonly done in mutations? For example:
createPost(input: CreatePostInput!): CreatePostPayload
I see that a lot of APIs use separate queries to fetch an entity by different fields, for example
userByEmail(email: String!): User
userByName(name: String!): User
as opposed to
user(email: String, name: String): User
This makes sense, but some queries end up requiring more than one argument, for example paginated results. One might need, for example, to modify the start/end cursor, results per page, ordering and more, and the main query might need more than one argument to find the entities, so these queries end up with 5-6 different arguments.
The question is, why don't people use input types for these?
To the answer below, I can't help but wonder why people deem this
query ($category: String, $perPage: Int, $page: Int, $sortBy: String) {
  posts(category: $category, perPage: $perPage, page: $page, sortBy: $sortBy) {
    ...
  }
}
friendlier than this
query ($input: PostQueryInput) {
  posts(input: $input) {
    ...
  }
}
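For context, this is roughly how the second form would be sent from a client; the /graphql endpoint, the id/title fields and the example values are all made up for illustration:
const query = `
  query ($input: PostQueryInput) {
    posts(input: $input) {
      id
      title
    }
  }
`;

fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query,
    variables: { input: { category: "news", perPage: 10, page: 1, sortBy: "date" } }
  })
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.posts));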
Is it because the input types can only contain primitives? I find it really confusing why it's better in one case and worse in another.
I know that people are not forced to do this, you can do it as you like, but in the majority of GraphQL APIs I think it is not done like this, and I wonder why that might be - people must have a reason not to do it.
I have set the maxLength validation rule for one of the fields in one of my models, and it is recognized by Sails.js if my input is too long. My problem is that if I check my database, it shows a field length of 255 in my MySQL DB. I would expect it to be the same as my maxLength value.
I've found this on StackOverflow:
SailsJS - How to specify string attribute length without getting error when creating record?
Since that was posted, Sails.js has made a lot of progress, so I don't know if custom validation rules are still the way to go. I don't like that approach because it looks more like a temporary hack than a real solution.
Is this expected behavior or a bug?
I know you do not need the answer anymore, but maybe other people do.
You must use the size attribute for this purpose.
The docs say:
If supported in the adapter, can be used to define the size of the attribute. For example in MySQL, size can be specified as a number (n) to create a column with the SQL data type: varchar(n).
Just use it like this:
attributes: {
  name: {
    type: 'string',
    size: 24
  }
}
https://sailsjs.com/documentation/concepts/models-and-orm/attributes#size
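In a full model file this would sit roughly like the following sketch (the model name is hypothetical, and whether size takes effect still depends on the adapter):
// api/models/User.js (hypothetical model)
module.exports = {
  attributes: {
    name: {
      type: 'string',
      size: 24,      // column is created as VARCHAR(24) in MySQL, if the adapter supports size
      maxLength: 24  // keeps the validation from the question in line with the column size
    }
  }
};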
What is the preferred way to define an array in a Mongoose schema?
Here are the two ways I found, but I am unable to decide which one is best to use.
var mongoose = require('mongoose');

var DocumentSchema = new mongoose.Schema({
  wayOne: [
    {
      type: String
    }
  ],
  wayTwo: {
    type: [String]
  }
});
I would prefer the second way, because I would be able to do something like
wayTwo: {
  type: [String],
  enum: ['one', 'two', 'three'],
  default: []
}
and I don't know how to do this with the first way.
In short, I am looking at some old code I didn't write and saw both ways in use, so I was wondering if there was something to note about one of them, or if it would be safe to standardize by converting everything to the best way.
It depends on what sort of data you'll have in the array. The answer is subjective of course, because all of the ways you mention work. Which one is best, however, depends on the kind of data/structure you'll need for your model. Do you know the answer to that yet? Maybe with more specifics we can find a better/focused answer for you, but even then, it's still subjective because they all work.
edit
I would use the first one by the way.
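For what it's worth, the enum from the question can also be expressed in the first style, since the options inside the array brackets apply to each element (a sketch, not tested against a specific Mongoose version):
var mongoose = require('mongoose');

var DocumentSchema = new mongoose.Schema({
  wayOne: [
    {
      type: String,
      enum: ['one', 'two', 'three'] // validates each element of the array
    }
  ]
  // Mongoose initializes array paths to [] by default, so an explicit
  // default: [] is usually not needed here.
});

module.exports = mongoose.model('Document', DocumentSchema);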
I am playing around with node.js, express, and mongoose.
For the sake of getting something up and running right now, I am passing the Express query string object directly to a Mongoose find function. What I am curious about is how dangerous this practice would be in a live app. I know that an RDBMS would be extremely vulnerable to SQL injection. Aside from the good advice of "sanitize your inputs", how evil is this code:
app.get('/query', function (req, res) {
  models.findDocs(req.query, function (err, docs) {
    res.send(docs);
  });
});
Meaning that a GET request to http://localhost:8080/query?name=ahsteele&status=a would just shove the following into the findDocs function:
{
  name: 'ahsteele',
  status: 'a'
}
This feels icky for a lot of reasons, but how unsafe is it? What's the best practice for passing query parameters to mongodb? Does express provide any out of the box sanitization?
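For context, imagine findDocs as a thin wrapper along these lines (the model and schema here are made up), so whatever arrives in req.query goes straight into Mongoose's find:
// Hypothetical stand-in for models.findDocs, for illustration only.
var mongoose = require('mongoose');
var Doc = mongoose.model('Doc', new mongoose.Schema({ name: String, status: String }));

exports.findDocs = function (conditions, callback) {
  // The parsed query string is used verbatim as the MongoDB filter.
  Doc.find(conditions, callback);
};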
As far as injection being a problem, as with SQL, the risk is significantly lower... albeit theoretically possible via an unknown attack vector.
The data structures and protocol are binary and API driven rather than leveraging escaped values within a domain-specific-language. Basically, you can't just trick the parser into adding a ";db.dropCollection()" at the end.
If it's only used for queries, it's probably fine... but I'd still caution you to use a tiny bit of validation (a rough sketch follows below):
Ensure only alphanumeric characters (filter or invalidate nulls and anything else you wouldn't normally accept)
Enforce a max length (like 255 characters) per term
Enforce a max length of the entire query
Strip special parameter names starting with "$", like "$where" & such
Don't allow nested arrays/documents/hashes... only strings & ints
Also, keep in mind, an empty query returns everything. You might want a limit on that return value. :)
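A rough sketch of those checks as hypothetical Express middleware (the names, limits and regex are assumptions, not a vetted security filter):
function sanitizeQuery(req, res, next) {
  var MAX_TERM_LENGTH = 255; // max length per term
  var MAX_QUERY_TERMS = 10;  // crude cap on the size of the whole query

  var keys = Object.keys(req.query);
  if (keys.length > MAX_QUERY_TERMS) {
    return res.status(400).send('Too many query parameters');
  }

  for (var i = 0; i < keys.length; i++) {
    var key = keys[i];
    var value = req.query[key];

    // Strip special parameter names starting with "$", like "$where".
    if (key.charAt(0) === '$') {
      return res.status(400).send('Invalid parameter name');
    }

    // No nested arrays/documents/hashes... only plain strings here.
    if (typeof value !== 'string') {
      return res.status(400).send('Invalid parameter value');
    }

    // Alphanumeric only, with a max length per term.
    if (value.length > MAX_TERM_LENGTH || !/^[a-zA-Z0-9]*$/.test(value)) {
      return res.status(400).send('Invalid parameter value');
    }
  }

  next();
}

// app.get('/query', sanitizeQuery, function (req, res) { ... });
// Remember to cap what comes back too, e.g. with Mongoose's .limit(100),
// since an empty query matches everything.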
Operator injection is a serious problem here and I would recommend you at least encode/escape certain characters, more specifically the $ symbol: http://docs.mongodb.org/manual/faq/developers/#dollar-sign-operator-escaping
If the user is allowed to prepend a $ symbol to strings or elements within your $_GET or $_POST or whatever, they will quickly use that to pull off http://xkcd.com/327/ and you will be a goner, to say the least.
As far as I know, Express doesn't provide any out-of-the-box sanitization. You can either write your own middleware or do some basic checks in your own logic. And as you said, the case you mention is a bit risky.
But for ease of use, the required types built into Mongoose models at least give you the default sanitization and some control over what gets in and what doesn't.
E.g. something like this:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var Person = new Schema({
    title : { type: String, required: true }
  , age   : { type: Number, min: 5, max: 20 }
  , meta  : {
        likes : [String]
      , birth : { type: Date, default: Date.now }
    }
});
Check this for more info also.
http://mongoosejs.com/docs/2.7.x/docs/model-definition.html