Which Browser SQL Database to use?

Alright, so I need to implement a fairly large local database for the iOS and Android mobile browsers (~30 MB). I am researching the options and it looks like WebSQL (the option I wanted to use) is being actively abandoned. Also, it looks like IndexedSQL is not fully supported.
What do you recommend for a local browser database? Thanks!

IndexedDB together with IndexedDBShim (an IndexedDB polyfill built on top of WebSQL) certainly looks like it fulfills your requirement. But note that iOS devices allow a maximum of about 50 MB of storage.
I have worked on a similar requirement, and this combination worked across all modern browsers.
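Once the shim is loaded, you just use the standard IndexedDB API. Here is a minimal sketch (the database and store names are made up for illustration), assuming IndexedDBShim has been included via a script tag so that window.indexedDB is available even on WebSQL-only browsers:
var request = window.indexedDB.open("MyLocalDB", 1);

request.onupgradeneeded = function (event) {
    // Runs on first open (or on a version bump): create object stores here.
    var db = event.target.result;
    if (!db.objectStoreNames.contains("records")) {
        db.createObjectStore("records", { keyPath: "id" });
    }
};

request.onsuccess = function (event) {
    var db = event.target.result;
    var tx = db.transaction("records", "readwrite");
    tx.objectStore("records").put({ id: 1, name: "example" });
    tx.oncomplete = function () { db.close(); };
};

request.onerror = function (event) {
    console.error("IndexedDB open failed", event.target.error);
};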

I don't think you have any option other than IndexedDB. WebSQL is deprecated, and localStorage is too small and not performant enough to serve your needs.
I wrote a library that implements a LINQ-like interface. Using chained methods you can easily query the database. Example:
linq2indexeddb.from("store").where("field").equals("value").select()
Because IndexedDB is asynchronous, you will get back a promise.
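For example, consuming that result could look roughly like this ("store", "field" and "value" are placeholders, and the exact promise flavor the library returns is an assumption on my part):
linq2indexeddb.from("store")
    .where("field")
    .equals("value")
    .select()
    .then(function (results) {
        console.log("Matched records:", results);
    }, function (error) {
        console.error("Query failed:", error);
    });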
You can find my library on CodePlex.

I'm replying to this in 2016 (2 years after you asked this question) and everything concerning the deprecation of WebSQL still stands. IndexedDB on the other hand, enjoys the support of all of the major browser vendors.
Now would be a good time to state that "IndexedSQL" is neither an alternative name for IndexedDB, nor a name of any other existing client-side database :) . Pointing this out may seem a bit pedantic, but it's not: IndexedDB is a non-relational document-store, and as such does not natively support SQL.
Regardless of what you call it, IndexedDB is currently the only database on the W3C standards track, and as such is the only future-proof option for anyone tasked with choosing a client-side database.
As implied by GemK, however, such a decision isn't one that necessarily has to be made; one can simply choose (or make) a library which utilizes whichever database is available on a client machine.
BakedGoods differs from such libraries already suggested here in several ways; most pertinently, it allows the storage type(s) that are to be utilized to be explicitly specified, in turn allowing the developer to introduce other factors (such as performance characteristics) into the decision-making process.
With it, conducting storage operations in whichever of the database types is supported is a matter of...
... specifying the appropriate operation options and equivalent configs for both database types:
//If the operation is a set(), and the referenced structures
//don't exist, they will be created automatically.
var webSQLOptionsObj = {
    databaseName: "Example_DB",
    databaseDisplayName: "Example DB",
    databaseVersion: "",
    estimatedDatabaseSize: 1024 * 1024,
    tableData: {
        name: "Main",
        keyColumnName: "lastName",
        columnDefinitions: "(lastName TEXT PRIMARY KEY, firstName TEXT)"
    },
    tableIndexDataArray: [{name: "First_Name_Index", columnNames: "(firstName)"}]
};
var indexedDBOptionsObj = {
    databaseName: "Example_DB",
    databaseVersion: 1,
    objectStoreData: {
        name: "Main",
        keyPath: "lastName",
        autoIncrement: false
    },
    objectStoreIndexDataArray: [
        {name: "First_Name_Index", keyPath: "firstName", unique: false, multiEntry: false}
    ]
};
var optionsObj = {
    conductDisjointly: false,
    webSQL: webSQLOptionsObj,
    indexedDB: indexedDBOptionsObj
};
... and conducting the operation:
bakedGoods.set({
    data: [
        {value: {lastName: "Obama", firstName: "Barack"}},
        {value: {lastName: "Biden", firstName: "Joe"}}
    ],
    storageTypes: ["indexedDB", "webSQL"],
    options: optionsObj,
    complete: function(byStorageTypeStoredItemRangeDataObj, byStorageTypeErrorObj){}
});
Its simple interface and unmatched storage facility support come at the cost of a lack of support for some storage-facility-specific configurations. For instance, it does not support conducting storage operations in WebSQL tables with multi-column primary keys.
So if you make heavy use of those types of features, you may want to look elsewhere.
Oh, and for the sake of complete transparency, BakedGoods is maintained by yours truly :) .

Related

How do I know which fields are indexed in pouchdb if I use query() API?

I am new to PouchDB and I am reading the source code below:
db.query('product_index', {
    startkey: ["01234"],
    endkey: ["01234", {}],
    include_docs: false
});
This code takes a long time to execute. After reading some of the PouchDB documentation, it seems the index is built on the database the first time the query runs. But I don't understand which fields are indexed based on the code above.
With the code below I can see that it builds an index on the field foo. But how can I tell which fields the query() API indexes? And what is the difference between query() and createIndex() from an indexing perspective?
db.createIndex({
    index: {
        fields: ['foo']
    }
})
Have you seen the PouchDB Guide's Bulk operations section, "Please use 'allDocs()'. Seriously."?
Far too many developers overlook this valuable API, because they
misunderstand it. When a developer says "my PouchDB app is slow!", it
is usually because they are using the slow query() API when they
should be using the fast allDocs() API.
When designing your data structures it's very important to bear that in mind. You should define your record id fields to optimize data accessibility through allDocs().
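For example, if the product code is encoded into the _id when documents are written, a prefix range over allDocs() can replace the slow query() call entirely. The "product_" id scheme below is just an illustration:
// Hypothetical _id scheme: "product_<productCode>_<itemId>"
db.put({ _id: "product_01234_0001", name: "Widget" }).then(function () {
    // Fetch every doc whose _id starts with "product_01234_" -- no view index needed.
    return db.allDocs({
        startkey: "product_01234_",
        endkey: "product_01234_\ufff0",
        include_docs: true
    });
}).then(function (result) {
    console.log(result.rows.map(function (row) { return row.doc; }));
});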

CouchDB Read Configuration from design document

I would like to store a value in the config file and look it up in the design document for comparing against update values. I'm sure I have seen this but, for the life of me, I can't seem to remember how to do this.
UPDATE
I realize (after the first answer) that there was more than one way to interpret my question. Hopefully this example clears it up a little. Given a configuration:
curl -X PUT http://localhost:5984/_config/shared/token -d '"0123456789"'
I then want to be able to look it up in my design document
{
    "_id": "_design/loadsecrets",
    "validate_doc_update": {
        "test": function (newDoc, oldDoc) {
            if (newDoc.supersecret != magicobject.config.shared.token) {
                throw({unauthorized: "You don't know the super secret"});
            }
        }
    }
}
It's the ability to do something like the magicobject.config.shared.token lookup that I am looking for.
UPDATE 2
Another potentially useful (contrived) scenario
curl -X PUT http://trustedemployee:5984/_config/eventlogger/detaillevel -d '"0"'
curl -X PUT http://employee:5984/_config/eventlogger/detaillevel -d '"2"'
curl -X PUT http://vicepresident:5984/_config/eventlogger/detaillevel -d '"10"'
Then on devices tracking employee behaviour:
{
    "_id": "_design/logger",
    "updates": {
        "logger": function (doc, req) {
            if (!doc) {
                doc = {_id: req.id};
            }
            if (req.level < magicobject.config.eventlogger.detaillevel) {
                doc.details = req.details;
            }
            return [doc, req.details];
        }
    }
}
Here's a follow-up to my last answer with more general info:
There is no general way to use configuration, because CouchDB is designed with scalability, stability and predictability in mind. It has been designed using many principles of functional programming and pure functions, avoiding side effects as much as possible. This is a Good Thing™.
However, each type of function has additional parameters that you can use, depending on the context the function is called with:
show, list, update and filter functions are executed for each request, so they get the request object. Here you have the req.secObj and req.userCtx to (ab)use for common configuration. Also, AFAIK the this keyword is set to the current design document, so you can use the design doc to get common configuration (at least up to CouchDB 1.6 it worked).
view functions (map, reduce) don't have additional parameters, because the results of a view are written to disk and reused in subsequent calls. map functions must be pure (so don't use e.g. Math.random()). For shared configuration across view functions within a single design doc you can use CommonJS require(), but only within the views.lib key.
validate doc update functions are not necessarily executed within a user-triggered http request (they are called before each write, which might not be triggered only via http). So they have the userCtx and secObj added as separate parameters in their function signature.
So to sum up, you can use the following places for configuration:
userCtx for user-specific config. Use a special role (e.g. with a prefix) for storing small config bits. For example superLogin does this.
secObj for database-wide config. Use a special member name for small bits (as you should normally use roles instead of explicit user names, secObj.members.names or secObj.admins.names is a good place).
the design doc itself for design-doc-wide config. Best use the this.views.lib.config for this, as you can also read this key from within views. But keep in mind that all views are invalidated as soon as you change this key. So if the view results will stay the same no matter what the config values are, it might be better to use a this.config key.
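For example, here is a rough sketch of the design-doc approach, reading a shared views.lib.config module from a map function via CommonJS require() (the design doc name, field names and threshold value are made up):
{
    "_id": "_design/example",
    "views": {
        "lib": {
            "config": "module.exports = { minimumScore: 5 };"
        },
        "high_scores": {
            "map": "function (doc) { var config = require('views/lib/config'); if (doc.score >= config.minimumScore) { emit(doc._id, doc.score); } }"
        }
    }
}
As noted above, editing the config string changes the design doc's views, so the view index will be rebuilt.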
Hope this helps! I can also add examples if you wish.
I think I know what you're talking about, and if I'm right then what you are asking for is no longer possible. (at least in v1.6 and v2.0, I'm not sure when this feature was removed)
There was a lesser-known trick that allowed a view/show/list/validation/etc function to access the parent design document as this in your function. For example:
{
    "_id": "_design/hello-world",
    "config": {
        "PI": 3.14
    },
    "views": {
        "test": {
            "map": "function (doc) { emit(this.config.PI); }"
        }
    }
}
This was a really crazy idea, and I imagine it was removed because it created a circular dependency between the design document and the code of the view that made the process of invalidating/rebuilding a view index a very tricky affair.
I remember using this trick at some point in the distant past, but the feature is definitely gone now. (and likely to never return)
For your special use-case (validating a document with a secret token), there might be a workaround, but I'm not sure if the token might leak in some place. It all depends what your security requirements are.
You could abuse the 4th parameter to validate_doc_update, the securityObject (see the CouchDB docs) to store the secret token as the first admin name:
{
    "test": "function (newDoc, oldDoc, userCtx, secObj) {
        var token = secObj.admins.names[0];
        if (newDoc.supersecret != token) {
            throw({unauthorized:"You don't know the super secret"});
        }
    }"
}
So if you set the db's security object to {admins: {names: ["s3cr3t-t0k3n"], roles: ["_admin"]}}, you have to pass 's3cr3t-t0k3n' as the doc's supersecret property.
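Setting that security object could look something like this (hypothetical database name and admin credentials; the _security endpoint requires admin rights):
curl -X PUT http://admin:password@localhost:5984/mydb/_security \
    -d '{"admins": {"names": ["s3cr3t-t0k3n"], "roles": ["_admin"]}, "members": {"names": [], "roles": []}}'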
This is obviously a dirty hack, but as far as I remember the security object may only be read or modified by admins, so you wouldn't immediately leak your token to the public. Still, consider adding a separate layer between CouchDB and your caller if you need "real" security.

Using sequelize.js to interface against existing database?

I'm currently working on a project where our Node.js server will perform a lot of interactions against an existing MySQL database, so I'm wondering if Sequelize is a good library for interfacing with it. From what I've read, it is most often used as the master of the database, but in my case it will only have select, insert and delete access, not the ability to modify or create tables and so on. Does Sequelize support this method of interacting with a database?
If Sequelize does indeed work well for this, what settings do I need to disable to avoid running into trouble? After reading the documentation I could not find any global settings to turn it into a simple interface tool. Timestamps and the like can be disabled per table definition, but I did not see a way to do it globally. Any input is greatly appreciated.
There are a lot of questions in this post, I'll try to answer them all:
Disable timestamps globally:
new Sequelize(..., {
    define: {
        timestamps: false
    }
});
You can pass any define options to the sequelize constructor and they will be applied to all calls to sequelize.define
Mapping to an existing database
I'll try to describe some common cases here:
I want my model to have a different name to my database table:
sequelize.define('name of model', attributes, {
    tableName: 'name of table'
});
My database columns are called something different than the attributes in my model:
sequelize.define('name of model', {
    name_of_attribute_in_model: {
        type: ...,
        field: 'name of field in table'
    }
});
My primary key is not called id:
sequelize.define('name of model', {
    a_field_totally_not_called_id: {
        primaryKey: true, // also allows for composite primary keys, even though support for composite keys across associations is spotty
        autoIncrement: true
    }
});
My foreign keys are called something different
X.belongsTo(Y, { foreignKey: 'something_bla' });
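Putting those pieces together, a model mapped onto a hypothetical existing table might look like this (all table and column names below are made up for illustration; sequelize is the instance created above and Sequelize the imported module):
var LegacyUser = sequelize.define('legacyUser', {
    user_key: {
        type: Sequelize.INTEGER,
        field: 'USER_KEY',   // actual column name in the existing table
        primaryKey: true,
        autoIncrement: true
    },
    user_name: {
        type: Sequelize.STRING,
        field: 'USER_NAME'
    }
}, {
    tableName: 'TBL_USERS', // existing table name
    timestamps: false       // the legacy table has no createdAt/updatedAt columns
});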
disclaimer: I am a sequelize maintainer :). Overall I think we have pretty good support for working with legacy DBs. Feel free to ask more questions here or on irc://irc.freenode.net#sequelizejs

Difference between MongoDB and Mongoose

I wanted to use the MongoDB database, but I noticed that there are two different databases, each with their own website and installation method: mongodb and mongoose. So I came to ask myself: "Which one do I use?"
To answer that question, I'm asking the community if you could explain the differences between these two, and if possible their pros and cons? They really look very similar to me.
I assume you already know that MongoDB is a NoSQL database system which stores data in the form of BSON documents. Your question, however is about the packages for Node.js.
In terms of Node.js, mongodb is the native driver for interacting with a mongodb instance and mongoose is an Object modeling tool for MongoDB.
mongoose is built on top of the mongodb driver to provide programmers with a way to model their data.
EDIT:
I do not want to comment on which is better, as this would make this answer opinionated. However I will list some advantages and disadvantages of using both approaches.
Using mongoose, a user can define the schema for the documents in a particular collection. It provides a lot of convenience in the creation and management of data in MongoDB. On the downside, learning mongoose takes some time, and it has some limitations in handling schemas that are quite complex.
However, if your collection schema is unpredictable, or you want a Mongo-shell like experience inside Node.js, then go ahead and use the mongodb driver. It is the simplest to pick up. The downside here is that you will have to write larger amounts of code for validating the data, and the risk of errors is higher.
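To make the difference concrete, here is a rough sketch of the same insert done both ways (connection strings, database and collection names are placeholders, and the callback-style APIs shown match older mongoose and 3.x-era driver releases):
// Mongoose: define a schema/model first, then work with model instances.
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/shop');

var Product = mongoose.model('Product', new mongoose.Schema({
    name: { type: String, required: true },
    price: Number
}));

Product.create({ name: 'Notebook', price: 3.5 }, function (err, doc) {
    if (err) return console.error(err);
    console.log('Saved with schema validation:', doc._id);
});

// Native driver: no schema, you work with collections and plain objects.
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://localhost:27017', function (err, client) {
    if (err) return console.error(err);
    client.db('shop').collection('products').insertOne(
        { name: 'Notebook', price: 3.5 },
        function (err, result) {
            if (err) return console.error(err);
            console.log('Saved without validation:', result.insertedId);
            client.close();
        }
    );
});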
Mongo is a NoSQL database.
If you don't want to use any ODM for your data models, you can use the native driver directly: https://github.com/mongodb/node-mongodb-native.
Mongoose is one of the ODMs that give us the functionality to access MongoDB data with easily understandable queries.
Mongoose acts as a layer of abstraction over your database model.
One more difference I found is that it is fairly easy to connect to multiple databases with the mongodb native driver, while with mongoose you have to use workarounds that still have some drawbacks.
So if you want to build a multi-tenant application, go for the mongodb native driver.
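For instance, with a recent (3.x-style) native driver one client connection can hand out handles to several databases; the database and collection names below are placeholders:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017', function (err, client) {
    if (err) return console.error(err);
    // One connection pool, several tenant databases.
    var tenantA = client.db('tenant_a');
    var tenantB = client.db('tenant_b');
    tenantA.collection('orders').find({}).toArray(function (err, docsA) {
        tenantB.collection('orders').find({}).toArray(function (err, docsB) {
            console.log('tenant_a orders:', docsA.length, '| tenant_b orders:', docsB.length);
            client.close();
        });
    });
});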
From the first answer,
"Using Mongoose, a user can define the schema for the documents in a particular collection. It provides a lot of convenience in the creation and management of data in MongoDB."
You can now also define a schema with the MongoDB native driver, using $jsonSchema validation:
## For a new collection
db.createCollection("recipes", {
    validator: { $jsonSchema: {
        <<Validation Rules>>
    }}
})

## For an existing collection
db.runCommand({
    collMod: "recipes",
    validator: { $jsonSchema: {
        <<Validation Rules>>
    }}
})
## Full example
db.createCollection("recipes", {
    validator: {
        $jsonSchema: {
            bsonType: "object",
            required: ["name", "servings", "ingredients"],
            additionalProperties: false,
            properties: {
                _id: {},
                name: {
                    bsonType: "string",
                    description: "'name' is required and is a string"
                },
                servings: {
                    bsonType: ["int", "double"],
                    minimum: 0,
                    description: "'servings' is required and must be an integer with a minimum of zero."
                },
                cooking_method: {
                    enum: [
                        "broil", "grill", "roast", "bake", "saute", "pan-fry",
                        "deep-fry", "poach", "simmer", "boil", "steam", "braise", "stew"
                    ],
                    description: "'cooking_method' is optional but, if used, must be one of the listed options."
                },
                ingredients: {
                    bsonType: ["array"],
                    minItems: 1,
                    maxItems: 50,
                    items: {
                        bsonType: ["object"],
                        required: ["quantity", "measure", "ingredient"],
                        additionalProperties: false,
                        description: "'ingredients' must contain the stated fields.",
                        properties: {
                            quantity: {
                                bsonType: ["int", "double", "decimal"],
                                description: "'quantity' is required and is of double or decimal type"
                            },
                            measure: {
                                enum: ["tsp", "Tbsp", "cup", "ounce", "pound", "each"],
                                description: "'measure' is required and can only be one of the given enum values"
                            },
                            ingredient: {
                                bsonType: "string",
                                description: "'ingredient' is required and is a string"
                            },
                            format: {
                                bsonType: "string",
                                description: "'format' is an optional field of type string, e.g. chopped or diced"
                            }
                        }
                    }
                }
            }
        }
    }
});
Insert example:
db.recipes.insertOne({
    name: "Chocolate Sponge Cake Filling",
    servings: 4,
    ingredients: [
        {
            quantity: 7,
            measure: "ounce",
            ingredient: "bittersweet chocolate",
            format: "chopped"
        },
        { quantity: 2, measure: "cup", ingredient: "heavy cream" }
    ]
});
mongodb and mongoose are two different packages for interacting with a MongoDB database.
Mongoose: an object data modeling (ODM) library that provides a rigorous modeling environment for your data. Used to interact with MongoDB, it makes life easier by providing convenience in managing data.
mongodb: the native Node.js driver for interacting with MongoDB.
The mongodb native driver is likely not a great choice for new developers.
On the other hand, mongoose, as an ODM (Object Document Mapper), can be a better choice for newcomers.
If you are planning to use these components alongside your proprietary code, please consider the following licensing information.
Mongodb:
It's a database.
This component is governed by the Affero General Public License (AGPL).
If you link this component with your proprietary code, you have to release your entire source code, because of its copyleft ("viral") effect, similar to the GPL and LGPL.
If you are hosting your application in the cloud, the previous point still applies, and you also have to make your installation information available to end users.
Mongoose:
It's an object modeling tool.
This component is governed by the MIT license.
You are allowed to use it alongside proprietary code without restrictions.
Shipping your application on any medium or host is allowed.
Mongoose is built on top of the mongodb driver; the mongodb driver is more low-level. Mongoose provides an easy abstraction for defining a schema and querying, but on the performance side the mongodb driver is better.
Mongodb and Mongoose are two completely different things!
Mongodb is the database itself, while Mongoose is an object modeling tool for Mongodb
EDIT: As pointed out, mongodb here refers to the npm package (the driver), not the database itself, thanks!
MongoDB (the mongodb package) is the official MongoDB Node.js driver, which allows Node.js applications to connect to MongoDB and work with data.
Mongoose, on the other side, is another library built on top of the MongoDB driver. It is easier to understand and use. If you are a beginner, mongoose is the better one to start with.

Querying JsonRest without HTTP requests for data

I'm using OnDemandGrid with a JsonRest store to retrieve data from a RESTful API and show it in a table. The table is rather complex, and all of the JsonRest CRUD methods are used.
Here is the basic structure I'm using:
JsonRest:
...
var restStore = Observable(Cache(JsonRest({
    target: "source",
    idProperty: "id"
}), Memory()));
...
OnDemandGrid:
...
var grid = new (declare([OnDemandGrid, Selection, Keyboard]))({
    sort: "name",
    store: restStore,
    columns: [
        {field: "name", label: "Name"},
        {field: "state", label: "State"},
        {field: "city", label: "city"}
    ],
    loadingMessage: "Loading data...",
    noDataMessage: "No data"
}, "grid");
grid.startup();
...
I want to filter the data on the client side without sending HTTP requests. Can you give me some ideas on how to solve this?
My own research:
The dgrid tutorial makes it clear that this all depends on the dojo store:
When dgrid interacts with a store, all paging, filtering, and sorting responsibilities fall upon the store, not the grid. ... When encountering data rendering issues, always check that the store implementation (and backend service, if applicable) are performing as expected.
So this means I have to resolve the issue on the store side. I suppose I have to extend the QueryResults of the JsonRest store, but I keep hitting a wall.
I have also thought about querying against the Cache, but then I lose the JsonRest functionality...
If you're basically interested in initially retrieving a data payload from your service all at once up-front but then do all sorting/filtering/paging client-side, have a look at dojo-smore/RequestMemory - you pass it a url and it basically acts like a Memory store once it fetches data from the URL, except its methods return promises rather than immediate values.
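A rough sketch of that approach is below; note that the RequestMemory constructor options shown (url, idProperty) are assumptions on my part, so check them against the dojo-smore documentation:
require([
    "dojo-smore/RequestMemory",
    "dgrid/OnDemandGrid",
    "dgrid/Selection",
    "dgrid/Keyboard",
    "dojo/_base/declare"
], function (RequestMemory, OnDemandGrid, Selection, Keyboard, declare) {
    // Fetches "source" once over HTTP, then behaves like a Memory store:
    // filtering, sorting and paging happen client-side, and methods return promises.
    var restStore = new RequestMemory({ url: "source", idProperty: "id" });

    var grid = new (declare([OnDemandGrid, Selection, Keyboard]))({
        sort: "name",
        store: restStore,
        columns: [
            {field: "name", label: "Name"},
            {field: "state", label: "State"},
            {field: "city", label: "city"}
        ]
    }, "grid");
    grid.startup();

    // Client-side filtering, no extra HTTP request:
    grid.set("query", { state: "CA" });
});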
