Prevent insert on Client from Server's `allow` - node.js

I have this isServer code:
coll.allow({ insert: isAllowed });
On the client, you can perform an insert. There is a template that uses something like:
Template.foo.coll = function () { return coll.find({}); };
This displays all of the coll documents in the template as intended.
However, if isAllowed returns false, the client will still insert the new document in its own collection (seemingly) even though the server does not.
What this means is that if you insert a new item, it will appear on the client until you refresh (because it won't be loaded from the server since it was not inserted there).
How can I prevent the insert from occurring on the client as well as the server? Is it safe to move coll.allow to isClient instead of isServer, or do I need it for both?
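One approach that may be worth trying (a minimal sketch, assuming isAllowed is a plain function you can put in shared code; the names follow the question): declare the collection and the allow rule where both client and server load them, and reuse the same predicate on the client before inserting, so the client never creates a local, latency-compensated document that the server is going to reject.

// shared code, loaded on both client and server
coll = new Meteor.Collection('coll');

// the same predicate referenced in the question
var isAllowed = function (userId, doc) {
  // ... your actual rule here ...
  return false;
};

coll.allow({ insert: isAllowed });

if (Meteor.isClient) {
  // guard the client-side insert with the same rule, so no local
  // document is created that will never exist on the server
  var tryInsert = function (doc) {
    if (isAllowed(Meteor.userId(), doc)) {
      coll.insert(doc);
    }
  };
}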

Related

NodeJs,ExpressJS Running functions in background

I'm asking this question because I don't know what to look for right now, and my googling hasn't been great so far.
I am making a Node.js/Express/SQL app that scrapes a website. It takes 30 to 120 seconds to scrape a whole category. How do I make that function run in the background without blocking the website? The frontend template engine is EJS. If it's not possible with EJS, which framework or library should I use instead? I imagine it working like this:
User goes to /scrape
Chooses a category and sends it to the server by clicking a button
Some container on /scrape gets greyed out with a rotating spinner, a percentage, or something similar
The user can freely leave /scrape and click around the website, or just stay on /scrape waiting for the result
When the user comes back to /scrape the results are there, or if they stayed, the results show up with or without reloading the page
A full answer to these questions would be very helpful, but even just keywords for me to look up would also help a lot.
Sorry for the bad English.
For your case you could use Redis, or just store the scraped data in a data structure of your choice (in my opinion, because of the categories, hashmaps (JS objects) are the best fit here) directly in Node.js. The process would then look like this:
User goes to /scrape and selects a category
Backend checks whether that category was already scraped (e.g. checks for the data in the hashmap with the category name as key)
If the data exists (just check if the key is defined), send it to the user; otherwise (if the data isn't stored, e.g. key == undefined), send the user a message that the data is being scraped and run the scrape function in the background. The scrape function then scrapes the data and, when done, pushes it into the hashmap under the category key. To avoid the same category being scraped twice at the same time, you could add a "pending" property to the hashmap entry. So when the user hits the /scrape route, you check the hashmap for the category key: if it exists and pending is false, send the data; if it exists and pending is true, send a wait alert; if the key doesn't exist, start the scrape function and send a wait alert.
Additionally, to make the whole thing "live", you could use socket.io (https://socket.io/) to implement websockets. You could then push the scraped data to the user without the user having to reload the page to check whether the scrape process is done (see the sketch after the example below).
I made a little example that doesn't implement scraping, but it should make the whole logic here a little easier to understand. I also added some explanation to the code in the form of comments.
const express = require("express");
const app = express();

// the data hashmap
const data = {};

// scrape function
const scrape = async (id) => {
  // set pending to true to prevent multiple scrapes of the same category
  data[id] = { pending: true, data: {} };
  // this would be your scrape function; I used a promise here that
  // resolves after 5 seconds with a random number, just for simplicity
  const a = await new Promise((res, rej) => {
    setTimeout(() => { res(Math.floor(Math.random() * 1000)); }, 5000);
  });
  // once the data has been scraped, set pending to false and add the data
  data[id].pending = false;
  data[id].data = { id: a };
};

// "scrape" route
app.get("/:id", async (req, res) => {
  const { id } = req.params; // id would represent the category
  // check if the id (category) is not yet in the hashmap; if not,
  // start the scrape process and send a wait alert
  if (data[id] == undefined) {
    scrape(id);
    res.send("scraping...");
  // if the data is already being scraped, send a wait alert;
  // the pending property prevents multiple people triggering
  // the scrape of the same category
  } else if (data[id].pending == true) {
    res.send("still scraping...");
  // lastly, if the data is defined and not pending, just send it
  } else {
    res.send(data[id].data);
  }
});

// To test this, go to the root with any id (it could be a string, a
// number, whatever, e.g. /1337 or /helloworld), wait for 5 seconds (or
// leave and come back after 5 seconds), refresh the page and you will
// see the random number. If you now go to another route (e.g. /test)
// and come back to the first one, you can still see the data; if you
// again wait 5 seconds and go back to /test, you will see its data too.
// You can also open multiple tabs at the same time: the scraping is
// asynchronous, so you don't have to wait for one category to be
// scraped before scraping the next.
app.listen(5000);
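As a rough illustration of the socket.io suggestion above (a sketch only; the scrape:done event name and payload shape are made up for this example), the scrape function could emit to all connected clients when it finishes, so the page can update without a reload:

const express = require("express");
const http = require("http");
const { Server } = require("socket.io");

const app = express();
// socket.io needs to attach to the http server, not the express app
const server = http.createServer(app);
const io = new Server(server);

const data = {};

const scrape = async (id) => {
  data[id] = { pending: true, data: {} };
  // stand-in for the real scrape work, as in the example above
  const a = await new Promise((res) => {
    setTimeout(() => res(Math.floor(Math.random() * 1000)), 5000);
  });
  data[id] = { pending: false, data: { id: a } };
  // notify every connected client that this category is ready
  io.emit("scrape:done", { category: id, data: data[id].data });
};

app.get("/:id", (req, res) => {
  const { id } = req.params;
  if (data[id] === undefined) {
    scrape(id);
    res.send("scraping...");
  } else if (data[id].pending) {
    res.send("still scraping...");
  } else {
    res.send(data[id].data);
  }
});

// listen on the http server so socket.io shares the same port
server.listen(5000);

On the client, a page script loading /socket.io/socket.io.js would then listen for scrape:done and fill in the results without a reload.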

Using Node.js & NForce, Retrieve Records on Startup

I am attempting to modify this code:
https://github.com/jeffdonthemic/node-streaming-socketio/blob/master/app.js
So that, as well as setting up the streaming, it also performs a query on startup to retrieve some records to display to the user. I am attempting to place code like so:
var q = 'SELECT Id, Name, CreatedDate, BillingCity FROM Account';

sfdc.query({ query: q }, function(err, resp) {
  if (!err && resp.records) {
    // do something with the records, display them to the client
  }
});
That is, after the connection and OAuth have been set up. However, I am unsure where to place that code in the lifecycle of the main application (app.js). I am also unsure how to pass the records along once I have them, so that they can be displayed on the client side. I keep getting this error when placing the code at various places in app.js:
TypeError: Cannot read property 'instance_url' of undefined
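That error usually means the query runs before the OAuth response exists. A minimal sketch of one way to sequence it, assuming nforce's username/password flow (the connection settings and credentials are placeholders, and depending on your nforce version you may need to pass the oauth object explicitly as shown, or rely on single-user mode): run the query inside the authenticate callback, where the oauth object is actually available.

var nforce = require('nforce');

// placeholder connection settings
var sfdc = nforce.createConnection({
  clientId: 'YOUR_CLIENT_ID',
  clientSecret: 'YOUR_CLIENT_SECRET',
  redirectUri: 'http://localhost:3000/oauth/_callback'
});

sfdc.authenticate({ username: 'user', password: 'pass' }, function(err, oauth) {
  if (err) return console.error(err);
  // only here is the oauth object (with its instance_url) available
  var q = 'SELECT Id, Name, CreatedDate, BillingCity FROM Account';
  sfdc.query({ query: q, oauth: oauth }, function(err, resp) {
    if (!err && resp.records) {
      // hand the records to the client, e.g. emit them over the
      // same socket.io channel the streaming example already uses
    }
  });
});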

Meteor cannot retrieve data from MongoDB

Pretty straightforward and without any special configuration:
In the project directory I entered the command:
$ meteor mongo to access the mongodb
From there (the mongo shell), I switched to the meteor db using the command use meteor, then entered some basic data to test:
j = { name: "mongo" }
k = { x: 3 }
db.testData.insert(j)
db.testData.insert(k)
I checked and got results by entering: db.testData.find()
Here's my meteor code, given that mongodb access is only required on the client:
if (Meteor.isClient) {
  Template.hello.greeting = function () {
    return "Welcome to test.";
  };

  Template.hello.events({
    'click input' : function () {
      // template data, if any, is available in 'this'
      if (typeof console !== 'undefined')
        console.log("You pressed the button");
    }
  });

  Documents = new Meteor.Collection('testData');

  var document = Documents.find();
  console.log(document);

  var documentCbResults = Documents.find(function(err, items) {
    console.log(err);
    console.log(items);
  });
}
Upon checking in the browser, and based on the logs, it says undefined. I was unsuccessful in retrieving data from mongodb and showing it in the client console.
What am I missing?
For this answer I'm going to assume this is a newly created project with autopublish still on.
As Christian pointed out, you need to define Documents on both the client and the server. You can easily accomplish this by just putting the collection definition at the top of the file or in another file which isn't in either of the server or client directories.
An example which prints the first two test documents could look like this:
Documents = new Meteor.Collection('testData');

if (Meteor.isClient) {
  Template.hello.greeting = function () {
    return "Welcome to apui.";
  };

  Template.hello.events({
    'click input' : function () {
      var documents = Documents.find().fetch();
      console.log(documents[0]);
      console.log(documents[1]);
    }
  });
}
Note the following:
The find function returns a cursor. This is often all you want when writing template code. However, in this case we need direct access to the documents in order to print them, so I used fetch on the cursor. See the documentation for more details.
When you first start the client, the server will read the contents of the defined collections and sync all documents to the client's local minimongo database (if you have autopublish on). I placed the find inside the click event to hide that sync time. In your code, the find would have executed the instant the client started, and the data probably would not have arrived in time.
Your method of inserting initial items into the database works (you don't need use meteor, by the way); however, mongo will default to using an ObjectId instead of a string as the _id. There are subtle ways this can be annoying in a meteor project, so my recommendation is to let meteor insert your data if at all possible. Here is some code that will ensure the testData collection has some documents:
if (Meteor.isServer) {
  Meteor.startup(function() {
    if (Documents.find().count() === 0) {
      console.log('inserting test data');
      Documents.insert({name: "mongo"});
      Documents.insert({x: 3});
    }
  });
}
Note this will only execute if the collection has no documents in it. If you ever want to clear out the collection you can do so via the mongo console. Alternatively you can drop the whole database with:
$ meteor reset
It's not enough to only define collections on the client side. Your mongo db lives on the server, and your client needs to get its data from somewhere. It doesn't get it directly from mongodb (I think), but via syncing with the collections on the server.
Just define the Documents collection in the joint scope of client and server. You may also need to wait for the subscription to Documents to complete before you can expect content. So this is safer:
Meteor.subscribe('testData', function() {
  var document = Documents.find();
  console.log(document);
});
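For completeness, a subscription like this only returns data once the server publishes something under that name (with autopublish removed). A minimal sketch of the matching server side, mirroring the 'testData' name used in the subscribe call above:

if (Meteor.isServer) {
  // publish the whole collection under the name the client subscribes to
  Meteor.publish('testData', function() {
    return Documents.find();
  });
}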

Node.js server side connection to Socket.io

I have a Node.js application with a frontend app and a backend app. The backend manages the list and "pushes" an update to the frontend app; the call to the frontend app triggers a list update so that all clients receive the correct list data.
The problem is on the backend side. When I press the button, I perform an AJAX call, and that AJAX call runs the following code (I trimmed some operations out of it):
Lists.findOne({_id: active_settings.active_id}, function(error, lists_result) {
  var song_list = new Array();
  for (i = 0; i < lists_result.songs.length; i++) {
    song_list.push(lists_result.songs[i].ref);
  }

  Song.find({
    '_id': {$in: song_list}
  }, function(error, songs) {
    // DO STUFF WITH THE SONGS
    // UPDATE SETTINGS (code trimmed)
    active_settings.save(function(error, updated_settings) {
      list = {
        settings: updated_settings,
      };

      var io = require('socket.io-client');
      var socket = io.connect(config.app_url);
      socket.on('connect', function () {
        socket.emit('update_list', {key: config.socket_key});
      });

      response.json({
        status: true,
        list: list
      });
      response.end();
    });
  });
});
However, the response.end never seems to work; the call keeps hanging. Furthermore, the list doesn't always get refreshed, so there is an issue with the socket.emit code. And the socket connection stays open, I assume because the response isn't ended?
I have never done this server side before, so any help would be much appreciated. (The active_settings etc. exist.)
I see some issues that might or might not be causing your problems:
list isn't properly scoped: since you don't prefix it with var, you're essentially creating a global variable which might get overwritten when multiple requests are handled at the same time;
response.json() calls .end() itself; calling response.end() again yourself doesn't hurt, but it isn't necessary;
since you're not closing the socket(.io) connection anywhere, it will probably stay open forever;
it would be more appropriate not to set up a new socket.io connection for each request, but to create one at app startup and re-use it (see the sketch below).
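A minimal sketch of that last point, assuming the same config object from the question (the file layout and names are placeholders): create the connection once in its own module and require it from the request handlers.

// socket.js: created once at app startup, then shared
var io = require('socket.io-client');
var config = require('./config'); // placeholder: wherever config lives

var socket = io.connect(config.app_url);

socket.on('connect', function () {
  console.log('connected to frontend app');
});

module.exports = socket;

The handler then just emits on the shared connection and ends the response:

// in the request handler: re-use the shared connection
var socket = require('./socket');

// inside the save callback from the question
socket.emit('update_list', {key: config.socket_key});
response.json({
  status: true,
  list: { settings: updated_settings }
});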

Inserting data to MongoDB - no error, no insert

I'm currently trying to make a registration form using mongoDB and nodeJS. I've created a new database and collection, and I want to store username, password, email and insert_time in my database.
I've added unique indexes on username/email and checked that they work: I cannot add a duplicate entry using mongo's console or rockmongo (a PHP mongodb manager). So that part works fine.
However, when the piece of code that is supposed to register a new account executes an insert with data that is already in the database, it returns an object containing all the data that was supposed to be added, with a new unique id. The point is, it should return an error saying that entries cannot be duplicated and the insert failed; instead it returns the data back and gives it a new id. The data that already resides in the database remains untouched; even the ID stays the same and is not overwritten with the new one returned by the script's insert.
So, the question is... what am I doing wrong? Or maybe everything is fine and the database's insert should return data even if it failed?...
I even tried defining the indexes right before executing the inserts.
I tried inserting the data using mongoDB's default functions and mongoJS functions; the result is the same in both cases.
The code I'm trying to execute (for mongoJS):
var dbconn = require("mongojs").connect('127.0.0.1:27017/db', ['users']);

var register = function(everyone, params, callback)
{
  // TODO: validation
  dbconn.users.ensureIndex({username: 1}, {unique: true});
  dbconn.users.ensureIndex({email: 1}, {unique: true});
  dbconn.users.save(
    {
      username: params.username,
      password: params.password,
      email: params.email,
      insert_time: Date.now()
    },
    function(error, saved)
    {
      if (error || !saved)
      {
        callback(false);
      }
      else
      {
        console.log(error);
        console.log(saved);
        callback(true);
      }
    });
}
In both cases (inserting new data, and inserting duplicated data that doesn't modify the database in any way) error is null and saved is just a copy of the data that was supposed to be inserted. Is there any way to check whether the insert was made, or do I have to check manually whether the data already exists in the database before/after making the insert?
Mongo works that way. You have to tell it you want errors back, using the safe option when you issue the save command (by default it uses the "fire and forget" method).
https://docs.mongodb.com/manual/reference/command/getLastError/
This looks to be basically the same problem as MongoDB unique index does not work, but with the JavaScript API rather than Java. That is, saving without either specifying the "safe" flag or explicitly checking the last error value is unsafe: the ID is generated client side and the command dispatched, but it might still fail (e.g. due to a unique index violation). You need to explicitly get the last error status.
http://www.mongodb.org/display/DOCS/dbshell+Reference#dbshellReference-ErrorChecking suggests db.getLastError() in the command shell, and I assume the node API is identical where they can possibly make it so.
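Applied to the register function above, a rough sketch of what that means in practice (hedged: it assumes a mongojs version that forwards write-concern options to the driver; 11000 is MongoDB's standard duplicate-key error code):

// pass a write concern so the callback actually receives errors,
// then check for MongoDB's duplicate-key error code (11000)
dbconn.users.insert(
  {
    username: params.username,
    password: params.password,
    email: params.email,
    insert_time: Date.now()
  },
  {w: 1}, // acknowledge the write instead of "fire and forget"
  function(error, saved)
  {
    if (error && error.code === 11000)
    {
      // unique index violation: username or email already taken
      callback(false);
    }
    else if (error || !saved)
    {
      callback(false);
    }
    else
    {
      callback(true);
    }
  });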
Same problem solved here; I had just forgotten that the insert is async, and that any process.exit in node stops the data from being added.
var lineReader = require('readline').createInterface({
  input: require('fs').createReadStream('parse.txt')
});

lineReader.on('line', function (line) {
  console.log(line);
  db.insert(line, {w: 1}, function(e, d) { if (e) throw e; });
});

// the inserts are still running when 'close' fires, so calling
// process.exit() here cuts them off (this cost me two days)
lineReader.on('close', function() {
  // ----->>>>>> process.exit(); <<<<<----------
});
