The goal:
Get comet messages to work with overridden controller methods like UPDATE, CREATE, or DESTROY.
When I don't override UPDATE in my SailsJS controller, I do get the comet message via socket.on('message').
The code is this:
# listening socket for new apps
socket.on "message", (data) ->
  console.log data
  if data.model is "application"
    viewModel.apps.push(new App(data.data))
  if data.model is "configuration"
    # rewrite this
    return
But this is never fired when I override a method, in my case UPDATE.
No matter what response I return, it never fires; I'm using res.json().
Thanks
Take a look at the blueprint hook for update: in your custom actions you have to handle the pub/sub of models yourself.
https://github.com/balderdashy/sails/blob/master/lib/hooks/blueprints/actions/update.js
You need to call Model.publishUpdate and Model.subscribe.
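As a rough sketch (the Application model and the attribute handling are assumptions, not taken from your app), a custom update action that keeps the comet messages flowing might look like this:
// api/controllers/ApplicationController.js -- a sketch of a custom update
// action that re-adds the publish step the update blueprint normally does
module.exports = {
  update: function (req, res) {
    var values = req.body;
    Application.update(req.param('id'), values).exec(function (err, updated) {
      if (err) return res.serverError(err);
      if (!updated.length) return res.notFound();

      // Notify every socket subscribed to this record, as the blueprint would
      Application.publishUpdate(updated[0].id, values, req);

      return res.json(updated[0]);
    });
  }
};
Clients still have to be subscribed to those records first, e.g. via the find blueprint over the socket or an explicit Model.subscribe in a socket request.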
I have a json-server running in a small node.js project.
I can PATCH and PUT the existing data and will GET the updated values in return later. So far so good.
But for some of these operations, I also need to broadcast an MQTT message with the updated object.
Of course I could implement my own handlers all the way, like
server.put('/somepath', (req, res) => {
  data.something = req.body
  mqttClient.publish('somepath', JSON.stringify(data.something))
  res.status(200).send()
})
But I'd like to take advantage of the built-in logic of json-server that automatically mutates my data on POST/PATCH/PUT/DELETE requests, and still be able to broadcast the new document over MQTT after the mutation is done.
Is it possible to do this in a smarter, generic way instead of implementing a request handler for each single endpoint?
Thanks in advance for any tips :)
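For what it's worth, one generic approach would be json-server's router.render hook, which runs after the router has applied the mutation and exposes the result on res.locals.data. A sketch only; the broker URL, db file, and topic naming are placeholders:
const jsonServer = require('json-server');
const mqtt = require('mqtt');

const server = jsonServer.create();
const router = jsonServer.router('db.json');
const mqttClient = mqtt.connect('mqtt://localhost');

server.use(jsonServer.defaults());
server.use(jsonServer.bodyParser);

// router.render runs after json-server has mutated the data;
// res.locals.data holds the resulting document
router.render = (req, res) => {
  if (['POST', 'PUT', 'PATCH', 'DELETE'].includes(req.method)) {
    mqttClient.publish(req.path, JSON.stringify(res.locals.data));
  }
  res.jsonp(res.locals.data);
};

server.use(router);
server.listen(3000);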
I'm trying to catch when a user leaves my Meteor application (version 1.2.0.2), something equivalent to SocketIO's disconnect() on the server side.
The user could close the browser, go to another website, or simply refresh the page, and it should fire anyway.
Surprisingly, I've been searching the Internet and everything is mixed up; nothing works properly. I thought Meteor was literally built on this magic live processing, so it must handle this event one way or another.
The Iron Router documentation specifies this:
onStop: Called when the route is stopped, typically right before a new
route is run.
I also found Router.load and Router.unload, but none of them work. This is my current (not working) code, which is quite simple:
Router.configure
  layoutTemplate: 'MasterLayout'
  loadingTemplate: 'Loading'
  notFoundTemplate: 'NotFound'

Router.onStop (->
  console.log('Try to stop')
  Users.insert({
    name: "This is a test"
    lat: 0
    lng: 0
  })
)
Am I doing something wrong here? How can I catch this event in my app?
You need to attach to the onStop of the route, not the router. For instance:
Router.route('/', {
  onStop: function() {
    console.log("someone left the '/' route");
  }
});
Another option is to use the onStop event of subscriptions. That is probably the option most similar to the SocketIO disconnect you mentioned. You can find an example of that in the typhone source code.
There were two working solutions; I found the second and better one by searching the API documentation for a while.
First solution: working with subscribe & publish
Anywhere on the controller / front-end side, you must subscribe to a collection:
# in coffee
@subscribe('allTargets')

// in javascript
this.subscribe('allTargets')
Afterwards you just have to publish and add an onStop listener. This example uses a Targets collection I already defined somewhere before; it just gets all the entries.
# in coffee
Meteor.publish 'allTargets', ->
  @onStop ->
    # Do your stuff here
  return Targets.find()

// in javascript
Meteor.publish('allTargets', function() {
  this.onStop(function() {
    // Do your stuff here
  });
  return Targets.find();
});
You have to be careful not to return Targets.find() before you set the onStop listener, too. I don't think it's a perfect solution, since you don't listen to the connection itself but to the changes of a collection.
Second solution: working with the DDP connection
I realized through the Meteor API documentation that we can directly listen to the connection and see if someone disconnects from the server side.
To stay well-organized and clean within my Meteor Iron project, I added a new file at app/server/connection.coffee and wrote this code:
# in coffee
Meteor.onConnection (connection) ->
  connection.onClose ->
    # Do your stuff

// in javascript
Meteor.onConnection(function(connection) {
  connection.onClose(function() {
    // Do your stuff
  });
});
You can manage data with connection.id, which is the unique identifier of your browser tab. Both solutions are working well for me.
If you use Meteor.userId through the accounts system, you can't use it outside a method on the server side, so I had to find a workaround with the connection.id.
If anyone has a better solution for managing connections while getting this kind of client data, don't hesitate to give your input.
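For reference, a sketch of that kind of workaround (the Connections collection and the method name are made up for illustration): record the connection.id from inside a method, where this.connection and this.userId are both available, then look it up again in onClose.
// in javascript
Connections = new Mongo.Collection('connections');

Meteor.methods({
  registerConnection: function () {
    // this.connection and this.userId are both available inside a method
    Connections.upsert(
      { connectionId: this.connection.id },
      { $set: { connectionId: this.connection.id, userId: this.userId } }
    );
  }
});

Meteor.onConnection(function (connection) {
  connection.onClose(function () {
    var entry = Connections.findOne({ connectionId: connection.id });
    if (entry) {
      // entry.userId is the user who just left
      Connections.remove({ connectionId: connection.id });
    }
  });
});
The client calls Meteor.call('registerConnection') once after login, so the mapping exists before the tab closes.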
In my Sails project, I have a User model/controller and a Request model/controller, as well as a Dashboard controller. A user can make a request for data using RequestController.create, and an administrator can approve it using RequestController.grant.
What I want to do is to notify a user whenever one of his/her requests is approved (updated). In RequestController.grant, I call Request.publishUpdate(...), and in my DashboardController.display, I call
Request.find(req.session.user.id, function(err, requests) {
  ...
  Request.subscribe(req, requests, ['update'])
  ...
});
Then, in the view /dashboard/display, I put this in <script> tags:
<script>
  // Socket handling for notifications
  io.socket.on("request", function(obj) {
    alert(obj.verb);
    alert(obj.data);
    alert(obj.previous);
  });
</script>
However, upon approving a user's request and going to the dashboard, no alerts show up. The script for sails.io is already loaded, with no errors in the console. Am I doing something wrong?
The likely problem here is that you're using Request.subscribe in an HTTP call. I'm assuming that's the case since it seems like you're probably using DashboardController.display to display a view. In this case, Request.subscribe doesn't do anything at all (it should really display a warning) because it can't possibly know which socket to subscribe!
If you'd like to keep your controller logic the same (you might be using the requests array as a view local to bootstrap some data on the page), that's fine; a quick refactor would be to test whether or not it's a socket call in the action:
// Also note the criteria for the `find` query below--just sending an
// ID won't work for .find()!
Request.find({user: req.session.user.id}, function(err, requests) {
  ...
  // If it's a socket request, subscribe and return success
  if (req.isSocket) {
    Request.subscribe(req, requests, ['update']);
    return res.send(200);
  }
  // Otherwise continue on displaying the view
  else {
    ...
    return res.view('dashboard', {requests: requests});
  }
});
Then somewhere in your view call io.socket.get('/dashboard/display') to subscribe to request instances.
You could also use blueprints to subscribe to the requests from the front end, doing something like io.socket.get('/request?user='+userId), if you put the user ID in the locals and add it somewhere in your view template, e.g. <script>var userId=<%=user.id%></script>.
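Putting the client side together, a sketch (the user local, the route, and the EJS-style template syntax are assumptions based on the suggestions above):
<script>
  var userId = <%= user.id %>;
  // Requesting the route over the socket is what subscribes this socket
  io.socket.get('/request?user=' + userId, function (data, jwres) {
    console.log('subscribed to ' + data.length + ' requests');
  });
  // Fires for published changes on subscribed request records
  io.socket.on('request', function (obj) {
    alert(obj.verb); // 'updated' when a request is granted
  });
</script>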
In my node.js application I handle errors with domains.
The architecture of the app looks like this:
Controllers. Express routing calls controller methods
Controllers call services and use models
Services call repositories
(Actually it's quite similar to DDD.)
So controllers create a domain and run the actual body inside the domain. Services throw exceptions if something goes wrong, and controllers listen for domain errors and process them. It's very convenient because I don't have to worry about carrying an error up through the whole call stack of the services' methods; I just throw an exception and can be sure it will be caught in the controller.
But I have run into a problem connected with using PostgreSQL.
I use the node-postgres module, and I create the pg.Client in a separate js file, so the pg.Client is shared by everybody (otherwise, creating a pg.Client for each query would open lots of active connections to Postgres).
The problem is that when the pg.Client is defined in a separate file, it's like a global object and is not included in the domain scope created in the controllers. So exceptions thrown from pg.Client callbacks are not caught by the domain.
I will show a simplified flow of request processing to make it clear. Let's say the user wants to get a login by userId:
Somewhere in the beginning, the pg.Client is created
A GET request comes to express, and express calls a routing method
Routing calls one of the controller's methods
The controller creates a domain and calls a service inside «domain.run»
The service calls a repository
The repository takes the pg.Client created earlier and calls the sql query method
The result of the sql query is passed to a callback (so this callback is called by the pg.Client)
Then the callback is processed in the service, where we see that we got null instead of a user model (because there is no such user in the db for the given userId) and throw an exception
So that exception is not caught by the domain.
Technically, we could use the «add» method of the node domain and add the pg.Client to it, but as I said, the pg.Client is shared, hence it would end up added to different concurrent domains.
I will give a code example to make it clearer.
This is a simplified method of UserService:
login: function (email) {
  var userRepository = new UserRepository();
  userRepository.findByEmail(email, function (model) {
    if (model == null)
      throw new Error('No such user');
  });
}
So, that «login» method is called inside the domain. But it creates a userRepository and calls the findByEmail method, which uses the shared pg.Client lying outside the domain's scope, and that's why the exception is not caught by the domain.
Any ideas how to fix this and get the pg.Client into the domain?
I solved the issue.
I create an EventEmitter in the scope of the active domain and listen for, let's say, an «onCallback» event.
In the callback of the query method (which is not connected to the domain, because the pg.Client is kind of global and lies outside the domain), I emit «onCallback» on the EventEmitter.
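A minimal sketch of the trick (the setTimeout stands in for the shared pg.Client's query callback): an EventEmitter created inside domain.run() is implicitly bound to that domain, so an exception thrown in one of its listeners is routed to the domain's error handler even when emit is called from outside the domain.
var domain = require('domain');
var EventEmitter = require('events').EventEmitter;

var d = domain.create();
d.on('error', function (err) {
  // Plays the role of the controller's domain error handler
  console.log('Caught by domain:', err.message);
});

var emitter;
d.run(function () {
  // Created inside d.run(), so the emitter is implicitly bound to d
  emitter = new EventEmitter();
  emitter.on('onCallback', function (model) {
    if (model == null)
      throw new Error('No such user'); // routed to d's 'error' handler
  });
});

// Scheduled outside the domain -- a stand-in for the shared pg.Client's
// callback. Instead of throwing here, we re-emit into the domain-bound
// emitter, whose listeners run inside d.
setTimeout(function () {
  emitter.emit('onCallback', null);
}, 10);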
I'm working on a socket.io based server/client connection instead of ajax.
The client uses Backbone, and I've overridden the Backbone.sync function with a half-assed one of my own:
Backbone.sync = function (method, collection, options) {
  // use the window.io variable that was attached on init
  var socket = window.io.connect('http://localhost:3000');
  // emit the collection/model data with standard ajax method names and options
  socket.emit(method, {collection: collection.name, url: collection.url});
  // create a model in the collection for each frame coming in through that connection
  socket.on(collection.url, function (socket_frame) {
    collection.create(socket_frame['model']);
  });
};
Instead of ajax calls, I simply emit through the socket attached to the window.io global var. The server listens to those emits and responds based on the model url; I don't want to change that behaviour, and I use the default crud method names (read, patch...) inside each emitted frame. The logic behind it (a bit far-fetched, but who knows) is that if the client doesn't support WebSockets, I can easily fall back to default jQuery ajax. I attached the original Backbone.sync to a var so I can pass the same arguments to it when no websocket is available.
All that greatness behaves properly, and the server answers the client events. The server then emits each model's data as a separate websocket frame over one connection.
I see the frames in the Network/Websocket filter as one (concurrent/established) connection, and things seem to be working.
Currently the function assumes I pass a collection and not a model.
Questions:
Is that approach OK with you?
How can I use the socket.io callbacks for 'success' and 'failure' etc. in Backbone the right way, so I don't have to call the collection.create function by hand? (See the sketch below.)
Is it better to establish different concurrent connections for models/collections, or to use the one already established?
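On the second question, one possibility (not authoritative) is to use socket.io acknowledgements and hand the result to Backbone through options.success / options.error. A sketch, where the response shape ({error, models}) is an assumption about the server's reply, not socket.io API:
Backbone.sync = function (method, collection, options) {
  var socket = window.io.connect('http://localhost:3000');
  socket.emit(
    method,
    {collection: collection.name, url: collection.url},
    // socket.io acknowledgement: the server triggers this by invoking
    // the callback argument of its event handler
    function (response) {
      if (response && response.error) {
        if (options.error) options.error(response.error);
      } else {
        // Backbone's fetch wraps options.success, so the collection is
        // set/merged for you; no manual collection.create needed
        if (options.success) options.success(response.models);
      }
    }
  );
};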