Sencha Touch 2.1 Store Load Memory Leak

I am having a problem with loading a store and leaking memory. I have a store that I need to load every 5 seconds, and I am using DelayedTask to perform the polling. This app requires the polling and will run for long periods of time. The store pulls back a fairly large JSON dataset, and after a couple of hours memory usage has hit 500 MB. I perform the polling in a controller.
I have stripped out all logic down to just loading the store. Regardless of whether I use DelayedTask or setInterval, the leak is the same. I've tracked it down to the store.load logic. At least, I think I have. :)
I also removed the callback from the store load and performed the task.delay in the load event listener. The leak still persists.
So I do not know whether I am doing this wrong and introducing closures, or whether this is a bug.
I also used Ext.Ajax to pull the data every 5 seconds. The memory leak is still there, but it is much smaller.
Any help appreciated!
Model:
Ext.define('fimobile.model.myModel', {
    extend: 'Ext.data.Model',
    config: {
        fields: [
            {name: 'a', type: 'string'},
            {name: 'b', type: 'string'},
            {name: 'c', type: 'string'},
            {name: 'd', type: 'string'},
            {name: 'e', type: 'string'},
            {name: 'f', type: 'string'},
            {name: 'g', type: 'string'},
            {name: 'h', type: 'string'},
            {name: 'i', type: 'string'},
            {name: 'j', type: 'string'}
        ]
    }
});
Store:
Ext.define('fimobile.store.myStore', {
    extend: 'Ext.data.Store',
    config: {
        storeId: 'myStoreID',
        model: 'fimobile.model.myModel', // must match the model's full class name defined above
        proxy: {
            type: 'ajax',
            url: url,
            reader: {
                type: 'json',
                rootProperty: 'data',
                successProperty: 'success'
            }
        },
        autoLoad: true
    }
});
Controller:
Ext.define('fimobile.controller.myController', {
    extend: 'Ext.app.Controller',
    config: {
        views: ['myView'],
        models: ['myModel'],
        stores: ['myStore'],
        refs: {
        },
        control: {
            'myView': {
                // use a method name string: `this` is not the controller
                // when this config object is evaluated
                initialize: 'start'
            }
        }
    },
    start: function () {
        // keep the task on the controller instead of leaking a global
        this.task = Ext.create('Ext.util.DelayedTask', function () {
            this.getData();
        }, this);
        this.task.delay(5000);
    },
    getData: function () {
        // look the store up by its storeId ('myStoreID' above)
        Ext.getStore('myStoreID').load({
            scope: this,
            callback: function (records, operation, success) {
                console.log('callback');
                this.task.delay(5000);
            }
        });
    }
});
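For reference, a minimal sketch of the Ext.Ajax polling variant mentioned above (an illustration, not my exact code; it assumes the same url the store's proxy uses and discards the decoded response after use):
getData: function () {
    var me = this;
    Ext.Ajax.request({
        url: url, // same url as the store's proxy
        success: function (response) {
            var json = Ext.decode(response.responseText);
            // ... use json.data here without keeping a reference to it ...
            me.task.delay(5000);
        },
        failure: function () {
            me.task.delay(5000);
        }
    });
}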

Why don't you post this on the Sencha bugs forum? That would be a more sensible place to report a bug.

Related

Node.js: Create a BinaryRow Data Structure from a JSON String Saved in Redis?

I am reading from a Redis cache that stores the user data as a JSON string.
However, the Node.js application sends the data in the format below.
I want to re-create the exact same structure from the Redis JSON string, but I am struggling to understand how to re-create the 'BinaryRow' here.
If I do
util.inspect(user_info)
the final output needs to look like the one below.
user: [
BinaryRow {
user_id: 7558073,
country_id: 191,
city_id: 1975002,
name: 'iphone',
birth: 1980-09-25T18:30:00.000Z,
mode: 'active',
gender: 'M'
}
],
country: [ BinaryRow { country_title: 'Australia' } ],
city: [ BinaryRow { city_title: 'Gampahas' } ],
photo: [
BinaryRow {
photo_id: 100813,
visible: 'Y',
ref: 'ssss'
}
]
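One hedged approach (a sketch, not a confirmed solution): util.inspect prints an object's constructor name, so wrapping each parsed row in a class named BinaryRow reproduces the printed shape. The redisJson variable below is a hypothetical stand-in for the string read from Redis:
const util = require('util');

// util.inspect shows the constructor name, so a class called BinaryRow
// makes plain objects print as `BinaryRow { ... }`.
class BinaryRow {
  constructor(row) {
    Object.assign(this, row);
  }
}

// `redisJson` is hypothetical: the JSON string fetched from Redis.
const parsed = JSON.parse(redisJson);
const user_info = {
  user: parsed.user.map((row) => new BinaryRow(row)),
  country: parsed.country.map((row) => new BinaryRow(row)),
  city: parsed.city.map((row) => new BinaryRow(row)),
  photo: parsed.photo.map((row) => new BinaryRow(row))
};
// Note: date fields come back from JSON.parse as strings;
// convert with new Date(...) if the original rows held Date objects.
console.log(util.inspect(user_info));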

Gremlin NodeJS query returns different results than Groovy

I have a query that I originally wrote in the console:
g.V().hasLabel('group')
.has('type', 'PowerUsers')
.local(__.union(
__.project('group').by(__.valueMap().by(__.unfold())),
__.inE().outV().project('user').by(__.valueMap().by(__.unfold())))
.fold()).unfold().toList()
I get something like:
==>{group={owner=A, group_id=21651399-91fd-4da4-8608-1bd30447e773, name=Group 8, type=PowerUsers}}
==>{user={name=John, user_id=91f5e306-77f1-4aa1-b9d0-23136f57142d}}
==>{user={name=Jane, user_id=7f133d0d-47f3-479d-b6e7-5191bea52459}}
==>{group={owner=A, group_id=ef8c81f7-7066-49b2-9a03-bad731676a8c, name=Group B, type=PowerUsers}}
==>{user={name=Max, user_id=acf6abb8-08b3-4fc6-a4cb-f34ff523d628}}
==>{group={owner=A, group_id=07dff798-d6db-4765-8d74-0c7be66bec05, name=Group C, type=PowerUsers}}
==>{user={name=John, user_id=91f5e306-77f1-4aa1-b9d0-23136f57142d}}
==>{user={name=Max, user_id=acf6abb8-08b3-4fc6-a4cb-f34ff523d628}}
When I run that query with NodeJS, I expected to get a similar result, but I don't. I get something like this:
[ { group:
{ owner: 'A',
group_id: '21651399-91fd-4da4-8608-1bd30447e773',
name: 'Group 8',
type: 'PowerUsers' } },
{ user:
{ name: 'John',
user_id: '91f5e306-77f1-4aa1-b9d0-23136f57142d'} },
{ user:
{ name: 'John',
user_id: '91f5e306-77f1-4aa1-b9d0-23136f57142d'} },
{ user:
{ name: 'Jane',
user_id: '7f133d0d-47f3-479d-b6e7-5191bea52459'} },
{ user:
{ name: 'Jane',
user_id: '7f133d0d-47f3-479d-b6e7-5191bea52459'} },
{ group:
{ owner: 'A',
group_id: 'ef8c81f7-7066-49b2-9a03-bad731676a8c',
name: 'Group B',
type: 'PowerUsers' } },
{ user:
{ name: 'Max',
user_id: 'acf6abb8-08b3-4fc6-a4cb-f34ff523d628' } },
...
Because I have the same users in different groups, I can't use dedup(). If the results were the same in NodeJS as in Groovy, that would be perfect. Unfortunately, they are not, and I don't understand why the results in NodeJS are all mixed up, considering that the query is exactly the same.
I feel like you could just use a nested project() to keep your group with your users:
g.V().hasLabel('group')
.has('type', 'PowerUsers')
.project('group', 'users')
.by(__.valueMap().by(__.unfold()))
.by(__.in().project('user').by(__.valueMap().by(__.unfold()).fold()))
This way you don't have to worry about ordering anything. I think it might be more Gremlin-esque, though, to use the "group" as a key in a Map with the values being a list of users:
g.V().hasLabel('group')
.has('type', 'PowerUsers')
.group()
.by(__.valueMap().by(__.unfold()))
.by(__.in().project('user').by(__.valueMap().by(__.unfold()).fold()))
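As a hedged sketch of how the grouped version might be submitted from Node.js (connection details are assumptions; note that in() is spelled in_() in the JavaScript driver because `in` is a reserved word):
const gremlin = require('gremlin');
const { traversal } = gremlin.process.AnonymousTraversalSource;
const __ = gremlin.process.statics;

async function fetchGroups() {
  const conn = new gremlin.driver.DriverRemoteConnection('ws://localhost:8182/gremlin');
  const g = traversal().withRemote(conn);
  const result = await g.V().hasLabel('group')
    .has('type', 'PowerUsers')
    .group()
    .by(__.valueMap().by(__.unfold()))
    .by(__.in_().project('user').by(__.valueMap().by(__.unfold()).fold()))
    .next();
  // The driver deserializes the Gremlin Map into an ES6 Map:
  // keys are group valueMaps, values are lists of user projections.
  await conn.close();
  return result.value;
}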

Elasticsearch English language analyzer with Node.js

This is my first time trying to implement Elasticsearch using the AWS hosted service with Node.js. I referred to the official docs and came up with this:
//1. create index
client.indices.create({ index: 'products' });

//2. mapping
client.indices.putMapping({
  index: 'products',
  type: 'products',
  body: {
    properties: {
      product_name: {
        type: 'string',
        analyzer: 'english'
      },
      product_description: {
        type: 'string',
        analyzer: 'english'
      }
    }
  }
});

//3. import documents..
product_name: Testing

//4. search
client.search({
  index: 'products',
  type: 'products',
  q: keywords,
  analyzer: 'english',
  size: 10
});
Now, if I search for 'Testing' or 'Token Testing', this returns 1 result. But if I pass 'Test' or 'Tist' as the keyword, it seems the analyzer isn't being applied and I get no results.
Update: added analyzer: 'english' according to @Val's answer, but still no results for 'Test' or 'Tist'.
That's because when using the q parameter, a query_string query is made and the default analyzer of your input string is the standard analyzer.
You have two choices here:
A. Include the field name in your query string:
client.search({
  index: 'products',
  type: 'products',
  q: 'product_name:' + keywords, // <--- modify this
  size: 10
});
B. Specify english as the query-time analyzer to produce the same tokens as has been previously indexed:
client.search({
  index: 'products',
  type: 'products',
  q: keywords,
  analyzer: 'english', // <--- or add this
  size: 10
});
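A third option, offered as a hedged sketch beyond the original two: skip the q shortcut entirely and send a match query in the request body. A match query against product_name automatically applies the analyzer configured in the field's mapping at query time:
// Sketch: a match query respects the `english` analyzer mapped on the field.
client.search({
  index: 'products',
  type: 'products',
  body: {
    query: {
      match: { product_name: keywords }
    }
  },
  size: 10
});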

Making MongoDB Update/Write Queries Faster

I am looking for the most efficient way to store real-time data in MongoDB for a MeanJS web application I am working on.
I have the following example schema:
SomeModel: {
  name: {
    type: String,
    default: '',
    required: 'Please Enter Name',
    trim: true
  },
  data: {
    type: Schema.Types.Mixed,
    default: {}
  },
  data_keys: {
    type: Schema.Types.Mixed,
    default: {}
  },
  websocket_url: {
    type: String,
    default: ''
  },
  created: {
    type: Date,
    default: Date.now
  },
  user: {
    type: Schema.ObjectId,
    ref: 'User'
  }
}
The 'data' field may hold data like the following, but it depends on the 'model' being subscribed to; each model's data may have a slightly different format.
data: {
  balance: {
    currentBalance: 100,
    availableBalance: 80,
    /* Additional Account Details */
  },
  orders: [{
    /* Some Array of Order Details */
  }],
  /* Additional Data Properties */
}
For each 'someModel' object I am trying to connect to a websocket server, subscribe to updates and then write them to the database.
I am trying to use something like this:
some_ws = new WebSocket(someModel.websocket_url);
some_ws.on('message', function incoming(msg) {
  var message = JSON.parse(msg);
  try {
    // Update 'someModel.data' in memory.
    Object.keys(message['data']).forEach(function(key) {
      someModel.data[key] = message['data'][key];
    });
    // Write out to Database.
    SomeModel
      .update({_id: someModel._id}, {data: someModel.data, data_keys: someModel.data_keys})
      .exec(function (err, nItems) {
        if (err) {
          console.log("ERROR Saving SomeModel Data: %s", err);
        } else {
          // console.log("Saved Data for: %s", someModel.name);
        }
      });
  } catch (exception) {
    console.log(clc.red("Exception Caught: %s"), util.inspect(exception));
    console.log(clc.cyan("DEBUG:: Message: %s"), util.inspect(message));
  }
});
I'm finding that I get almost continuous updates from the websocket connection, and the update queries are slowing down the read queries that the front end of the application needs to make.
I'd like to be able to store the 'current' data for the model in this 'someModel.data' object, and then every minute write to a 'model_log' collection with a snapshot of the data associated with the model at that particular time:
eg:
model_log schema: {
  model: {
    type: Schema.ObjectId, // unquoted: a quoted 'Schema.ObjectId' string would be invalid
    ref: 'SomeModel'
  },
  data: {
    /* Model Data */
  },
  timestamp: {
    type: Date,
    default: Date.now
  }
}
so I can do: model_log.find({'timestamp': { $gte: startDate, $lte: endDate } });
and get back:
[
{
model: ObjectId('someModelId'),
data: {
someData: someValues,
otherData: otherValues,
},
timestamp: March 15, 2016 12:00 AM
},
{
model: ObjectId('someModelId'),
data: {
someData: someNewValues,
otherData: otherNewValues,
},
timestamp: March 15, 2016 12:01 AM,
},
...
]
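One hedged way to take pressure off the reads is to stop writing on every websocket message and instead keep the latest data in memory, flushing it on an interval. A sketch of that approach (the flush interval and the ModelLog model name are assumptions; the one-minute snapshot described above follows the same pattern):
// Sketch: buffer updates in memory, flush to MongoDB on a timer instead
// of on every websocket message.
var dirty = false;

some_ws.on('message', function incoming(msg) {
  var message = JSON.parse(msg);
  Object.keys(message.data).forEach(function (key) {
    someModel.data[key] = message.data[key];
  });
  dirty = true; // defer the write
});

// Flush the current state every 5 seconds (interval is an assumption).
setInterval(function () {
  if (!dirty) { return; }
  dirty = false;
  SomeModel
    .update({_id: someModel._id}, {data: someModel.data, data_keys: someModel.data_keys})
    .exec(function (err) {
      if (err) { console.log('ERROR Saving SomeModel Data: %s', err); }
    });
}, 5000);

// Once a minute, snapshot into model_log (ModelLog is a hypothetical model).
setInterval(function () {
  ModelLog.create({model: someModel._id, data: someModel.data}, function (err) {
    if (err) { console.log('ERROR Writing Snapshot: %s', err); }
  });
}, 60000);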
How can I make this more efficient or make these write/update operations faster?
Thanks,
There are a few options:
1. Split the load by using collection sharding.
2. Create a replica set, where a secondary server can answer read queries while the primary is responsible for serving data and pushing changes.
3. Using the WiredTiger storage engine, put the collection in memory (I'm unsure whether this can be done in the community edition).
4. Use an SSD to reduce write latency.
5. Switch to WiredTiger (it uses document-level locking instead of collection-level locking).
Options 3 and 5 can be tested in isolation on a dev machine.

SailsJS: nothing happens after calling the save method

Sorry, this is my first Node.js app. I'm using SailsJS, and this is my code:
update: function (req, res) {
  Cargo.findOne({id: req.param('id')}).exec(function (err, result) {
    result.currency = 2;
    if (err) {
      console.log(err);
    }
    result.save(function (err, model) {
      console.log(err);
      console.log(model);
      console.log('in');
    });
    console.log("testing");
  })
},
I keep getting "testing" in my console log, and no logs at all from inside the save callback.
Am I doing it wrong?
Found the issue causing it not to go into the .save method.
My model code is as follows:
module.exports = {
  tableName: "cargo",
  attributes: {
    remark: {
      type: 'text'
    },
    shipper: {
      type: 'integer',
      integer: true,
      model: 'Company',
      columnName: "shipperID"
    },
    consignee: {
      type: 'integer',
      integer: true,
      columnName: "consigneeID",
      model: 'Company'
    },
    packing: {
      type: 'integer',
      integer: true,
      columnName: "packingID",
      equals: function (value) {
        return value;
      }
    },
    commodity: {
      type: 'string',
      maxLength: 512
    },
    status: {
      type: 'string',
      maxLength: 25
    },
    dateIn: {
      type: 'datetime'
    },
    currency: {
      type: 'integer'
    },
    amt: {
      type: 'float'
    },
    marking: {
      type: 'string',
      maxLength: 25,
      required: true
    },
    createdAt: {
      columnName: 'createdDate'
    },
    updatedAt: {
      columnName: 'lastModifiedDate'
    },
    packages: {
      collection: 'CargoPackage',
      via: 'cargo'
    }
  }
};
It's due to the 'packing' attribute having this 'equals' function, which causes the save method to go into an infinite loop or something similar. I thought this was a validator for making sure the value is what I want, but I think it's built for other purposes.
For others with similar issues: keep the model you are working on as basic as possible, so you can rule it out as the thing holding up the process.
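A minimal sketch of the corrected attribute, with the custom function removed (assuming no extra validation is needed; the columnName is kept from the model above):
// The 'packing' attribute without the problematic `equals` function.
packing: {
  type: 'integer',
  integer: true,
  columnName: "packingID"
}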
