I'm building an application using Sails and every time I leave the server running for more than a few minutes, my CPU jumps to a solid 100% usage. I'm including a large number of LESS files in my assets and I believe my issue lies there. Are there any other reasons this might happen?
It could be grunt-watch; when you have a lot of files it can eat your CPU. Try disabling it and check whether your CPU returns to normal usage (6-30%, depending on your CPU and overall load).
To do that, go to tasks/register/default.js and remove 'watch' from the array.
module.exports = function (grunt) {
  grunt.registerTask('default', ['compileAssets', 'linkAssets', 'watch']);
};
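With 'watch' removed, the registration would look like this:
module.exports = function (grunt) {
  grunt.registerTask('default', ['compileAssets', 'linkAssets']);
};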
If you don't want to completely disable the grunt watcher, then go to tasks/config/watch.js and try excluding the folder that has most of your files, or exclude them all if they are not in a particular folder.
I'll give you an example of how to exclude a folder for this task. Just add a ! before the path you want to exclude.
module.exports = function(grunt) {
  grunt.config.set('watch', {
    // Some config you can ignore in this case
    assets: {
      // Assets to watch:
      files: [
        'assets/**/*',
        'tasks/pipeline.js',
        '!**/node_modules/**',
        '!assets/folder-to-exclude/**' // <-- HERE IS THE EXCLUDED PATH
      ],
      // More code
    }
  });
  grunt.loadNpmTasks('grunt-contrib-watch');
};
I had a similar issue and this worked for me, let me know if it works.
According to the docs, one can increase the default async timeout from 5000 ms using the Jest object, more specifically by calling jest.setTimeout(timeout).
The issue I am facing is that I am running a series of tests against an API that is very slow (5-15 second response times), and configuring this Jest object at the top of each test file is painfully annoying.
Is it possible to declare these settings once before all test files are run?
Jest offers a testTimeout configuration option you can add to your package.json:
"jest": {
"testTimeout": 15000,
}
OK, putting bits together:
Option "setupTestFrameworkScriptFile" was replaced by configuration "setupFilesAfterEnv", which supports multiple paths
https://jestjs.io/docs/en/jest-object#jestsettimeouttimeout
https://jestjs.io/docs/en/jest-object#jestdisableautomock
The Jest search box doesn't actually return anything when you search for: setupFilesAfterEnv
And the docs talk about setupTestFrameworkScriptFile (which also doesn't return anything in the search :/ )
Anyway, the docs leave you scratching your head but this works:
jest.config.js:
module.exports = {
  setupFilesAfterEnv: ['./setup.js'],
};
setup.js:
jest.setTimeout(10000); // in milliseconds
The jest folks should make it easier to find this information.
Use testTimeout. In your jest.config.js (or similar), add the following:
const SECONDS = 1000;

module.exports = {
  testTimeout: 60 * SECONDS
};
If you are working with React and initialized your app using create-react-app, then under your src/ directory you should have a file named setupTests.js. There you can set a global timeout for all of your tests just by inserting this line after the import statement for @testing-library:
jest.setTimeout(15000); // in milliseconds
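For example, the whole file might look like this (the exact import line is whatever create-react-app generated for you, so treat it as an assumption):
// src/setupTests.js
import '@testing-library/jest-dom';

jest.setTimeout(15000); // applies to every test in the project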
When running a test locally it succeeds, but when running against a remote grid it fails with:
1) Scenario: Login - features/api.feature:10
Step: When he enters his credentials - features/api.feature:13
Step Definition: node_modules/serenity-js/src/serenity-cucumber/webdriver_synchroniser.ts:46
Message:
function timed out after 5000 milliseconds
How can I increase the timeout value?
Thanks & Ciao
Stefan
Hi Stefan and thanks for giving Serenity/JS a try!
You have a couple of options here, depending on what is timing out.
As it's Protractor that's in charge of the timeouts, you'll need to look into your protractor.conf.js file.
Let's assume that your protractor.conf.js file looks more or less like the snippet below. I omit the Serenity/JS and Cucumber.js config for brevity as they're described at serenity-js.org:
exports.config = {
  baseUrl: 'http://your.webapp.com',
  // Serenity/JS config
  framework: ...
  specs: [ 'features/**/*.feature' ],
  cucumberOpts: {
    // ...
  },
};
0. Increasing the overall timeout
To start with, you might want to increase the overall timeout of all the tests (for Protractor 5.0.0 the default value is set to 11s).
To do this, add the allScriptsTimeout entry to your config:
exports.config = {
  allScriptsTimeout: <appropriate_timeout_in_millis>,
  // ... rest of the config file
}
1. Loading the page
If the webapp under test is slow to load, you can tweak the getPageTimeout property (default set to 10s):
exports.config = {
  getPageTimeout: <appropriate_timeout_in_millis>,
  // ... rest of the config file
}
2. A specific Cucumber step
If a specific Cucumber step is timing out (which is most likely the case here, as Cucumber.js sets the default value of the cucumber step timeout to 5s), you can increase the timeout by changing the step definition (value in millis):
this.When(/^he enters his credentials$/, { timeout: 10 * 1000 }, () => {
  return stage.theActorInTheSpotlight().attemptsTo(
    Login.withTheirCredentials()
  );
});
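If you'd rather not annotate every step, Cucumber.js also lets you raise the default for all steps from a support file. A sketch using the same this-based API as the step definition above (the file name is just an example):
// features/support/timeouts.js
module.exports = function () {
  this.setDefaultTimeout(60 * 1000); // value in millis, applies to every step
};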
Please note that in the above answer I'm assuming that you're using Serenity/JS with Cucumber to test an Angular app. If you're using a different web framework (like React), the test might also time out when Protractor is waiting for Angular to load.
If this describes your scenario, you might want to set ignoreSynchronization so that Protractor doesn't wait for Angular:
exports.config = {
  onPrepare: function() {
    browser.ignoreSynchronization = true;
  }
  // ... rest of the config file
}
To find out more, check out the Protractor documentation and the already mentioned Cucumber docs. I'll also add an article on serenity-js.org shortly to describe the different options so everything is in one place :-)
Hope this helps!
Jan
I am having issues with the performance of browser live reloading whenever I make a change to a JS file. In my gulp setup, I have the following watches:
gulp.task('watch', function() {
  gulp.watch(config.paths.html, ['html']);
  gulp.watch(config.paths.js, ['js', 'lint']);
  gulp.watch(config.paths.css, ['css']);
});
Thus, whenever there's a change to a js file, the js and lint tasks are triggered. They are as follows:
gulp.task('js', function() {
  return browserify(config.paths.mainJs)
    .transform(reactify)
    .bundle()
    .on('error', console.error.bind(console))
    .pipe(source('bundle.js'))
    .pipe(gulp.dest(config.paths.dest + '/scripts'))
    .pipe(connect.reload());
});
gulp.task('lint', function() {
  return gulp.src(config.paths.js)
    .pipe(lint({config: 'eslint.config.json'}))
    .pipe(lint.format());
});
With this setup, live reloading on JS file changes doesn't scale: as the project gets bigger, the reloads take longer. Even a small project with about a dozen JS files already takes 3.8 seconds per reload.
I know the problem is that on every JS file change this reactifies and bundles every JS file in the project, which is an expensive operation and completely redundant for every file other than the one that changed. What's a better way to handle the live reloading? I know webpack has hot module replacement; is there a gulp equivalent?
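One common way out of the full re-bundle (not something the post above mentions) is watchify, which caches browserify's dependency graph and rebuilds only the files that changed. A minimal sketch, reusing config, reactify, source, and connect from the gulpfile above:
var browserify = require('browserify');
var watchify = require('watchify');

// Create the bundler once so watchify can cache it between rebuilds.
var bundler = watchify(browserify(config.paths.mainJs, watchify.args))
  .transform(reactify);

function bundle() {
  return bundler.bundle()
    .on('error', console.error.bind(console))
    .pipe(source('bundle.js'))
    .pipe(gulp.dest(config.paths.dest + '/scripts'))
    .pipe(connect.reload());
}

bundler.on('update', bundle); // incremental rebuild on every file change
gulp.task('js', bundle);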
I'm trying to ignore specific files within folders using Chokidar. I'm sure the syntax for the ignore path is incorrect, but I can't seem to find the problem. I've tried all combinations of strings, globs, and arrays. I'd appreciate if someone would point me in the right direction.
Here's a quick example of the problem. I'm trying to ignore ignore.js, but since its folder is being watched, console.log fires both when the file is written and when it is deleted.
var chokidar = require('chokidar');
var fs = require('fs');
var path = require('path');
var watcher = chokidar.watch('./test', {
  ignored: path.resolve('./test/ignore.js'),
  persistent: true,
  ignoreInitial: true,
  alwaysStat: true
});
watcher.on('all', console.log);
setTimeout(function(){ fs.writeFileSync('./test/ignore.js', 'w'); }, 200);
setTimeout(function(){ fs.unlinkSync('./test/ignore.js'); }, 300);
Thanks for any help!
I am inclined to agree with @loganfsmyth's comment that your path name is wrong. In my app I dynamically look up the folder chokidar is monitoring from a function. For instance:
Meteor.methods({
  getWatchFolder: function () {
    return watchFolder;
  },
});
I set watchFolder elsewhere, it's not really important in the context of this question, but assume it is returning "/tmp". This worked great, and ignored that file:
var watcher = chokidar.watch(Meteor.call('getWatchFolder'), {
  ignored: path.resolve(Meteor.call('getWatchFolder') + '/ignore.js'),
  persistent: true
});
I noticed this only ignored /tmp/ignore.js, not a nested instance like /tmp/tmp2/ignore.js. If you want to ignore all nested instances this is easily remedied by adding the double asterisk wildcard to the ignore path:
var watcher = chokidar.watch(Meteor.call('getWatchFolder'), {
  ignored: path.resolve(Meteor.call('getWatchFolder') + '/**/ignore.js'),
  persistent: true
});
I tried setting my watch folder to . like you. It found TONS of files; I determined it was running from
/Users/esoyke/myAppName/.meteor/local/build/programs/server
This was not respecting my ignore path, though. When I changed it from watching . to that absolute path, it naturally found the same files, but the ignore worked again. I suspect there is an issue with absolute vs. relative paths going on here; see if you can refactor to use an absolute path.
P.S. Thanks for showing me path.resolve; I hadn't used that part of Node's path module yet. I was trying to add multiple sub-directories to the ignore by editing chokidar's default regex of /[\/\\]\./. This approach is much simpler and easier to read. Sadly it doesn't seem like chokidar allows multiple ignore values at the moment, and path.resolve is no help there either: given several arguments it resolves them into a single path rather than returning one path per argument, so I'll probably have to go back to regex to get multiple ignore paths working.
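For what it's worth, chokidar's ignored option is anymatch-compatible, so newer versions do accept an array of patterns. A quick sketch (the second path is purely illustrative):
var chokidar = require('chokidar');
var path = require('path');

var watcher = chokidar.watch('/tmp', {
  // anymatch allows an array of strings, globs, regexes, or functions
  ignored: [
    path.resolve('/tmp/ignore.js'),
    '/tmp/**/also-ignore.js' // hypothetical extra pattern
  ],
  persistent: true
});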
I am working on a security dashboard; it watches for changes to files in an entire home directory with hundreds of sites (all Joomla, so a lot of files).
In order to stay on top of potential security issues, we want to watch for file changes in an efficient way without creating unnecessary CPU/memory overhead. We want to watch at a fairly short interval, but I know it's a balancing act when you want to keep a side process from using more CPU than it should.
I have tried to use "watch" with the following code, running in the home directory:
var watch = require('watch');

watch.createMonitor(__dirname, {
  interval: 500,
  // Only track index.php files
  filter: function(file, stat) {
    return file.indexOf('index.php') !== -1;
  }
}, function(monitor) {
  monitor.on('created', function(file, stat) {
    console.log(file + ' new');
  });
  monitor.on('changed', function(file, stat) {
    console.log(file + ' changed');
  });
  monitor.on('removed', function(file, stat) {
    console.log(file + ' deleted');
  });
});
However, this spikes the CPU to over 100% of a single core (sometimes two) out of 8. Memory also climbs to about 20% of 8 GB pretty quickly. All of that is just to create the watchers on all the files, before it can actually detect any file changes.
I know the issue is that it walks every file individually and only drops a file once the filter rejects it. Typically all I need to watch is the index.php in every directory, down to a level where the layout is fairly consistent (with some exceptions).
Is there a module already built to do this? Or is this something new? All the modules I find assume a smaller directory (like watching LESS files), so they are not built for this sort of application at all.
Any ideas? I know this code will need to be scrapped, as I see no way to stop the CPU overhead.
Do not use the 'watch' package; just use fs.watch(...).
Package 'watch':
consistent API across operating systems
very slow, because it is implemented mostly in Node (look at the source to see how it works)
source code: https://github.com/mikeal/watch/blob/master/main.js
fs.watch(...):
inconsistent API; not every option is supported on every OS
very fast, because it reuses the OS's native file-watching features
documentation: http://nodejs.org/docs/latest/api/fs.html#fs_fs_watch_filename_options_listener
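A minimal sketch of the fs.watch approach for the index.php use case above. Assumptions: it runs from the home directory, and { recursive: true } is available (natively supported on macOS and Windows; on Linux only in recent Node versions):
var fs = require('fs');
var path = require('path');

var root = process.cwd(); // assumed to be the home directory with all the sites

fs.watch(root, { recursive: true }, function (eventType, filename) {
  // filename can be null on some platforms
  if (filename && path.basename(filename) === 'index.php') {
    console.log(eventType + ': ' + filename);
  }
});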