Grunt watch tasks seem to take a very long time - node.js

I'm running two simple tasks that each complete in under 100ms, but when run under the watch task the two combined take ~8 seconds in total (there seems to be an overhead of ~3.5 seconds per task). I'm using it with livereload for development and I'm finding it very frustrating. I tried setting spawn to false, but this seemed to break it and none of the associated tasks were run.
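For reference, this is roughly what I tried for the spawn setting (a sketch; the target and file paths are illustrative, the task names are from my output below):
watch: {
  styles: {
    options: {
      spawn: false,      // run tasks in the watch process instead of spawning a child
      livereload: true
    },
    files: ['app/styles/**/*.scss'],
    tasks: ['sass:dist', 'copy:styles']
  }
}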
Here's sample output from when a sass file is changed.
>> File "app/styles/main.scss" changed.
File "app/styles/main.css" created.
Done, without errors.
Elapsed time
loading tasks 4ms ▇▇▇▇▇ 9%
sass 1ms ▇▇ 2%
sass:dist 39ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 89%
Total 44ms
Completed in 3.862s at Mon Nov 18 2013 17:05:57 GMT+0000 (GMT) - Waiting...
OK
>> File "app/styles/main.css" changed.
Running "copy:styles" (copy) task
Copied 1 files
Done, without errors.
Elapsed time
loading tasks 4ms ▇▇▇▇▇▇▇▇▇▇▇▇ 24%
copy:styles 13ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 76%
Total 17ms
Completed in 3.704s at Mon Nov 18 2013 17:06:01 GMT+0000 (GMT) - Waiting...
OK
>> File ".tmp/styles/main.css" changed.
... Reload .tmp/styles/main.css ...
... Reload .tmp/styles/main.css ...
Completed in 0.000s at Mon Nov 18 2013 17:06:01 GMT+0000 (GMT) - Waiting...
Using grunt 0.4.1 (and grunt-cli 0.1.11) on node.js 0.10.20, running on a 2012 MacBook Air (OS X 10.8.5).

After a file is changed, watch executes the tasks, but once they finish, watch reloads the modules(!) and starts watching again.
Run it with --verbose to see the problem:
grunt watch --verbose
I've tried recursion on the watch task, but with no success:
watch: {
  ...,
  tasks: ['sometask', 'watch']
}
An easy solution that worked well was to use "grunt-este-watch". You can read the required steps here: https://stackoverflow.com/a/33920834/2741005

Yeah, contrib-sass is a lot slower; I thought that might have contributed to the problem. The only thing I could suggest is to minimise the number of watch targets you are running. It looks like you are copying the CSS from app into .tmp and then reloading that? It might be better to save your Sass output directly into .tmp with something like a sass:dev task; that way watch only runs twice. This is how I usually do it:
watch: {
  sass: {
    files: [
      'styles/**/*.scss'
    ],
    tasks: ['sass', 'copy:dev', 'cssmin']
  },
  css: {
    options: {
      livereload: true
    },
    files: [
      'dist/css/master.css'
    ],
    tasks: []
  }
}
I can't help but think that it is the extra overhead of running copy in a different target altogether; of course, you can run as many tasks as you like in that tasks array. :)
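For completeness, the sass:dev idea mentioned above might look something like this (a sketch only; the source path and the .tmp output directory are assumptions taken from the question):
sass: {
  dev: {
    options: {
      style: 'expanded'    // skip compression while developing
    },
    files: {
      '.tmp/styles/main.css': 'app/styles/main.scss'
    }
  }
}
Then the watch target can point its livereload at .tmp/styles/main.css directly, with no copy step in between.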

Related

arangodb starter mode does not start

I have downloaded arangodb3-linux-3.9.2 from Git on CentOS 7. I created a database dir and ran the README instructions for a standalone start. The first time it runs, I get 100 failures; the key INFO log lines seem to be:
... [INFO] server started component=arangodb pid=49827 type=single
... [INFO] Wait on 49827 returned component=arangodb exit-status=1 trap-cause=-1
It creates the log file, setup.json, and a single8529 dir in the database dir I specified. Is it just taking too long to start? The whole run of 100 failures takes about 1 or 2 seconds.
If I try to run it again with the same README instructions, I get this error:
... [FATAL] Failed to run service error="open /.../single8529/data/ENGINE: no such file"
I have also tried with --starter.host 127.0.0.1 to simplify.
I can also confirm that port 8529 is open.
I couldn't get the arangodb starter to work according to their README, but this does start the server:
arangod --database.directory MYDIR --rocksdb.max-background-jobs 4
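Once arangod is up, you can verify it is answering (this assumes the default port 8529 and an empty root password; adjust as needed):
curl -u root: http://127.0.0.1:8529/_api/version
It should return a small JSON document with the server version.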

Grunt Build Stuck at 99% | svgmin:dist

I have my website on AngularJS. I recently performed a cleanup using TortoiseGit, but after that grunt build stopped working.
I tried to clone the same project to another machine, but I hit the same issue: grunt build is unable to complete the process.
$ grunt build
Running "clean:dist" (clean) task
46 paths cleaned.
Running "wiredep:app" (wiredep) task
Running "wiredep:test" (wiredep) task
Running "useminPrepare:html" (useminPrepare) task
Configuration changed for concat, uglify, cssmin
Running "concurrent:dist" (concurrent) task
Running "copy:styles" (copy) task
Copied 3 files
Done.
Execution Time (2019-03-20 13:18:51 UTC+5:30)
loading tasks 15ms ██████████████████ 38%
copy:styles 25ms ██████████████████████████████ 63%
Total 40ms
Running "svgmin:dist" (svgmin) task
Total saved: 202.19 kB
Done.
Execution Time (2019-03-20 13:18:51 UTC+5:30)
svgmin:dist 2.5s █████████████████████████████████████████████████ 99%
Total 2.5s
Update
After running grunt build -v it is stuck here:
Total saved: 202.19 kB
Done.
Execution Time (2019-03-20 14:52:16 UTC+5:30)
loading tasks 20ms █ 1%
svgmin:dist 2.7s ███████████████████████████████████████████████ 99%
Total 2.7s

Submitting first job to pacemaker

I followed this guide:
https://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/
I stayed with the Active/Passive DRBD file system sharing. I had to reboot my cluster, and now I am getting the following error:
Current DC: rbx-1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Tue Nov 28 17:01:14 2017
Last change: Tue Nov 28 16:40:09 2017 by root via cibadmin on rbx-1
2 nodes configured
5 resources configured
Node rbx-2: UNCLEAN (offline)
Online: [ rbx-1 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started rbx-1
WebSite (ocf::heartbeat:apache): Stopped
Master/Slave Set: WebDataClone [WebData]
WebData (ocf::linbit:drbd): FAILED rbx-1 (blocked)
Stopped: [ rbx-2 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
Failed Actions:
* WebData_stop_0 on rbx-1 'invalid parameter' (2): call=20, status=complete, exitreason='none',
last-rc-change='Tue Nov 28 16:27:58 2017', queued=0ms, exec=3ms
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Any ideas?
Also, does anyone have any recommended guides for submitting jobs?
This post is relatively old at this point, but I'll leave this here for others who stumble upon the same issue.
This problem has to do with the DRBD integration script that Pacemaker uses. If it's broken, missing, has incorrect permissions, etc., you can get an error like this. In CentOS 7 that script is located at /usr/lib/ocf/resource.d/drbd
Note: this is specifically for the guide mentioned by the OP, but it may help you anyway:
Section 7.1 has a big "IMPORTANT" block that talks about replacing the Pacemaker integration script due to a bug. If you use the command it gives you there, you actually replace the script with a 404 error page, which obviously doesn't work and causes the error. You can fix the issue by restoring the original script, either by reinstalling DRBD...
yum remove -y kmod-drbd84 drbd84-utils
yum install -y kmod-drbd84 drbd84-utils
...or by finding just the drbd script elsewhere and adding/replacing it in /usr/lib/ocf/resource.d/drbd. Make sure its permissions are correct and that it is set as executable.
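A quick way to check whether the script was clobbered by that 404 page (path as given above; adjust it if your provider directory differs):
head -n 3 /usr/lib/ocf/resource.d/drbd    # a valid agent starts with a shell shebang, not HTML
ls -l /usr/lib/ocf/resource.d/drbd        # the executable bit should be set
chmod 755 /usr/lib/ocf/resource.d/drbd    # fix the permissions if necessary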
Hope that helps!

unable to debug Jmeter through CLI

I'm trying to run JMeter through the command line on a CentOS VM like so:
./jmeter -n -t temp_cli/sampler.jmx -l temp_cli/results.xml -j temp_cli/j.log
I get :
INFO - jmeter.threads.JMeterThread: Thread is done: sampler flow 1-1
INFO - jmeter.threads.JMeterThread: Thread finished: sampler flow 1-1
DEBUG - jmeter.threads.ThreadGroup: Ending thread sampler 1-1
summary = 1 in 1s = 2.0/s Avg: 434 Min: 434 Max: 434 Err: 1 (100.00%)
Tidying up ... # Wed Apr 13 07:57:42 UTC 2016 (1460534262577)
... end of run
It's supposed to take more than 1s, so I'm pretty sure something went wrong. The thing is, I don't get enough data about what went wrong.
I tried tail -f jmeter.log but got no errors.
Does anyone know how I can get more information?
Your results.xml file will give you more details.
You can see there that you got a 100% error rate, so your single sample failed.
If you are running the test in non-GUI mode on a different machine from where you ran it in GUI mode, then you most probably did not install the plugin JARs.
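You can also raise JMeter's log level straight from the command line with the -L flag (see jmeter --help; exact category names vary by version), for example:
./jmeter -n -t temp_cli/sampler.jmx -l temp_cli/results.xml -j temp_cli/j.log -LDEBUG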

ZTE Open doesn't boot after flashing Firefox OS 1.2

I have a ZTE Open with the custom ROM Boot2Gecko 1.3.0.0-prerelease (Git 2013-10-19 22:09:07 d544afff51).
I'm building B2G v1.2:
BRANCH=v1.2 ./config.sh inari
./build
The build finishes successfully.
I'm flashing it and see the following:
./flash.sh
< waiting for device >
erasing 'cache'...
OKAY [ 0.530s]
finished. total time: 0.530s
erasing 'userdata'...
OKAY [ 1.405s]
finished. total time: 1.405s
sending 'userdata' (55044 KB)...
OKAY [ 5.074s]
writing 'userdata'...
OKAY [ 10.051s]
finished. total time: 15.125s
sending 'system' (81724 KB)...
OKAY [ 7.507s]
writing 'system'...
OKAY [ 14.973s]
finished. total time: 22.479s
rebooting...
finished. total time: 0.001s
Attempting to set the time on the device
time 1384896807 -> 1384896807.0
But my phone is frozen on the logo screen.
adb shell dmesg returns the following: https://gist.github.com/blackbass1988/7559973
I'm building on Mac OS X 10.9.
It's strange that the build says everything is OK when it isn't.
Did you use the adapted boot.img? Without it you will not be able to get a working system just by following the build instructions on MDN. Here are some blogs that describe the build process:
https://blog.non.co.il/index.php/zte-open-phone-upgrading-to-firefoxos-1-1-how-to/
http://rowehl.com/blog/2013/10/24/firefoxos-1-dot-2-on-zte-open/
http://sl.edujose.org/2013/10/adapted-boot-image-for-use-with-b2g.html
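For reference, once you have an adapted boot.img, flashing it is the usual fastboot step (the filename is illustrative, and the device must be in fastboot mode):
fastboot flash boot boot.img
fastboot reboot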
Odd. Try resetting the cache from the system recovery.
i.e.: reboot, hold power + volume up while it's restarting, and you should get to the system recovery. Move down to "wipe cache partition" with the volume-down button and hit the power button to confirm. Then reboot the device again and see if it starts up.
