Grunt Build Stuck at 99% | svgmin:dist - node.js

My website is built on AngularJS. I recently performed a clean-up using TortoiseGit, but since then grunt build is not working.
I tried cloning the same project to another machine, but it's the same issue: grunt build is unable to complete the process.
$ grunt build
Running "clean:dist" (clean) task
46 paths cleaned.
Running "wiredep:app" (wiredep) task
Running "wiredep:test" (wiredep) task
Running "useminPrepare:html" (useminPrepare) task
Configuration changed for concat, uglify, cssmin
Running "concurrent:dist" (concurrent) task
Running "copy:styles" (copy) task
Copied 3 files
Done.
Execution Time (2019-03-20 13:18:51 UTC+5:30)
loading tasks 15ms ██████████████████ 38%
copy:styles 25ms ██████████████████████████████ 63%
Total 40ms
Running "svgmin:dist" (svgmin) task
Total saved: 202.19 kB
Done.
Execution Time (2019-03-20 13:18:51 UTC+5:30)
svgmin:dist 2.5s █████████████████████████████████████████████████ 99%
Total 2.5s
Update
After running grunt build -v it is still stuck here:
Total saved: 202.19 kB
Done.
Execution Time (2019-03-20 14:52:16 UTC+5:30)
loading tasks 20ms █ 1%
svgmin:dist 2.7s ███████████████████████████████████████████████ 99%
Total 2.7s
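One way to narrow this down (my suggestion, not from the original post) is to run the stalling task in isolation with grunt's standard logging flags:
grunt svgmin:dist --verbose --stack
If svgmin:dist finishes cleanly on its own, the hang is more likely in whatever concurrent:dist schedules after it than in svgmin itself.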

Related

Server process being killed on a Linux Digitalocean VM

I am trying to run a Next.js server on a DigitalOcean virtual machine. The server works, but when I run npm run start, the logs say Killed after ~1 minute.
Here is an example log of what happens:
joey@mydroplet:~/Server$ sudo node server
info - SWC minify release candidate enabled. https://nextjs.link/swcmin
event - compiled client and server successfully in 3.3s (196 modules)
wait - compiling...
event - compiled client and server successfully in 410 ms (196 modules)
> Ready on https://localhost:443
> Ready on http://localhost:8080
wait - compiling / (client and server)...
event - compiled client and server successfully in 1173 ms (261 modules)
Killed
joey@mydroplet:~/Server$
After some research, I came across a couple of threads describing a server lacking enough memory/resources to continue the operation. I upgraded the memory from 512 MB to 1 GB, but this still happens.
Do I need to further upgrade the memory?
It was the memory. Upgrading the memory of the server from 1 GB to 2 GB solved this problem.
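A quick way to confirm this class of failure (my note, not from the original thread) is to ask the kernel whether its OOM killer fired:
sudo dmesg | grep -i -E 'killed process|out of memory'
If memory is the culprit, a line naming the node process will show up. On small droplets a swap file is a common stopgap while you size the plan:
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile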

jit-grunt: Plugin for the "watch" task not found

After executing "grunt server" I saw these errors; I don't know what code I need to attach or detach, or where.
One more thing: I get a DeprecationWarning, while nowhere in the code did I write node --debug or node --debug-brk.
$ grunt server
Running "server" task
>> The `server` task has been deprecated. Use `grunt serve` to start a server.
Running "serve" task
Running "clean:server" (clean) task
>> 0 paths cleaned.
Running "env:all" (env) task
Running "express:dev" (express) task
Starting background Express server
(node:5497) [DEP0062] DeprecationWarning: `node --debug` and `node --debug-brk` are invalid. Please use `node --inspect` or `node --inspect-brk` instead.
Stopping Express server
Running "wait" task
>> Waiting for server reload...
Done waiting!
jit-grunt: Plugin for the "watch" task not found.
If you have installed the plugin already, please setting the static mapping.
See https://github.com/shootaroo/jit-grunt#static-mappings
Warning: Task "watch" failed. Use --force to continue.
Aborted due to warnings.
Execution Time (2018-09-25 12:24:48 UTC+5:30)
loading tasks 124ms ▇▇▇▇ 7%
express:dev 106ms ▇▇▇ 6%
wait 1.5s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 85%
Total 1.8s
Solution
npm install grunt-contrib --save-dev
and add this line before the last line of grunt.js:
grunt.loadNpmTasks('grunt-contrib');
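If the project loads tasks through jit-grunt instead (as many Yeoman-generated Gruntfiles do), the error message itself points at the other fix: a static mapping telling jit-grunt which package provides the task. A minimal sketch, assuming the task comes from grunt-contrib-watch:
require('jit-grunt')(grunt, {
    // static mapping: task name -> npm package that provides it
    watch: 'grunt-contrib-watch'
});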

gulp trust-dev-cert without error, but access the page still failed

After I upgraded my Node.js to v9.3.0 and npm to 5.5.1, I failed to access the gulp serve server: the browser shows an unsafe TLS security error.
I have removed ~.gcb-serve-data/ and run gulp trust-dev-cert without error, but I cannot find the cert among my trusted certificates. I am working on Win10.
Following is the "gulp trust-dev-cert" output:
[18:30:14] Starting gulp
[18:30:14] Starting 'trust-dev-cert'...
[18:30:14] Starting subtask 'configure-sp-build-rig'...
[18:30:14] Finished subtask 'configure-sp-build-rig' after 7.48 ms
[18:30:14] Starting subtask 'trust-cert'...
[18:30:14] Finished subtask 'trust-cert' after 79 ms
[18:30:14] Finished 'trust-dev-cert' after 90 ms
[18:30:14] ==================[ Finished ]==================
gulp trust-dev-cert must be executed IN the project folder.
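In other words, cd into the SPFx project root (the folder containing gulpfile.js) before running it; for example (folder name hypothetical):
cd my-spfx-webpart
gulp trust-dev-cert
gulp serve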

Freeswitch pauses on check_ip at boot on centos 7.1

During an investigation into a different problem (Inconsistent systemd startup of freeswitch) I discovered that both the latest freeswitch 1.6 and 1.7 paused for several minutes at a time (between 4 and 14) during boot on centos 7.1. Whilst it was intermittent, it happened as often as one time in 3 or 4.
Running this from the command line:
/usr/bin/freeswitch -nonat -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
caused the following output (note the time difference between the "Added task 2" line and the line after it):
2015-10-23 15:40:14.160101 [INFO] switch_event.c:685 Activate Eventing Engine.
2015-10-23 15:40:14.170805 [WARNING] switch_event.c:656 Create additional event dispatch thread 0
2015-10-23 15:40:14.272850 [INFO] switch_core_sqldb.c:3381 Opening DB
2015-10-23 15:40:14.282317 [INFO] switch_core_sqldb.c:1693 CORE Starting SQL thread.
2015-10-23 15:40:14.285266 [NOTICE] switch_scheduler.c:183 Starting task thread
2015-10-23 15:40:14.293743 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445611214
2015-10-23 15:40:14.293837 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445611214
2015-10-23 15:49:47.883158 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
When I ran 1.6 on centos 6.7 using the same command line as above I got this - note the delay is a much more reasonable 14 seconds:
2015-10-23 10:31:00.274533 [INFO] switch_event.c:685 Activate Eventing Engine.
2015-10-23 10:31:00.285807 [WARNING] switch_event.c:656 Create additional event dispatch thread 0
2015-10-23 10:31:00.434780 [INFO] switch_core_sqldb.c:3381 Opening DB
2015-10-23 10:31:00.465158 [INFO] switch_core_sqldb.c:1693 CORE Starting SQL thread.
2015-10-23 10:31:00.481306 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445610660
2015-10-23 10:31:00.481446 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445610660
2015-10-23 10:31:00.481723 [NOTICE] switch_scheduler.c:183 Starting task thread
2015-10-23 10:31:14.286702 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
It's the same on FS 1.7 as well.
This strongly suggests that centos 7.1 and FS have an issue together. Can anyone help me diagnose further or shine some more light on this, please?
This all came to light as I tried to understand why FS would not accept the cli connection for several minutes after I thought it had booted up (using -nc from systemd service).
Thanks to the FS userlist and ultimately Anthony Minessale, the issue was to do with RNG entropy.
This is a good explanation -
https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged
Here are some extracts:
There are two general random devices on Linux: /dev/random and /dev/urandom. The best randomness comes from /dev/random, since it's a blocking device, and will wait until sufficient entropy is available to continue providing output.
The key here is that it's a blocking device, so any program waiting for a random number from /dev/random will pause until sufficient entropy is available for a "safe" random number.
This is a headless server, so the usual sources of entropy such as mouse/keyboard activity (and many others) do not apply. Hence the delays.
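You can watch the pool drain with a standard kernel interface (my note, not part of the original answer):
cat /proc/sys/kernel/random/entropy_avail
On kernels of that era the ceiling is 4096; an entropy-starved headless VM will sit near zero, and anything reading /dev/random blocks until the count recovers.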
The fix is this:
Based on the HAVEGE principle, and previously based on its associated library, haveged allows generating randomness based on variations in code execution time on a processor......(google the rest!)
Install it like this:
yum install haveged
and start it up like this:
haveged -w 1024
making sure it restarts on reboot:
chkconfig haveged on
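Note: centos 7.1 is systemd-based, so chkconfig just forwards to systemctl there; the native equivalents are:
systemctl start haveged
systemctl enable haveged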
Hope this helps someone.

Grunt watch tasks seem to take a very long time

I'm running two simple tasks that take <100ms each, but when run under the watch command the two combined tasks take ~8 seconds in total (there seems to be an overhead of ~3.5 seconds per task). I'm using it with livereload for development and I'm finding it very frustrating. I tried setting spawn to false, but this seemed to break it and none of the associated tasks were run.
Here's sample output from when a sass file is changed.
>> File "app/styles/main.scss" changed.
File "app/styles/main.css" created.
Done, without errors.
Elapsed time
loading tasks 4ms ▇▇▇▇▇ 9%
sass 1ms ▇▇ 2%
sass:dist 39ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 89%
Total 44ms
Completed in 3.862s at Mon Nov 18 2013 17:05:57 GMT+0000 (GMT) - Waiting...
OK
>> File "app/styles/main.css" changed.
Running "copy:styles" (copy) task
Copied 1 files
Done, without errors.
Elapsed time
loading tasks 4ms ▇▇▇▇▇▇▇▇▇▇▇▇ 24%
copy:styles 13ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 76%
Total 17ms
Completed in 3.704s at Mon Nov 18 2013 17:06:01 GMT+0000 (GMT) - Waiting...
OK
>> File ".tmp/styles/main.css" changed.
... Reload .tmp/styles/main.css ...
... Reload .tmp/styles/main.css ...
Completed in 0.000s at Mon Nov 18 2013 17:06:01 GMT+0000 (GMT) - Waiting...
Using grunt 0.4.1 (and grunt-cli 0.1.11) on node.js 0.10.20, running on a 2012 MacBook Air (OS X 10.8.5).
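For reference, the spawn option mentioned above sits in the watch config like this (a sketch of the grunt-contrib-watch option, not the asker's actual Gruntfile; paths assumed from the log above). It runs tasks in the same process, which cuts the per-task startup cost, but as the asker found it can break things:
watch: {
    options: {
        spawn: false  // run tasks in-process: faster, but a task error can take down watch
    },
    styles: {
        files: ['app/styles/**/*.scss'],
        tasks: ['sass:dist', 'copy:styles']
    }
}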
After a file is changed, watch executes the tasks, but once finished, watch reloads the modules(!) and starts watching again.
Run verbose to see the problem:
grunt watch --verbose
I've tried a recursion on the watch task, but with no success:
watch: {
    ...,
    tasks: ['sometask', 'watch']
}
An easy solution that worked well, was to use "grunt-este-watch". You can read the required steps here: https://stackoverflow.com/a/33920834/2741005
Yeah, contrib-sass is a lot slower; I thought that might have contributed to the problem. The only thing I could suggest is to minimise the number of watch targets you are running; it looks like you are copying the css from app into tmp and then reloading that? It might be better to save your sass output directly into tmp with something like a sass:dev task (a sketch of one follows at the end of this answer); that way you only run watch twice. This is how I usually do it:
watch: {
    sass: {
        files: [
            'styles/**/*.scss'
        ],
        tasks: ['sass', 'copy:dev', 'cssmin']
    },
    css: {
        options: {
            livereload: true
        },
        files: [
            'dist/css/master.css'
        ],
        tasks: []
    }
}
I can't help but think that it is the extra overhead of running copy in a different target altogether; of course, you can run as many tasks as you like in that tasks array. :)
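For what it's worth, here is a sketch of that sass:dev idea, compiling straight into .tmp so the copy step (and one whole watch cycle) disappears; the paths are assumptions based on the question's log:
sass: {
    dev: {
        files: {
            '.tmp/styles/main.css': 'app/styles/main.scss'
        }
    }
}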
