SCons super slow startup on Windows

I have long suffered from slow startup times when building with SCons. On my old work laptop, it could take up to 60 seconds just to build the most basic hello-world example.
I just received a new laptop, so I had the opportunity to investigate this further. Our laptops come preloaded with Visual Studio 2010 and some other stuff. I also need Visual Studio 2015.
On the newly unpacked PC, a build of hello world took "only" 10 seconds (Python 2.7.14, SCons 3.0.0, no other major applications running).
After installing VS2015, the time went up to 20 seconds.
I can compare this with my 10-year-old PC at home, where the same build takes less than 2 seconds (though that machine only has VS2015).
What could be the reason for this extreme slowness, and can anything be done about it? It seems that execution of the vcvars batch scripts is responsible, but why is it so slow on my work computers and not at home? How can I troubleshoot this further? Here is the profile of a run:
Ordered by: cumulative time
List reduced from 1104 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 20.500 20.500 C:\Python27\scons-3.0.0\SCons\Script\Main.py:831(_main)
1 0.000 0.000 20.324 20.324 C:\Python27\scons-3.0.0\SCons\Script\SConscript.py:155(_SConscript)
1 0.000 0.000 20.323 20.323 C:\Temp\SConstruct:1(<module>)
3/2 0.000 0.000 20.321 10.161 C:\Python27\scons-3.0.0\SCons\Environment.py:897(__init__)
3/2 0.000 0.000 20.314 10.157 C:\Python27\scons-3.0.0\SCons\Environment.py:93(apply_tools)
2 0.000 0.000 20.314 10.157 C:\Python27\scons-3.0.0\SCons\Environment.py:1782(Tool)
28/2 0.000 0.000 20.313 10.157 C:\Python27\scons-3.0.0\SCons\Tool\__init__.py:271(__call__)
2 0.000 0.000 20.313 10.157 C:\Python27\scons-3.0.0\SCons\Tool\default.py:38(generate)
2 0.000 0.000 20.150 10.075 C:\Python27\scons-3.0.0\SCons\Tool\mslink.py:256(generate)
8 0.000 0.000 20.150 2.519 C:\Python27\scons-3.0.0\SCons\Tool\MSCommon\vc.py:432(msvc_setup_env_once)
2 0.000 0.000 20.150 10.075 C:\Python27\scons-3.0.0\SCons\Tool\MSCommon\vc.py:531(msvc_setup_env)
2 0.000 0.000 20.149 10.074 C:\Python27\scons-3.0.0\SCons\Tool\MSCommon\vc.py:442(msvc_find_valid_batch_script)
2 0.000 0.000 20.148 10.074 C:\Python27\scons-3.0.0\SCons\Tool\MSCommon\vc.py:381(script_env)
1 0.000 0.000 20.147 20.147 C:\Python27\scons-3.0.0\SCons\Tool\MSCommon\common.py:144(get_output)
12 20.134 1.678 20.134 1.678 {method 'read' of 'file' objects}
1 0.000 0.000 0.173 0.173 C:\Python27\scons-3.0.0\SCons\Script\Main.py:1109(_build_targets)
1 0.000 0.000 0.172 0.172 C:\Python27\scons-3.0.0\SCons\Job.py:100(run)
1 0.000 0.000 0.169 0.169 C:\Python27\scons-3.0.0\SCons\Job.py:186(start)
3 0.000 0.000 0.156 0.052 C:\Python27\scons-3.0.0\SCons\Action.py:644(__call__)
2 0.000 0.000 0.155 0.078 C:\Python27\scons-3.0.0\SCons\Script\Main.py:184(execute)
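For anyone who wants to reproduce this kind of measurement: SCons can write a cProfile dump with its --profile option, and the dump can then be summarized with the standard-library pstats module. A minimal sketch (the profile file name is arbitrary):

# Sketch: generate and inspect a profile like the listing above.
# Run once from the project directory to capture the dump:
#   scons --profile=scons_startup.prof
# Then summarize it, sorted the same way as the listing above:
import pstats

stats = pstats.Stats('scons_startup.prof')
stats.sort_stats('cumulative').print_stats(20)  # top 20 entries by cumulative time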
My SConstruct file:
env = Environment()
hello = Program(["hello.c"])

The solution here, according to the chat, was to disable the antivirus software running on the machine.
With that change, the SCons startup time went from 40 seconds down to 2 seconds. A significant performance improvement was noticeable in other areas as well.
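Independent of the antivirus, the profile above shows where the startup time goes: nearly all of it is inside MSCommon's get_output, i.e. running the MSVC vcvars batch scripts and reading their output. A mitigation sketch (not the accepted fix here; it assumes VS2015, i.e. MSVC_VERSION '14.0', is the toolchain actually wanted and that restricting the tool list is acceptable for the project):

# Sketch: pin the MSVC version and restrict the tool list so SCons only sets
# up the one toolchain that is needed. MSVC_VERSION and the tools list are
# documented SCons settings; '14.0' corresponds to VS2015.
env = Environment(
    MSVC_VERSION='14.0',
    tools=['msvc', 'mslink', 'mslib'],
)
hello = env.Program('hello.c')

This does not address the scanning overhead itself, but it can reduce the number of vcvars invocations per startup when several Visual Studio versions are installed.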

Related

The host memory displayed in CDH is inconsistent with what the top command reports

When I was about to clean up memory on the Linux host, I used the top command to check memory usage and found that the result was inconsistent with the host memory displayed by CDH.
I don't know why, or how CDH obtains the host's memory figure.
The CDH version is 6.3.2 (Parcel).
Tasks: 659 total, 1 running, 655 sleeping, 2 stopped, 1 zombie
%Cpu(s): 9.7 us, 2.0 sy, 0.2 ni, 87.9 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st
GiB Mem : 125.2 total, 4.9 free, 84.3 used, 36.0 buff/cache
GiB Swap: 34.0 total, 24.4 free, 9.6 used. 28.5 avail Mem
The CDH display shows:
96.9 GiB / 125.2 GiB
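Which formula CDH's host monitor uses is not something the output above proves, but note that 125.2 minus 28.5 (the "avail Mem" figure) is about 96.7 GiB, very close to the 96.9 GiB CDH shows, while top's "used" column excludes buffers/cache. So "total minus available" is one candidate to verify. A minimal sketch (assuming a Linux host whose kernel exposes MemAvailable) to compute both definitions straight from /proc/meminfo:

# Sketch: compare two common notions of "used" memory from /proc/meminfo.
# top's "used" is roughly MemTotal - MemFree - buffers/cache, whereas many
# monitoring tools report MemTotal - MemAvailable. Which of these CDH uses
# is an assumption to verify against the Cloudera Manager host monitor.
KIB_PER_GIB = 1024 ** 2  # /proc/meminfo values are reported in KiB

def meminfo():
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            info[key] = int(value.split()[0])  # drop the trailing "kB"
    return info

if __name__ == '__main__':
    m = meminfo()
    used_top_style = m['MemTotal'] - m['MemFree'] - m['Buffers'] - m['Cached']
    used_vs_available = m['MemTotal'] - m['MemAvailable']
    for label, kib in (('total', m['MemTotal']),
                       ('used (top-style)', used_top_style),
                       ('total - MemAvailable', used_vs_available)):
        print('%-22s %6.1f GiB' % (label, kib / float(KIB_PER_GIB)))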

WHM Server receiving lots of "FAILED: cphulk"

I have a WHM server at GoDaddy.
I'm receiving quite a lot of emails (3-4 a day) about a process failing and then recovering itself. It happens mostly with "cphulkd" but also with "lfd".
My server:
WHM version v68.0.33. It hosts two websites (one Moodle and one WordPress) and has 2 GB RAM and a 60 GB HD.
This is the whole mail:
Server: s50-62-22-123.secureserver.net
Primary IP Address: 50.62.22.123
Service Name: cphulkd
Service Status: failed ⛔
Notification: The service “cphulkd” appears to be down.
Service Check Method: The system’s command to check or to restart this service failed.
Number of Restart Attempts: 1
Service Check Raw Output (XID ejd2e7): The “cphulkd” service is down. The subprocess “/usr/local/cpanel/scripts/restartsrv_cphulkd” reported error number 255 when it ended.
Startup Log: Starting cPHulkd... Started. Starting PID 3789: cPhulkd - processor - dormant mode - accepting connections
Memory Information: Used 2.43 GB, Available 1.57 GB, Installed 4 GB
Load Information: 0.17 0.19 0.18
Uptime: 2 days, 18 hours, 59 minutes, and 37 seconds
IOStat Information:
avg-cpu: %user %nice %system %iowait %steal %idle
0.62 0.11 0.12 0.17 0.00 98.99
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
Top Processes:
PID Owner CPU % Memory % Command
18850 root 2.45 2.29 spamd child
3452 root 0.94 2.35 /usr/local/cpanel/3rdparty/perl/524/bin/perl -T -w /usr/local/cpanel/3rdparty/bin/spamd --max-spare=1 --max-children=3 --allowed-ips=127.0.0.1,::1 --pidfile=/var/run/spamd.pid --listen=5
1488 mysql 0.52 7.49 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=s50-62-22-179.secureserver.net.err --open-files-limit=10000 --pid-file=/var/lib/mysql/s50-62-22-179.secureserver.net.pid
18854 dovecot 0.31 0.06 dovecot/auth
20291 root 0.07 0.71 lfd - sleeping
Any ideas?
What's strange is that the mail says the server has 4 GB of RAM, but I only have 2 GB.

Ubuntu 14.04.1 server idle load average 1.00

Scratching my head here. Hoping someone can help me troubleshoot.
I have a Dell PowerEdge SC1435 server which had been running a previous version of Ubuntu for a while (I believe it was 13.10 Server x64).
I recently reformatted the drive (an SSD) and installed Ubuntu Server 14.04.1 x64.
All seemed fine through the install, but the machine hung on first boot at the end of the kernel output, just before I would expect the screen to clear and a login prompt to appear. There were no obvious errors at the end of the kernel output that I saw. (There was a message about "not using cpu thermal sensor that is unreliable", but that appears regardless of whether it boots or not.)
I gave it a good 5 minutes and then forced a reboot. To my surprise, it booted to the login prompt in about 1-2 seconds after the BIOS POST. I rebooted again and it seemed to pause for a few extra seconds where it hung before, but it proceeded to the login screen. Rebooting again, it was fast again. So at this point I thought it was just one of those random one-off glitches I would never explain, and I moved on.
I installed a few packages (the exact same packages installed on the same OS version on other hardware), did apt upgrade and dist-upgrade, then rebooted. It seemed to hang again, so I drove to the datacentre and connected a console, only to get a blank screen. I forced a reboot again. (I also set up IPMI for remote rebooting and got rid of the GRUB recordfail behaviour so it would not wait for me to press Enter!)
That was very late last night. I came home, did a few reboots with no issue so went to bed.
Today I rebooted it again to check it, and again it crashed somewhere. I remotely force-rebooted it.
At this point I started digging a little more and immediately noticed something really strange:
top - 14:18:35 up 8 min, 1 user, load average: 1.00, 0.85, 0.45
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.3 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 33013620 total, 338928 used, 32674692 free, 9740 buffers
KiB Swap: 3906556 total, 0 used, 3906556 free. 47780 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 33508 2772 1404 S 0.0 0.0 0:03.82 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/u16:0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.24 rcu_sched
9 root 20 0 0 0 0 S 0.0 0.0 0:00.02 rcuos/0
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/1
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/2
This server is completely unused and idle, yet it has a 1-minute load average of exactly 1.00.
As I watch the other values, the 5- and 15-minute averages also appear to be heading towards 1.00, so I assume they will all reach 1.00 at some point. (The "1 running" task is top itself.)
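For what it's worth, that convergence is expected: the kernel's 1-, 5- and 15-minute load averages are exponentially damped moving averages recomputed roughly every 5 seconds, so a single task that is permanently counted as active pulls all three towards 1.00, just at different speeds. A toy illustration (not kernel code):

# Toy model of the load-average decay: with one task always counted, the
# 1-, 5- and 15-minute averages all converge to 1.00.
import math

def simulate(window_minutes, active_tasks=1, period_s=5, duration_s=3600):
    """Exponentially damped moving average after duration_s seconds."""
    decay = math.exp(-period_s / (window_minutes * 60.0))
    load = 0.0
    for _ in range(int(duration_s / period_s)):
        load = load * decay + active_tasks * (1 - decay)
    return load

if __name__ == '__main__':
    for minutes in (1, 5, 15):
        print('%2d-minute average after an hour: %.2f'
              % (minutes, simulate(minutes)))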
I have never had this before, and since I have no idea what is causing the startup crashing, I am assuming at this point that the two are likely related.
What I would like to do is identify (and hopefully eliminate) whatever is causing that false load average and my crashing issue.
So far I have been unable to identify any process that could be waiting on some resource and generating that load average.
I would very much appreciate it if someone could help me to try and track it down.
top shows all processes pretty much always sleeping, with some occasionally popping to the top, which I think is pretty normal. CPU usage mostly shows 100% idle, with very occasional dips to 99% or so.
nmon doesn't show me much; everything just looks idle.
iotop shows pretty much no traffic whatsoever (again, just very occasional spots of disk access).
The interrupt frequency seems low, way below 100/sec from what I can see.
I saw numerous Google discussions suggesting this:
echo 100 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
...no effect.
The RAM in the server is ECC and passes testing.
The server install was 'minimal' (the F4 option) with the OpenSSH server ticked during install.
I installed a few packages afterwards, including vim, bcache-tools, bridge-utils, qemu, software-properties-common, open-iscsi, qemu-kvm, cpu-checker, socat, ntp and nodejs. (I think that is about it.)
I have tried disabling and removing the bcache kernel module: no effect.
I stopped the iscsi service: no effect (although there is absolutely nothing configured on this server yet).
I will leave it there before this gets insanely long. If anyone could help me try to figure this out it would be very much appreciated.
Cheers,
James
The load average of 1.0 is an artefact of the bcache writeback thread staying in uninterruptible sleep. It may be corrected in kernels 3.19 or newer. See this Debian bug report, for instance.
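A side note that makes the artefact less mysterious: Linux's load average counts tasks in uninterruptible sleep (state D) in addition to runnable ones, so a single kernel thread parked in D state contributes a constant 1.00 to an otherwise idle machine. A minimal sketch (assuming a standard /proc layout) to list such tasks and confirm which thread it is:

# Sketch: list processes currently in uninterruptible sleep ("D" state).
# Assumes the usual /proc/<pid>/stat layout; the comm field is read naively,
# which is fine for kernel threads but can break on names containing spaces.
import os

def d_state_tasks():
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/stat' % pid) as f:
                fields = f.read().split()
            # fields: pid, (comm), state, ...
            if fields[2] == 'D':
                yield pid, fields[1].strip('()')
        except (IOError, OSError):
            pass  # the process exited while we were scanning

if __name__ == '__main__':
    for pid, comm in d_state_tasks():
        print('%s %s' % (pid, comm))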

LAMP on CentOS 6 sporadic timeouts

We have a few servers recently moved to a new provider (a well-known German one).
The configuration is the same on all of them: i7-2600 CPU, 16 GB RAM, 1 Gbit cards (connected to the router at 100 Mbit).
The OS is CentOS 6 and the application is LAMP (Apache 2.2.15, PHP 5.3.8 with APC 3.1.9, MySQL 5.5.18, and memcached daemons running on each machine).
The PHP pages are called by a proxy component written in Java (100-300 times/sec, depending on the number of users).
There is no swapping on the servers and no warnings in /var/log/messages. The load average is about 0.5-1.0 on the application servers and 2.0-3.0 on MySQL. There are no bottlenecks in the application (we gather metrics; the standard time needed to render a response is always around 0.015 seconds).
The problem is the following: sporadically, we see timeouts in the proxy component, coming in bursts lasting 2-3 seconds.
The timeouts often equal 3000 ms, sometimes 9000 ms, and rarely 21000 ms (is this somehow connected to SYN packets? See the check sketched below).
This happens even if the proxy component is placed on the same machine as the PHP application (Apache+PHP).
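The 3/9/21-second pattern does look like classic TCP SYN retransmission: with the 3-second initial retransmission timeout used by older kernels such as the CentOS 6 one, a connection that needs a second, third or fourth SYN waits roughly 3, 9 or 21 seconds in total. One hypothesis worth checking is that Apache's listen backlog overflows during the bursts, so the kernel drops SYNs and the Java proxy has to retransmit. A sketch (assuming the standard TcpExt counters in /proc/net/netstat) that reads the relevant counters; if they climb during a timeout burst, the accept queue is the bottleneck:

# Sketch: read the listen-queue drop counters from /proc/net/netstat.
# The TcpExt section is a header line followed by a line of values.
def listen_queue_counters(path='/proc/net/netstat'):
    with open(path) as f:
        lines = f.read().splitlines()
    for header, values in zip(lines, lines[1:]):
        if header.startswith('TcpExt:') and values.startswith('TcpExt:'):
            counters = dict(zip(header.split()[1:],
                                map(int, values.split()[1:])))
            return {name: counters.get(name, 0)
                    for name in ('ListenOverflows', 'ListenDrops')}
    return {}

if __name__ == '__main__':
    print(listen_queue_counters())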
We also noticed the following:
During these timeouts, the MySQL threads have 'Reading from net' status.
During the timeouts, the Apache status page fills up quickly (within 1-3 seconds) with 'W' processes (so all processes end up in 'W' status, some in 'C').
Timeouts mostly appear when traffic increases (in the evening), and the problem disappears when traffic starts going down (evening into night).
During the timeouts, the load average increases to 5.0-20.0.
Things I have tried that did not help:
Playing a lot with the sysctl net variables (somaxconn, buffers, and so on)
Turning off the firewall
Turning off APC (disabled its usage in the code)
Switching to persistent connections in PHP and from MySQL to MySQLi
Just now I found that iperf shows a drop in bandwidth during the timeouts:
------------------------------------------------------------
Client connecting to localhost, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 54006 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
...
[ 3] 266.0-266.5 sec 24.0 MBytes 402 Mbits/sec
[ 3] 266.5-267.0 sec 24.4 MBytes 410 Mbits/sec
[ 3] 267.0-267.5 sec 24.0 MBytes 402 Mbits/sec
[ 3] 267.5-268.0 sec 24.4 MBytes 410 Mbits/sec
[ 3] 268.0-268.5 sec 24.0 MBytes 402 Mbits/sec
[ 3] 268.5-269.0 sec 18.6 MBytes 312 Mbits/sec
[ 3] 269.0-269.5 sec 2.42 MBytes 40.6 Mbits/sec
[ 3] 269.5-270.0 sec 7.87 MBytes 132 Mbits/sec
[ 3] 270.0-270.5 sec 2.30 MBytes 38.5 Mbits/sec
[ 3] 270.5-271.0 sec 2.84 MBytes 47.7 Mbits/sec
[ 3] 271.0-271.5 sec 5.59 MBytes 93.8 Mbits/sec
[ 3] 271.5-272.0 sec 3.42 MBytes 57.4 Mbits/sec
[ 3] 272.0-272.5 sec 2.83 MBytes 47.5 Mbits/sec
[ 3] 272.5-273.0 sec 13.5 MBytes 227 Mbits/sec
[ 3] 273.0-273.5 sec 24.2 MBytes 407 Mbits/sec
[ 3] 273.5-274.0 sec 24.1 MBytes 404 Mbits/sec
[ 3] 274.0-274.5 sec 24.3 MBytes 408 Mbits/sec
...
Note that only the iperf client was launched, with the command "iperf -c localhost -i0.5 -b5000000000 -t3000".
What could be causing such timeouts? Is this CentOS-related?
Thanks,
Arsen

Nagios check_ntp_peer not working

I am running a virtualized (VMware) Debian system (2.6.26-2-686) which I monitor through Nagios. Lately, I have been getting the following critical error (reported by the check_ntp_peer script):
NTP CRITICAL: Server not synchronized, Offset unknown
It caught my attention that none of the lines output by the ntpq -pn command has a star (*):
remote refid st t when poll reach delay offset jitter
==============================================================================
200.144.121.33 193.204.114.232 2 u 1 64 1 187.298 -34742. 32.024
146.164.53.65 200.20.186.75 2 u 2 64 1 185.574 -34716. 0.001
200.160.0.8 200.160.7.186 2 u 1 64 1 186.229 -34734. 0.001
187.49.33.13 .INIT. 16 u - 64 0 0.000 0.000 0.001
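For context: ntpd marks the peer it is actually synchronized to with a '*' tally in the first column, so output with no '*' (and a huge negative offset on every reachable server, as above) means the daemon has not selected a sync peer, which is exactly the condition check_ntp_peer is reporting. A sketch of an equivalent manual check (assuming ntpq from the ntp package is on the PATH):

# Sketch: report the currently selected peer, if any, by looking for the
# '*' tally in `ntpq -pn` output. This mirrors the "not synchronized"
# condition rather than reusing the plugin's own query mechanism.
import subprocess

def selected_peer():
    out = subprocess.check_output(['ntpq', '-pn'], universal_newlines=True)
    for line in out.splitlines():
        if line.startswith('*'):
            return line
    return None

if __name__ == '__main__':
    peer = selected_peer()
    print(peer if peer else 'no peer selected - ntpd is not synchronized')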
Any clue?
Here is the ntp.conf:
tinker panic 0
driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
server 0.debian.pool.ntp.org iburst dynamic
server 1.debian.pool.ntp.org iburst dynamic
server 2.debian.pool.ntp.org iburst dynamic
server 3.debian.pool.ntp.org iburst dynamic
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
restrict 127.0.0.1
restrict ::1
So, any idea of what the problem could be?
Thanks in advance,
Wilmer
I had similar problems with Ubuntu and ntp: time was drifting off dramatically, and Nagios reported NTP CRITICAL: Offset unknown.
Check the status of your VMware timesync:
#vmware-toolbox-cmd timesync status
Disabled
Enable it if you notice it is disabled:
#vmware-toolbox-cmd timesync enable
Enabled
This helped in my case and may be helpful in yours too. I think it is not in accordance with VMware best practices, but it works.
