High CPU Usage in php-fpm - linux

We have a very strange problem on our web project.
We use:
2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
12 GB memory
We get about 20 hits per second, 4-5 of which are heavy search requests.
We use nginx + php-fpm (5.3.22).
The MySQL server is installed on another machine.
Most of the time the load average is below 10 and CPU usage is around 50%.
Sometimes CPU usage climbs to about 95%, and after that the load average grows to 50 and more!
You can see the Load Average and CPU Usage graphs below (my reputation is too low to post images):
Load Average
CPU Usage
We have to reload php-fpm (/etc/init.d/php-fpm reload) to bring the situation back to normal.
This can happen 4-5 times per day.
I used strace to examine the situation.
Sorry for the long logs! Below is the output of strace -cp PID,
where PID is a random php-fpm process ID (we run 100 php-fpm processes).
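For reference, this is roughly how such a sample can be taken (a sketch; the 30-second window is an arbitrary choice, and strace prints the -c summary when it detaches):
# Attach to the php-fpm worker currently using the most CPU:
PID=$(ps -C php-fpm --no-headers -o pid --sort=-%cpu | head -n1)
timeout 30 strace -cp "$PID"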
These two results were captured at a moment of high CPU usage.
Process 17272 attached - interrupt to quit
Process 17272 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
65.56 0.008817 267 33 munmap
13.38 0.001799 900 2 clone
9.66 0.001299 2 589 read
7.43 0.000999 125 8 mremap
2.84 0.000382 1 559 96 access
0.59 0.000080 40 2 waitpid
0.29 0.000039 0 627 gettimeofday
0.16 0.000022 0 346 write
0.04 0.000006 0 56 getcwd
0.04 0.000005 0 348 poll
0.00 0.000000 0 55 open
0.00 0.000000 0 69 close
0.00 0.000000 0 17 chdir
0.00 0.000000 0 189 time
0.00 0.000000 0 28 lseek
0.00 0.000000 0 2 pipe
0.00 0.000000 0 17 times
0.00 0.000000 0 8 brk
0.00 0.000000 0 8 getrusage
0.00 0.000000 0 18 setitimer
0.00 0.000000 0 8 flock
0.00 0.000000 0 1 nanosleep
0.00 0.000000 0 11 rt_sigaction
0.00 0.000000 0 13 rt_sigprocmask
0.00 0.000000 0 6 pread64
0.00 0.000000 0 7 pwrite64
0.00 0.000000 0 33 mmap2
0.00 0.000000 0 18 4 stat64
0.00 0.000000 0 34 lstat64
0.00 0.000000 0 92 fstat64
0.00 0.000000 0 63 fcntl64
0.00 0.000000 0 53 clock_gettime
0.00 0.000000 0 1 socket
0.00 0.000000 0 1 1 connect
0.00 0.000000 0 9 accept
0.00 0.000000 0 1 send
0.00 0.000000 0 21 recv
0.00 0.000000 0 9 1 shutdown
0.00 0.000000 0 1 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.013448 3363 102 total
[root@hp-php ~]# strace -cp 30767
Process 30767 attached - interrupt to quit
Process 30767 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
52.88 0.016926 220 77 munmap
29.06 0.009301 2 4343 read
8.73 0.002794 466 6 clone
3.59 0.001149 0 5598 time
3.18 0.001017 0 3745 write
1.12 0.000358 0 7316 gettimeofday
0.64 0.000205 1 164 fcntl64
0.39 0.000124 21 6 waitpid
0.22 0.000070 0 1496 326 access
0.13 0.000041 0 3769 poll
0.03 0.000009 0 151 close
0.02 0.000008 0 114 clock_gettime
0.02 0.000007 0 110 getcwd
0.00 0.000000 0 112 open
0.00 0.000000 0 38 chdir
0.00 0.000000 0 47 lseek
0.00 0.000000 0 6 pipe
0.00 0.000000 0 38 times
0.00 0.000000 0 135 brk
0.00 0.000000 0 3 ioctl
0.00 0.000000 0 14 getrusage
0.00 0.000000 0 38 setitimer
0.00 0.000000 0 19 flock
0.00 0.000000 0 40 mlock
0.00 0.000000 0 40 munlock
0.00 0.000000 0 6 nanosleep
0.00 0.000000 0 27 rt_sigaction
0.00 0.000000 0 31 rt_sigprocmask
0.00 0.000000 0 13 pread64
0.00 0.000000 0 18 pwrite64
0.00 0.000000 0 78 mmap2
0.00 0.000000 0 111 10 stat64
0.00 0.000000 0 49 lstat64
0.00 0.000000 0 182 fstat64
0.00 0.000000 0 8 socket
0.00 0.000000 0 8 5 connect
0.00 0.000000 0 19 accept
0.00 0.000000 0 7 send
0.00 0.000000 0 66 recv
0.00 0.000000 0 3 recvfrom
0.00 0.000000 0 20 1 shutdown
0.00 0.000000 0 5 setsockopt
0.00 0.000000 0 4 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.032009 28080 342 total
Yes, our scripts read a lot of data; that is normal.
But why does munmap take so long?! And when we have the problem, munmap is ALWAYS at the top!
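One way to dig deeper is to watch a worker's address space directly while the problem is happening; a rough sketch, reusing PID 17272 from the trace above:
# Mapping count over time; growth followed by mass unmapping would
# match the munmap-heavy trace:
watch -n1 'wc -l < /proc/17272/maps'
# Snapshot of the largest mappings (in pmap -x output, Kbytes is column 2):
pmap -x 17272 | sort -k2 -rn | head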
For comparison, here is the result of tracing a random php-fpm process in the normal situation:
Process 28606 attached - interrupt to quit
Process 28606 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
45.72 0.001816 1 2601 read
32.88 0.001306 435 3 clone
9.19 0.000365 0 2175 write
6.95 0.000276 0 7521 time
2.24 0.000089 0 4158 gettimeofday
2.01 0.000080 1 114 brk
0.28 0.000011 0 2166 poll
0.20 0.000008 0 833 155 access
0.20 0.000008 0 53 recv
0.18 0.000007 2 3 waitpid
0.15 0.000006 0 18 munlock
0.00 0.000000 0 69 open
0.00 0.000000 0 96 close
0.00 0.000000 0 29 chdir
0.00 0.000000 0 36 lseek
0.00 0.000000 0 3 pipe
0.00 0.000000 0 29 times
0.00 0.000000 0 10 getrusage
0.00 0.000000 0 5 munmap
0.00 0.000000 0 1 ftruncate
0.00 0.000000 0 29 setitimer
0.00 0.000000 0 1 sigreturn
0.00 0.000000 0 11 flock
0.00 0.000000 0 18 mlock
0.00 0.000000 0 5 nanosleep
0.00 0.000000 0 19 rt_sigaction
0.00 0.000000 0 24 rt_sigprocmask
0.00 0.000000 0 6 pread64
0.00 0.000000 0 12 pwrite64
0.00 0.000000 0 69 getcwd
0.00 0.000000 0 5 mmap2
0.00 0.000000 0 35 7 stat64
0.00 0.000000 0 41 lstat64
0.00 0.000000 0 96 fstat64
0.00 0.000000 0 108 fcntl64
0.00 0.000000 0 87 clock_gettime
0.00 0.000000 0 5 socket
0.00 0.000000 0 4 4 connect
0.00 0.000000 0 16 2 accept
0.00 0.000000 0 8 send
0.00 0.000000 0 15 shutdown
0.00 0.000000 0 4 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.003972 20541 168 total
Process 29168 attached - interrupt to quit
Process 29168 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
54.81 0.002366 1 1717 read
26.41 0.001140 1 1696 poll
8.29 0.000358 0 1662 write
7.37 0.000318 2 131 121 stat64
1.53 0.000066 0 3249 gettimeofday
1.18 0.000051 0 746 525 access
0.23 0.000010 0 27 fcntl64
0.19 0.000008 0 62 brk
0.00 0.000000 0 1 restart_syscall
0.00 0.000000 0 7 open
0.00 0.000000 0 16 close
0.00 0.000000 0 3 chdir
0.00 0.000000 0 1039 time
0.00 0.000000 0 1 lseek
0.00 0.000000 0 3 times
0.00 0.000000 0 3 ioctl
0.00 0.000000 0 1 getrusage
0.00 0.000000 0 4 munmap
0.00 0.000000 0 3 setitimer
0.00 0.000000 0 1 sigreturn
0.00 0.000000 0 1 flock
0.00 0.000000 0 1 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 2 pwrite64
0.00 0.000000 0 3 getcwd
0.00 0.000000 0 4 mmap2
0.00 0.000000 0 7 fstat64
0.00 0.000000 0 9 clock_gettime
0.00 0.000000 0 6 socket
0.00 0.000000 0 5 1 connect
0.00 0.000000 0 3 2 accept
0.00 0.000000 0 5 send
0.00 0.000000 0 64 recv
0.00 0.000000 0 3 recvfrom
0.00 0.000000 0 2 shutdown
0.00 0.000000 0 1 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.004317 10489 649 total
As you can see, munmap is not at the top.
We are now out of ideas for how to solve this problem :(
We have examined the following potential causes, and the answer to each is "NO":
additional user activity
long script execution (several seconds)
swap usage
Can you help us?
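Until the root cause is found, the manual reload can be automated as a stopgap; a sketch, where the load threshold of 40 is an arbitrary pick between the normal load (< 10) and the pathological one (50+):
#!/bin/sh
# Stopgap watchdog: reload php-fpm whenever the 1-minute load average
# crosses the threshold. This works around the symptom only.
while sleep 60; do
    load=$(cut -d' ' -f1 /proc/loadavg)
    if [ "${load%.*}" -gt 40 ]; then
        /etc/init.d/php-fpm reload
    fi
done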

Related

Finding the source of latency in an RPC system

As the name suggests, I am using a simple RPC system between a PC (Windows x64) and an embedded Linux PC running Ubuntu. The embedded Linux PC is the RPC server and the PC is the RPC client. The RPC framework is erpc.
I have noticed that the transaction rate I am getting is particularly low - on the order of 20 transactions/sec.
The issue is definitely not hardware-related, as I have an alternate RPC system (which I'm trying to replace with the contentious one) that can easily exceed 1000 transactions/sec using the exact same hardware configuration.
To further prove this, I also wrote a simple Python script which acts as a simple socket client or server depending on a switch. I run it on the embedded machine as a server and as a client on the PC. The script simply has the client send some random data to the server, which in turn sends the data back. The client does this a few hundred times and determines the transaction rate from that. The amount of data transmitted is of the same order as what erpc uses. Using this setup I can get 3000+ transactions/sec.
The RPC system in question is half duplex. Only a single thread is used: the server recvs, processes the request, and sends the response in a loop.
Only a single socket is used for the duration of the test, i.e. no close or accept occurs during the loop. No other IO occurs; or at least, I have refactored it for the purposes of these tests so that it does no other IO.
On the Windows client side, I have a Python unit test which I ran with profiling enabled. The results don't seem to indicate that the problem is on the client:
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 23.998 23.998 runner.py:105(pytest_runtest_call)
1 0.000 0.000 23.998 23.998 python.py:1313(runtest)
1 0.000 0.000 23.998 23.998 __init__.py:603(__call__)
1 0.000 0.000 23.998 23.998 __init__.py:219(_hookexec)
1 0.000 0.000 23.998 23.998 __init__.py:213(<lambda>)
1 0.000 0.000 23.998 23.998 callers.py:151(_multicall)
1 0.000 0.000 23.998 23.998 python.py:183(pytest_pyfunc_call)
1 0.003 0.003 23.998 23.998 test_static_if.py:4(test_read_version)
400 0.014 0.000 23.993 0.060 client.py:16(get_version)
400 0.017 0.000 23.942 0.060 client.py:79(perform_request)
400 0.006 0.000 23.828 0.060 transport.py:75(receive)
800 0.016 0.000 23.820 0.030 transport.py:139(_base_receive)
800 23.803 0.030 23.803 0.030 {method 'recv' of '_socket.socket' objects}
400 0.007 0.000 0.061 0.000 transport.py:65(send)
400 0.002 0.000 0.053 0.000 transport.py:135(_base_send)
400 0.050 0.000 0.050 0.000 {method 'sendall' of '_socket.socket' objects}
400 0.012 0.000 0.032 0.000 basic_codec.py:113(start_read_message)
400 0.006 0.000 0.015 0.000 basic_codec.py:39(start_write_message)
1600 0.007 0.000 0.015 0.000 basic_codec.py:130(_read)
800 0.002 0.000 0.012 0.000 basic_codec.py:156(read_uint32)
The server is a C++ application. I have tried profiling it with gprof, but the results show practically no time consumed by the application at all. After reading up a bit more on how gprof works, and learning that gprof does not accumulate time spent in system calls, this indicates that the program is (obviously) IO-bound and that the vast majority of its time is spent in blocking system calls.
I won't add the entire output here, for brevity, but below is an excerpt:
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
0.00 0.00 0.00 2407 0.00 0.00 erpc::MessageBuffer::get()
0.00 0.00 0.00 2400 0.00 0.00 erpc::MessageBuffer::setUsed(unsigned short)
0.00 0.00 0.00 2000 0.00 0.00 erpc::MessageBuffer::getUsed() const
0.00 0.00 0.00 1600 0.00 0.00 erpc::MessageBuffer::Cursor::write(void const*, unsigned int)
0.00 0.00 0.00 1201 0.00 0.00 erpc::Codec::getBuffer()
0.00 0.00 0.00 803 0.00 0.00 erpc::MessageBuffer::Cursor::set(erpc::MessageBuffer*)
0.00 0.00 0.00 803 0.00 0.00 erpc::MessageBuffer::getLength() const
0.00 0.00 0.00 802 0.00 0.00 erpc::Codec::reset()
0.00 0.00 0.00 801 0.00 0.00 erpc::TCPTransport::underlyingReceive(unsigned char*, unsigned int)
0.00 0.00 0.00 800 0.00 0.00 erpc::TCPTransport::underlyingSend(unsigned char const*, unsigned int)
0.00 0.00 0.00 800 0.00 0.00 erpc::BasicCodec::read(unsigned int*)
0.00 0.00 0.00 800 0.00 0.00 erpc::BasicCodec::write(int)
0.00 0.00 0.00 800 0.00 0.00 erpc::BasicCodec::write(unsigned int)
0.00 0.00 0.00 800 0.00 0.00 erpc::MessageBuffer::Cursor::read(void*, unsigned int)
0.00 0.00 0.00 800 0.00 0.00 erpc::Service::getServiceId() const
0.00 0.00 0.00 403 0.00 0.00 erpc::Service::getNext()
0.00 0.00 0.00 401 0.00 0.00 erpc::SimpleServer::runInternal(erpc::Codec*)
0.00 0.00 0.00 401 0.00 0.00 erpc::TCPTransport::accept()
0.00 0.00 0.00 401 0.00 0.00 erpc::TCPTransport::receive(erpc::MessageBuffer*)
0.00 0.00 0.00 401 0.00 0.00 erpc::FramedTransport::receive(erpc::MessageBuffer*)
0.00 0.00 0.00 400 0.00 0.00 write_p_version_t_struct(erpc::Codec*, p_version_t const*)
0.00 0.00 0.00 400 0.00 0.00 StaticIF_service::handleInvocation(unsigned int, unsigned int, erpc::Codec*, erpc::MessageBufferFactory*)
0.00 0.00 0.00 400 0.00 0.00 StaticIF_service::get_version_shim(erpc::Codec*, erpc::MessageBufferFactory*, unsigned int)
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::endReadMessage()
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::endWriteStruct()
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::endWriteMessage()
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::startReadMessage(erpc::_message_type*, unsigned int*, unsigned int*, unsigned int*)
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::startWriteStruct()
0.00 0.00 0.00 400 0.00 0.00 erpc::BasicCodec::startWriteMessage(erpc::_message_type, unsigned int, unsigned int, unsigned int)
0.00 0.00 0.00 400 0.00 0.00 erpc::FramedTransport::send(erpc::MessageBuffer*)
0.00 0.00 0.00 400 0.00 0.00 erpc::MessageBufferFactory::prepareServerBufferForSend(erpc::MessageBuffer*)
0.00 0.00 0.00 400 0.00 0.00 erpc::Server::processMessage(erpc::Codec*, erpc::_message_type&)
0.00 0.00 0.00 400 0.00 0.00 erpc::Server::findServiceWithId(unsigned int)
0.00 0.00 0.00 400 0.00 0.00 get_version
0.00 0.00 0.00 5 0.00 0.00 erpc::ManuallyConstructed<erpc::SimpleServer>::get()
0.00 0.00 0.00 4 0.00 0.00 operator new(unsigned int, void*)
0.00 0.00 0.00 3 0.00 0.00 erpc::ManuallyConstructed<erpc::SimpleServer>::operator->()
0.00 0.00 0.00 2 0.00 0.00 erpc::ManuallyConstructed<erpc::TCPTransport>::get()
0.00 0.00 0.00 2 0.00 0.00 erpc::ManuallyConstructed<erpc::BasicCodecFactory>::get()
0.00 0.00 0.00 2 0.00 0.00 erpc::Server::addService(erpc::Service*)
0.00 0.00 0.00 2 0.00 0.00 erpc::Service::Service(unsigned int)
0.00 0.00 0.00 2 0.00 0.00 erpc::Service::~Service()
0.00 0.00 0.00 2 0.00 0.00 erpc_add_service_to_server
0.00 0.00 0.00 1 0.00 0.00 _GLOBAL__sub_I__Z5usagev
Using strace, the problem becomes apparent in the first recv of every request. For context, an initial header is transmitted first, which indicates the amount of data the request proper contains.
Here are a couple of excerpts from the output (the full output is 2000 lines).
I used the -r, -T and -C switches, which respectively show a relative timestamp for each call, print the time spent in each call, and produce the summary.
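(The invocation was along these lines; the server binary name is an assumption:)
# -r: relative timestamps, -T: time spent in each call,
# -C: like -c, but also print the regular output while running.
strace -r -T -C ./rpc_server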
In the transaction loop:
0.000161 recv(4, "\10\0", 2, 0) = 2 <0.059478>
0.059589 recv(4, "q\1\1\2\0\1\0\0", 8, 0) = 8 <0.000047>
0.000167 send(4, "\20\0", 2, 0) = 2 <0.000073>
0.000183 send(4, "q\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000050>
0.000160 recv(4, "\10\0", 2, 0) = 2 <0.059513>
0.059625 recv(4, "r\1\1\2\0\1\0\0", 8, 0) = 8 <0.000046>
0.000167 send(4, "\20\0", 2, 0) = 2 <0.000071>
0.000182 send(4, "r\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000049>
0.000161 recv(4, "\10\0", 2, 0) = 2 <0.059059>
0.059172 recv(4, "s\1\1\2\0\1\0\0", 8, 0) = 8 <0.000047>
0.000183 send(4, "\20\0", 2, 0) = 2 <0.000073>
0.000183 send(4, "s\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000049>
0.000161 recv(4, "\10\0", 2, 0) = 2 <0.059330>
0.059441 recv(4, "t\1\1\2\0\1\0\0", 8, 0) = 8 <0.000046>
0.000166 send(4, "\20\0", 2, 0) = 2 <0.000072>
0.000182 send(4, "t\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000050>
0.000163 recv(4, "\10\0", 2, 0) = 2 <0.059506>
0.059618 recv(4, "u\1\1\2\0\1\0\0", 8, 0) = 8 <0.000046>
0.000166 send(4, "\20\0", 2, 0) = 2 <0.000070>
0.000181 send(4, "u\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000049>
0.000160 recv(4, "\10\0", 2, 0) = 2 <0.059359>
0.059488 recv(4, "v\1\1\2\0\1\0\0", 8, 0) = 8 <0.000048>
0.000175 send(4, "\20\0", 2, 0) = 2 <0.000077>
0.000189 send(4, "v\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000051>
0.000165 recv(4, "\10\0", 2, 0) = 2 <0.059496>
0.059612 recv(4, "w\1\1\2\0\1\0\0", 8, 0) = 8 <0.000046>
0.000170 send(4, "\20\0", 2, 0) = 2 <0.000074>
0.000182 send(4, "w\1\1\2\2\1\0\0\235\256\322\2664\22\0\0", 16, 0) = 16 <0.000050>
The summary:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.59 0.010000 12 801 recv
1.41 0.000143 0 800 send
0.00 0.000000 0 12 read
0.00 0.000000 0 3 write
0.00 0.000000 0 25 19 open
0.00 0.000000 0 7 close
0.00 0.000000 0 1 execve
0.00 0.000000 0 8 lseek
0.00 0.000000 0 6 6 access
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 readlink
0.00 0.000000 0 1 munmap
0.00 0.000000 0 2 setitimer
0.00 0.000000 0 1 uname
0.00 0.000000 0 9 mprotect
0.00 0.000000 0 5 writev
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 16 mmap2
0.00 0.000000 0 16 15 stat64
0.00 0.000000 0 6 fstat64
0.00 0.000000 0 1 socket
0.00 0.000000 0 1 bind
0.00 0.000000 0 1 listen
0.00 0.000000 0 1 accept
0.00 0.000000 0 1 setsockopt
0.00 0.000000 0 1 set_tls
------ ----------- ----------- --------- --------- ----------------
100.00 0.010143 1731 40 total
In passing, I am not sure I completely understand the summary: it suggests that recv is very quick, compared with the time shown against each individual recv call. (If I understand strace correctly, the -c summary counts system CPU time rather than wall-clock time; the -w switch reports wall-clock latencies instead, which would match the per-call timings.)
It looks like the time spent in the first recv is what is killing the RPC system, at nearly 60 ms per call. Am I misreading this? I am not sure of the units, but I am guessing seconds.
So, after profiling both the client and the server, it appears that the vast majority of time is spent in recv.
If we assume the extra time spent in the initial recv on the server side was because the client was still processing something and hadn't sent the request yet, that should have shown up when profiling the client.
Any suggestions you may have as to how to debug this further would be greatly appreciated.
Thanks!

Figuring out Linux memory usage

I've got some weird Linux memory usage that I'm trying to figure out.
I've got two processes: nxtcapture and nxtexport. Neither of them really allocates much memory, but they each mmap a 1 TB file. nxtexport makes no heap allocations (apart from during startup). nxtcapture writes sequentially to the file and nxtexport reads sequentially. Since nxtexport reads from the tail of what nxtcapture writes, I don't really have any read IO.
ing992:~# iostat -m
Linux 4.4.52-nxt (ing992) 05/25/17 _x86_64_ (32 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
29.17 1.99 0.96 0.06 0.00 67.82
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
loop0 0.02 0.00 0.00 2 0
sdf 16.47 0.06 0.85 4207 61442
sdf1 0.00 0.00 0.00 5 0
sdf2 0.01 0.00 0.00 77 0
sdf3 16.45 0.06 0.85 4115 61442
sdf4 0.00 0.00 0.00 7 0
sde 15.45 0.01 0.85 1032 61442
sde1 0.00 0.00 0.00 5 0
sde2 0.00 0.00 0.00 0 0
sde3 15.44 0.01 0.85 1017 61442
sde4 0.00 0.00 0.00 7 0
sdb 43.08 0.00 15.72 22 1136368
sda 43.07 0.00 15.72 21 1136406
sdc 43.42 0.04 15.72 2711 1136332
sdd 43.07 0.00 15.72 20 1136301
md127 0.01 0.00 0.00 77 0
md126 23.77 0.07 0.85 5132 61145
This is all great. However, looking at the memory usage (free -m output below), I can see that more than half of my memory is unavailable?! How is this possible? I understand that mmap will keep pages cached, but shouldn't such (non-dirty) pages be counted as available? What's going on here? How can I debug this?
free -m
total used free shared buffers cached
Mem: 32020 31608 412 221 13 9655
-/+ buffers/cache: 21939 10081
Swap: 0 0 0
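One way to attack the "how can I debug this" part is to break the used figure down from /proc/meminfo and the per-process smaps; a sketch, using the process names from above:
# Kernel-wide counters that commonly account for non-reclaimable memory:
grep -E '^(MemAvailable|Dirty|Writeback|Mlocked|Shmem|Slab|PageTables)' /proc/meminfo
# Per-process breakdown for the two mmap-heavy processes:
for p in $(pgrep -x nxtcapture) $(pgrep -x nxtexport); do
    echo "PID $p:"
    awk '/^(Rss|Pss|Private_Dirty|Locked):/ {s[$1]+=$2}
         END {for (k in s) printf "  %s %d kB\n", k, s[k]}' "/proc/$p/smaps"
done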

Bizarre netcat behavior, random socket?

Running Debian Jessie Linux. While learning nmap, I was using nc (netcat) in listen mode so that nmap would find an open port. This was all done on a single host.
I got the nc syntax wrong, so the command was 'nc -l 3306' where it should have been 'nc -l -p 3306'. nmap then did not report 3306 as open, but it DID show SOME port open, say 40000.
So I did 'lsof | grep nc', and indeed nc had 40000, or some similar high port, open. Bizarre.
I then straced nc. With the bad syntax 'nc -l 3306', the relevant system-call sequence is socket, listen, accept. Note that there is no bind call. The correct 'nc -l -p 3306' command produces the expected socket, bind, listen, accept.
So, my point is, you can open a socket for accepting connections without specifying a port! Obviously you don't know WHICH port you will get, but you DO get one. Now, if user space (i.e. nc) isn't supplying ANY bind information, that implies to me that the kernel is picking the port?? I first expected the port to come from undefined stack memory in the nc code, but if no bind call is made, those values would never be transferred to the kernel at all.
Bizarre!
Not weird at all; that's how netcat (nc) works (option handling changed between versions). Please refer to this document:
netcat -l -p port [-options] [hostname] [port]
If you want to listen on a specific port you must use "-p".
Executing this:
netcat -l 3306
is just the same as executing:
netcat -l
which listens on a random port: when the socket has not been bound, the kernel assigns it a free ephemeral port at listen() time.
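You can confirm this from the shell; a quick sketch (assuming a netcat variant that accepts a bare -l, as above):
# Start an unbound listener, then ask which port the kernel assigned:
nc -l &
sleep 1
ss -ltnp | grep "pid=$!,"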
Now, you are right, there's no bind call, but other system calls are missing as well:
Calling with bad syntax
mortiz@florida:~/Documents/projects$ strace -c netcat -l 4000
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
15.09 0.000040 8 5 read
10.94 0.000029 7 4 openat
10.19 0.000027 4 6 rt_sigaction
9.81 0.000026 26 1 stat
9.81 0.000026 8 3 brk
7.55 0.000020 5 4 close
7.55 0.000020 20 1 socket
6.04 0.000016 4 4 fstat
6.04 0.000016 16 1 alarm
5.28 0.000014 7 2 getpid
5.28 0.000014 7 2 setsockopt
4.53 0.000012 12 1 listen
1.89 0.000005 5 1 rt_sigprocmask
0.00 0.000000 0 7 mmap
0.00 0.000000 0 4 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000265 50 1 total
Calling with the right syntax
mortiz@florida:~/Documents/projects$ strace -c netcat -l -p 4000
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
33.81 0.000334 8 41 32 openat
21.76 0.000215 6 33 28 stat
8.10 0.000080 5 14 mmap
5.97 0.000059 5 11 close
5.77 0.000057 4 14 read
4.76 0.000047 15 3 munmap
4.05 0.000040 20 2 2 connect
3.95 0.000039 4 9 fstat
3.34 0.000033 5 6 rt_sigaction
2.83 0.000028 9 3 socket
2.73 0.000027 4 6 mprotect
1.21 0.000012 2 6 lseek
0.61 0.000006 3 2 getpid
0.51 0.000005 5 1 listen
0.30 0.000003 3 1 rt_sigprocmask
0.30 0.000003 3 1 alarm
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 bind
0.00 0.000000 0 2 setsockopt
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000988 162 63 total
Calling with the right syntax (without a specific port)
mortiz@florida:~/Documents/projects$ strace -c netcat -l
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
17.62 0.000040 6 6 rt_sigaction
17.18 0.000039 7 5 read
13.66 0.000031 7 4 openat
12.33 0.000028 28 1 listen
9.69 0.000022 22 1 socket
7.93 0.000018 4 4 close
7.49 0.000017 8 2 setsockopt
3.96 0.000009 2 4 fstat
3.52 0.000008 8 1 rt_sigprocmask
3.52 0.000008 8 1 alarm
3.08 0.000007 3 2 getpid
0.00 0.000000 0 1 stat
0.00 0.000000 0 7 mmap
0.00 0.000000 0 4 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000227 50 1 total
So we can see that changing the parameters changes the system calls (which makes sense).
The fact that the port gets assigned without a bind() call could be down to differences between netcat implementations, as @UrielKatz mentioned in a comment.

Plot 3 dimensional unequal length of data

I am new to gnuplot, so excuse me if this looks simple. I have a data file like the one below, and from it I want to draw a mesh-style diagram (image omitted):
t x = 0.00 0.20 0.40 0.60 0.80 1.00
0.00 0.000000 0.640000 0.960000 0.960000 0.640000 0.000000
0.02 0.000000 0.480000 0.800000 0.800000 0.480000 0.000000
0.04 0.000000 0.400000 0.640000 0.640000 0.400000 0.000000
0.06 0.000000 0.320000 0.520000 0.520000 0.320000 0.000000
0.08 0.000000 0.260000 0.420000 0.420000 0.260000 0.000000
0.10 0.000000 0.210000 0.340000 0.340000 0.210000 0.000000
0.12 0.000000 0.170000 0.275000 0.275000 0.170000 0.000000
0.14 0.000000 0.137500 0.222500 0.222500 0.137500 0.000000
0.16 0.000000 0.111250 0.180000 0.180000 0.111250 0.000000
0.18 0.000000 0.090000 0.145625 0.145625 0.090000 0.000000
0.20 0.000000 0.072813 0.117813 0.117813 0.072813 0.000000
The equivalent GNU Octave command is something like this:
mesh(tplot,xplot,ttplot);
Well, as with many things, it is simple if you know how. This is straightforward to plot if you remove the x = and the t from the data file (see the sed sketch after the listing), e.g.:
0 0.00 0.20 0.40 0.60 0.80 1.00
0.00 0.000000 0.640000 0.960000 0.960000 0.640000 0.000000
0.02 0.000000 0.480000 0.800000 0.800000 0.480000 0.000000
0.04 0.000000 0.400000 0.640000 0.640000 0.400000 0.000000
0.06 0.000000 0.320000 0.520000 0.520000 0.320000 0.000000
0.08 0.000000 0.260000 0.420000 0.420000 0.260000 0.000000
0.10 0.000000 0.210000 0.340000 0.340000 0.210000 0.000000
0.12 0.000000 0.170000 0.275000 0.275000 0.170000 0.000000
0.14 0.000000 0.137500 0.222500 0.222500 0.137500 0.000000
0.16 0.000000 0.111250 0.180000 0.180000 0.111250 0.000000
0.18 0.000000 0.090000 0.145625 0.145625 0.090000 0.000000
0.20 0.000000 0.072813 0.117813 0.117813 0.072813 0.000000
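If you don't want to edit the file by hand, a one-line sed can do the stripping (a sketch; assumes the original file is named data.orig and only the header row starts with "t x ="):
# Replace the leading "t x =" of the header row with a dummy 0,
# giving the matrix layout gnuplot expects:
sed 's/^t[[:space:]]*x[[:space:]]*=[[:space:]]*/0 /' data.orig > data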
Then the data can be interpreted as a "nonuniform" matrix, even though it is actually uniform. This is useful because gnuplot then reads the first row and first column correctly. See help matrix and help matrix nonuniform for more. For example:
echo 'splot "data" nonuniform matrix with lines' | gnuplot --persist
This gives me a basic surface plot (image omitted).
To make it resemble the output of the GNU Octave mesh command, do something like this:
set xlabel "x"
set ylabel "t"
set zlabel "u"
set view 20,210
set border 4095 lw 2
set hidden3d
set xyplane 0
set autoscale fix
set nokey
set notics
splot "data" nonuniform matrix lt -1 lw 2 with lines
Which results in a mesh-style plot (image omitted).

I want to store the output of the below command in a variable

I want to store the output of the below command in a variable.
This is my command:
load=$(sar -q | awk -F'[\t] +\\' '{print $1,$2,$3,$4,$5,$6,$7 }')
When I try to print the stored variable, I get output like this:
Linux
3.13.0-45-generic
(vr1tel-Inspiron-3542)
03/27/2015
_i686_
(4
CPU)
06:47:44
AM
LINUX
RESTART
06:55:01
AM
runq-sz
plist-sz
ldavg-1
ldavg-5
ldavg-15
blocked
07:05:01
AM
2
449
1.08
1.01
0.76
0
07:15:01
AM
3
438
1.09
1.11
0.93
0
07:25:01
AM
0
434
0.29
0.69
0.85
0
Average:
2
440
0.82
0.94
but I want the output like this:
Linux 3.13.0-45-generic (vr1tel-Inspiron-3542) 03/27/2015 i686 (4 CPU)
06:47:44 AM LINUX RESTART
06:55:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
07:05:01 AM 2 449 1.08 1.01 0.76 0
07:15:01 AM 3 438 1.09 1.11 0.93 0
07:25:01 AM 0 434 0.29 0.69 0.85 0
Average: 2 440 0.82 0.94 0.85 0
08:08:13 AM LINUX RESTART
08:34:19 AM LINUX RESTART
08:35:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
08:45:01 AM 0 437 0.26 0.51 0.46 1
08:55:01 AM 0 418 0.30 0.32 0.40 0
09:05:01 AM 0 348 1.18 0.60 0.48 0
09:15:01 AM 0 364 0.23 0.55 0.55 0
09:25:01 AM 0 364 0.42 0.39 0.46 0
09:35:01 AM 0 439 0.33 0.26 0.34 0
09:45:01 AM 0 469 0.38 0.40 0.36 0
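The collapse into one token per line is classic unquoted expansion: the shell word-splits the variable's value on whitespace (newlines included), so when each resulting word is then printed separately you get the mess shown above. A minimal sketch of the quoting fix:
# Command substitution preserves inner newlines (it strips only
# trailing ones):
load=$(sar -q)
# Quoted expansion keeps the original line and column structure:
echo "$load"
# Unquoted (echo $load) would word-split and collapse the layout.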
