I'm writing a Rust app that uses a lot of threads. I noticed the CPU usage was high, so I ran top and then hit H to see the threads:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
247759 root 20 0 3491496 104400 64676 R 32.2 1.0 0:02.98 my_app
247785 root 20 0 3491496 104400 64676 S 22.9 1.0 0:01.89 llvmpipe-0
247786 root 20 0 3491496 104400 64676 S 21.9 1.0 0:01.71 llvmpipe-1
247792 root 20 0 3491496 104400 64676 S 20.9 1.0 0:01.83 llvmpipe-7
247789 root 20 0 3491496 104400 64676 S 20.3 1.0 0:01.60 llvmpipe-4
247790 root 20 0 3491496 104400 64676 S 20.3 1.0 0:01.64 llvmpipe-5
247787 root 20 0 3491496 104400 64676 S 19.9 1.0 0:01.70 llvmpipe-2
247788 root 20 0 3491496 104400 64676 S 19.9 1.0 0:01.61 llvmpipe-3
What are these llvmpipe-n threads? Why does my_app launch them? Are they even from my_app for sure?
As the link HHK posted explains, the llvmpipe threads come from your OpenGL driver, which is Mesa.
You said you are running this in a VM. VMs usually don't virtualize GPU hardware, so the Mesa OpenGL driver is doing software rendering. To achieve better performance, Mesa spawns threads to do parallel computations on the CPU.
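To confirm the threads really do belong to my_app, you can list them straight from /proc: every thread of a process shows up under /proc/<pid>/task/. A minimal sketch in Rust (the PID 247759 is taken from your top output above):

// List the threads of a process and their names via /proc/<pid>/task/<tid>/comm.
// If the llvmpipe-N names show up here, they are threads of my_app itself.
use std::fs;

fn main() -> std::io::Result<()> {
    let pid = 247759; // my_app's PID from the top output above
    for entry in fs::read_dir(format!("/proc/{}/task", pid))? {
        let entry = entry?;
        let comm = fs::read_to_string(entry.path().join("comm"))?;
        println!("tid {}: {}", entry.file_name().to_string_lossy(), comm.trim());
    }
    Ok(())
}

If the software rasterizer is unavoidable, Mesa's llvmpipe driver also honors the LP_NUM_THREADS environment variable, so setting e.g. LP_NUM_THREADS=2 before the GL context is created caps how many of these worker threads it spawns.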
I'm interested in obtaining the PID of a thread created inside a Rust program. As stated in the documentation, thread::id() does not work for this purpose. I found Get the current thread id and process id as integers?, which seemed like the answer, but my experiments show it doesn't work.
This is the code:
extern crate rand;
extern crate libc;

use std::thread::{self, Builder};
use std::process;
use rand::thread_rng;
use rand::RngCore;
use std::time::Duration;
use std::os::unix::thread::JoinHandleExt;
use libc::pthread_t;

fn main() {
    let main_pid = process::id();
    println!("This PID {}", main_pid);

    let b = Builder::new()
        .name(String::from("LongRunningThread"))
        .spawn(move || {
            let mut rng = thread_rng();
            let spawned_pid = process::id();
            println!("Spawned PID {}", spawned_pid);
            loop {
                let u = rng.next_u64() % 1000;
                println!("Processing request {}", u);
                thread::sleep(Duration::from_millis(u));
            }
        })
        .expect("Could not spawn worker thread");

    let p_threadid: pthread_t = b.as_pthread_t();
    println!("Spawned p_threadid {}", p_threadid);

    let thread_id = b.thread().id();
    println!("Spawned thread_id {:?}", thread_id);

    thread::sleep(Duration::from_millis(60_000));
}
The output from running the program inside a Linux machine is the following:
This PID 8597
Spawned p_threadid 139858455706368 <-- Clearly wrong
Spawned thread_id ThreadId(1) <-- Clearly wrong
Spawned PID 8597
Processing request 289
Processing request 476
Processing request 361
Processing request 567
The following is an excerpt from the output of htop in my system:
6164 1026 root 20 0 98M 7512 6512 S 0.0 0.0 0:00.03 │ ├─ sshd: dash [priv]
6195 6164 dash 20 0 98M 4176 3176 S 0.0 0.0 0:00.20 │ │ └─ sshd: dash@pts/11
6196 6195 dash 20 0 22964 5648 3408 S 0.0 0.0 0:00.09 │ │ └─ -bash
8597 6196 dash 20 0 2544 4 0 S 0.0 0.0 0:00.00 │ │ └─ ./process_priorities
8598 6196 dash 20 0 2544 4 0 S 0.0 0.0 0:00.00 │ │ └─ LongRunningThre
The PID I want from the spawned thread is 8598, but I can't figure out how to obtain it in a Rust program. Any ideas?
I found the answer using an existing crate called Palaver. It includes a gettid() that works across platforms (as_pthread_t above returns the opaque pthread handle, not the kernel thread ID, which is why those numbers looked wrong). The only caveat is that the default configuration of the crate uses nightly features, so if you are on stable, make sure to disable them like this: palaver = { version = "*", default-features = false }
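For reference, here is a minimal sketch of the same thing done by hand on Linux, going through the raw gettid syscall via the libc crate (palaver's gettid() wraps this kind of call portably):

extern crate libc;

use std::thread;

fn gettid() -> libc::pid_t {
    // gettid(2) has no glibc wrapper on older systems, so use syscall(2) directly.
    unsafe { libc::syscall(libc::SYS_gettid) as libc::pid_t }
}

fn main() {
    println!("main thread tid {}", gettid());
    let handle = thread::Builder::new()
        .name("LongRunningThread".into())
        .spawn(|| {
            // This is the per-thread ID that htop displays as a "PID".
            println!("worker tid {}", gettid());
        })
        .expect("Could not spawn worker thread");
    handle.join().unwrap();
}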
Now, when I run the code that uses gettid(), this is the output:
This PID 9009
Priority changed for thread
Spawned PID 9010
Processing request 803
Processing request 279
Processing request 624
And the output from my htop:
6164 1026 root 20 0 98M 7512 6512 S 0.0 0.0 0:00.03 │ ├─ sshd: dash [priv]
6195 6164 dash 20 0 98M 4176 3176 S 0.0 0.0 0:00.21 │ │ └─ sshd: dash@pts/11
6196 6195 dash 20 0 22964 5648 3408 S 0.0 0.0 0:00.10 │ │ └─ -bash
9009 6196 dash 20 0 2544 4 0 S 0.0 0.0 0:00.00 │ │ └─ ./process_priorities
9010 6196 dash 20 0 2544 4 0 S 0.0 0.0 0:00.00 │ │ └─ LongRunningThre
I'm creating a node program to return the output of the Linux top command. It is working fine; the only issue is that the command name is truncated: instead of the full command name like /usr/local/libexec/netdata/plugins.d/apps.plugin 1, it returns /usr/local+.
My code
const topparser = require("topparser")
const spawn = require('child_process').spawn

let proc = null
let startTime = 0

exports.start = function (pid_limit, callback) {
  startTime = new Date().getTime()
  proc = spawn('top', ['-c', '-b', '-d', '3'])
  console.log("started process, pid: " + proc.pid)

  let top_data = ""
  proc.stdout.on('data', function (data) {
    console.log('stdout: ' + data)
  })
  proc.on('close', function (code) {
    console.log('child process exited with code ' + code)
  })
}//start

exports.stop = function () {
  console.log("stopped process...")
  if (proc) { proc.kill('SIGINT') } // SIGHUP - linux, SIGINT - windows
}//stop
The results
14861 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/1+
14864 root 20 0 0 0 0 S 0.0 0.0 0:00.02 [kworker/0+
15120 root 39 19 102488 3344 2656 S 0.0 0.1 0:00.09 /usr/bin/m+
16904 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/0+
19031 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/u+
21500 root 20 0 0 0 0 Z 0.0 0.0 0:00.00 [dsc] <def+
22571 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/0+
Any way to fix it?
Best regards
From the top manpage:

In Batch mode, when used without an argument top will format output using the COLUMNS= and LINES= environment variables, if set. Otherwise, width will be fixed at the maximum 512 columns. With an argument, output width can be decreased or increased (up to 512) but the number of rows is considered unlimited.
Add '-w', '512' to the arguments, i.e. spawn('top', ['-c', '-b', '-d', '3', '-w', '512']).
Since you work with node, you can query netdata running on localhost for this.
Example:
http://london.my-netdata.io/api/v1/data?chart=apps.cpu&after=-1&options=ms
For localhost netdata:
http://localhost:19999/api/v1/data?chart=apps.cpu&after=-1&options=ms
You can also get systemd services:
http://london.my-netdata.io/api/v1/data?chart=services.cpu&after=-1&options=ms
If you are not planning to update the screen every second, you can instruct netdata to return the average over a longer duration:
http://london.my-netdata.io/api/v1/data?chart=apps.cpu&after=-5&points=1&group=average&options=ms
The above returns the average of the last 5 seconds.
Finally, you can get the latest values of all the metrics netdata monitors with this:
http://london.my-netdata.io/api/v1/allmetrics?format=json
For completeness, netdata can export all the metrics in BASH format for shell scripts. Check this: https://github.com/firehol/netdata/wiki/receiving-netdata-metrics-from-shell-scripts
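The API is plain HTTP, so it is also easy to query outside node; here is a minimal sketch (in Rust, purely illustrative, assuming netdata is listening on its default localhost:19999):

use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Query the latest values of all metrics from a local netdata instance.
    let mut stream = TcpStream::connect("127.0.0.1:19999")?;
    // HTTP/1.0 keeps the response un-chunked and closes the connection for us.
    let request = "GET /api/v1/allmetrics?format=json HTTP/1.0\r\nHost: localhost\r\n\r\n";
    stream.write_all(request.as_bytes())?;
    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    println!("{}", response);
    Ok(())
}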
I have an embedded board running Linux, and I use yaffs2 as the rootfs.
I run a program on it, but after some time it fails with the error "No space left on device", even though I checked the flash and there is still a lot of free space.
I only write some config files, and they are rarely updated. The program also writes some logs to flash; the log size is limited to 2 MB.
I don't know why this happens or how to solve it.
Please help! (English is not my first language, sorry; I hope you understand what I mean.)
some debug info:
# ./write_test
version 1.0
close file :: No space left on device
return errno 28
# cat /proc/yaffs
YAFFS built:Nov 23 2015 16:57:34
Device 0 "rootfs"
start_block........... 0
end_block............. 511
total_bytes_per_chunk. 2048
use_nand_ecc.......... 1
no_tags_ecc........... 1
is_yaffs2............. 1
inband_tags........... 0
empty_lost_n_found.... 0
disable_lazy_load..... 0
refresh_period........ 500
n_caches.............. 10
n_reserved_blocks..... 5
always_check_erased... 0
data_bytes_per_chunk.. 2048
chunk_grp_bits........ 0
chunk_grp_size........ 1
n_erased_blocks....... 366
blocks_in_checkpt..... 0
n_tnodes.............. 749
n_obj................. 477
n_free_chunks......... 23579
n_page_writes......... 6092
n_page_reads.......... 11524
n_erasures............ 96
n_gc_copies........... 5490
all_gcs............... 1136
passive_gc_count...... 1136
oldest_dirty_gc_count. 95
n_gc_blocks........... 96
bg_gcs................ 96
n_retired_writes...... 0
n_retired_blocks...... 0
n_ecc_fixed........... 0
n_ecc_unfixed......... 0
n_tags_ecc_fixed...... 0
n_tags_ecc_unfixed.... 0
cache_hits............ 0
n_deleted_files....... 0
n_unlinked_files...... 289
refresh_count......... 1
n_bg_deletions........ 0
Device 2 "data"
start_block........... 0
end_block............. 927
total_bytes_per_chunk. 2048
use_nand_ecc.......... 1
no_tags_ecc........... 1
is_yaffs2............. 1
inband_tags........... 0
empty_lost_n_found.... 0
disable_lazy_load..... 0
refresh_period........ 500
n_caches.............. 10
n_reserved_blocks..... 5
always_check_erased... 0
data_bytes_per_chunk.. 2048
chunk_grp_bits........ 0
chunk_grp_size........ 1
n_erased_blocks....... 10
blocks_in_checkpt..... 0
n_tnodes.............. 4211
n_obj................. 24
n_free_chunks......... 658
n_page_writes......... 430
n_page_reads.......... 467
n_erasures............ 7
n_gc_copies........... 421
all_gcs............... 20
passive_gc_count...... 13
oldest_dirty_gc_count. 3
n_gc_blocks........... 6
bg_gcs................ 4
n_retired_writes...... 0
n_retired_blocks...... 0
n_ecc_fixed........... 0
n_ecc_unfixed......... 0
n_tags_ecc_fixed...... 0
n_tags_ecc_unfixed.... 0
cache_hits............ 0
n_deleted_files....... 0
n_unlinked_files...... 2
refresh_count......... 1
n_bg_deletions........ 0
#
The log and config files are stored in "data".
Thanks!!
In general this could be a disk space problem (here, flash space). First of all, check your flash space with df -h (or whatever commands you have; df is present in BusyBox). If your flash space (especially on your program's partition) is OK, it could be an inode (directory) space problem; you can see your inode usage with the df -i command. (A good link for this: https://wiki.gentoo.org/wiki/Knowledge_Base:No_space_left_on_device_while_there_is_plenty_of_space_available)
If neither of these is the cause, you will have to take a deeper look at your code, especially wherever it does disk I/O.
It is also worth checking your memory and heap usage, and making sure you free all allocated memory in your functions.
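If df on the board is too stripped down, the same block and inode counters can be read programmatically through statvfs(2). A minimal sketch (in Rust, purely illustrative; the /data mount point is an assumption for your "data" partition):

extern crate libc;

use std::ffi::CString;
use std::mem::MaybeUninit;

fn main() {
    // Mount point of the partition to check; "data" holds the logs and configs.
    let path = CString::new("/data").unwrap();
    let mut st = MaybeUninit::<libc::statvfs>::uninit();
    let rc = unsafe { libc::statvfs(path.as_ptr(), st.as_mut_ptr()) };
    assert_eq!(rc, 0, "statvfs failed");
    let st = unsafe { st.assume_init() };
    // "No space left on device" can come from either counter hitting zero.
    println!("free blocks: {} of {}", st.f_bavail, st.f_blocks);
    println!("free inodes: {} of {}", st.f_favail, st.f_files);
}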
In the process of doing some simple benchmarking, I came across something that surprised me. Take this snippet from Network.Socket.Splice:
hSplice :: Int -> Handle -> Handle -> IO ()
hSplice len s t = do
  a <- mallocBytes len :: IO (Ptr Word8)
  finally
    (forever $! do
       bytes <- hGetBufSome s a len
       if bytes > 0
         then hPutBuf t a bytes
         else throwRecv0)
    (free a)
One would expect that hGetBufSome and hPutBuf here would not need to allocate memory, as they write into and read from a pre-allocated buffer. The docs seem to back this intuition up... But alas:
individual inherited
COST CENTRE %time %alloc %time %alloc bytes
hSplice 0.5 0.0 38.1 61.1 3792
hPutBuf 0.4 1.0 19.8 29.9 12800000
hPutBuf' 0.4 0.4 19.4 28.9 4800000
wantWritableHandle 0.1 0.1 19.0 28.5 1600000
wantWritableHandle' 0.0 0.0 18.9 28.4 0
withHandle_' 0.0 0.1 18.9 28.4 1600000
withHandle' 1.0 3.8 18.8 28.3 48800000
do_operation 1.1 3.4 17.8 24.5 44000000
withHandle_'.\ 0.3 1.1 16.7 21.0 14400000
checkWritableHandle 0.1 0.2 16.4 19.9 3200000
hPutBuf'.\ 1.1 3.3 16.3 19.7 42400000
flushWriteBuffer 0.7 1.4 12.1 6.2 17600000
flushByteWriteBuffer 11.3 4.8 11.3 4.8 61600000
bufWrite 1.7 6.9 3.0 9.9 88000000
copyToRawBuffer 0.1 0.2 1.2 2.8 3200000
withRawBuffer 0.3 0.8 1.2 2.6 10400000
copyToRawBuffer.\ 0.9 1.7 0.9 1.7 22400000
debugIO 0.1 0.2 0.1 0.2 3200000
debugIO 0.1 0.2 0.1 0.2 3200016
hGetBufSome 0.0 0.0 17.7 31.2 80
wantReadableHandle_ 0.0 0.0 17.7 31.2 32
wantReadableHandle' 0.0 0.0 17.7 31.2 0
withHandle_' 0.0 0.0 17.7 31.2 32
withHandle' 1.6 2.4 17.7 31.2 30400976
do_operation 0.4 2.4 16.1 28.8 30400880
withHandle_'.\ 0.5 1.1 15.8 26.4 14400288
checkReadableHandle 0.1 0.4 15.3 25.3 4800096
hGetBufSome.\ 8.7 14.8 15.2 24.9 190153648
bufReadNBNonEmpty 2.6 4.4 6.1 8.0 56800000
bufReadNBNonEmpty.buf' 0.0 0.4 0.0 0.4 5600000
bufReadNBNonEmpty.so_far' 0.2 0.1 0.2 0.1 1600000
bufReadNBNonEmpty.remaining 0.2 0.1 0.2 0.1 1600000
copyFromRawBuffer 0.1 0.2 2.9 2.8 3200000
withRawBuffer 1.0 0.8 2.8 2.6 10400000
copyFromRawBuffer.\ 1.8 1.7 1.8 1.7 22400000
bufReadNBNonEmpty.avail 0.2 0.1 0.2 0.1 1600000
flushCharReadBuffer 0.3 2.1 0.3 2.1 26400528
I have to assume this is on purpose... but I have no idea what that purpose might be. Even worse: I'm just barely clever enough to get this profile, but not quite clever enough to figure out exactly what's being allocated.
Any help along those lines would be appreciated.
UPDATE: I've done some more profiling with two drastically simplified testcases. The first testcase directly uses the read/write ops from System.Posix.Internals:
echo :: Ptr Word8 -> IO ()
echo buf = forever $ do
  threadWaitRead $ Fd 0
  len <- c_read 0 buf 1
  c_write 1 buf (fromIntegral len)
  yield
As you'd hope, this allocates no memory on the heap each time through the loop. The second testcase uses the read/write ops from GHC.IO.FD:
echo :: Ptr Word8 -> IO ()
echo buf = forever $ do
  len <- readRawBufferPtr "read" stdin buf 0 1
  writeRawBufferPtr "write" stdout buf 0 (fromIntegral len)
UPDATE #2: I was advised to file this as a bug in GHC Trac... I'm still not sure it actually is a bug (as opposed to intentional behavior, a known limitation, or whatever) but here it is: https://ghc.haskell.org/trac/ghc/ticket/9696
I'll try to guess, based on the code.
The runtime tries to optimize small reads and writes, so it maintains an internal buffer. If your buffer is 1 byte long, it would be inefficient to use it directly, so the internal buffer is used to read a bigger chunk of data. It is probably ~32 KB long. Plus something similar for writing. Plus your own buffer.
The code has an optimization: if you provide a buffer bigger than the internal one, and the latter is empty, it will use your buffer directly. But the internal buffer is already allocated anyway, so this will not lower memory usage. I don't know how to disable the internal buffer, but you can open a feature request if it is important to you.
(I realize that my guess can be totally wrong.)
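To make the described optimization concrete, here is the decision logic transcribed as a sketch (in Rust rather than Haskell, purely to illustrate the control flow; the 32 KB size is the guess above, not GHC's actual constant):

use std::io::{self, Read};

const INTERNAL_BUF_LEN: usize = 32 * 1024; // guessed size of the handle's buffer

// Small reads are served from an internal buffer; a caller buffer at least as
// large as the (empty) internal one is filled directly, skipping a copy. Note
// the internal buffer exists either way, which is the allocation in question.
fn buffered_read<R: Read>(src: &mut R, internal: &mut Vec<u8>, dst: &mut [u8]) -> io::Result<usize> {
    if internal.is_empty() {
        if dst.len() >= INTERNAL_BUF_LEN {
            // Caller's buffer is big enough: bypass the internal buffer.
            return src.read(dst);
        }
        // Otherwise refill the internal buffer with one large read...
        internal.resize(INTERNAL_BUF_LEN, 0);
        let n = src.read(internal)?;
        internal.truncate(n);
    }
    // ...and serve the caller from it.
    let n = dst.len().min(internal.len());
    dst[..n].copy_from_slice(&internal[..n]);
    internal.drain(..n);
    Ok(n)
}

fn main() -> io::Result<()> {
    let mut src = io::Cursor::new(vec![7u8; 100_000]);
    let mut internal = Vec::new();
    let mut one = [0u8; 1]; // a 1-byte read, as in the echo testcase
    buffered_read(&mut src, &mut internal, &mut one)?;
    println!("read 1 byte; internal buffer now holds {} bytes", internal.len());
    Ok(())
}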
ADD:
You write: "This one does seem to allocate, but I still don't know why."
What is your concern, maximum memory usage or the number of allocated bytes?
c_read is a C function; it doesn't allocate on Haskell's heap (though it may allocate on the C heap).
readRawBufferPtr is a Haskell function, and it is usual for Haskell functions to allocate a lot of memory that quickly becomes garbage, simply because of immutability. It is common for a Haskell program to allocate e.g. 100 GB over its run while its memory usage stays under 1 MB.
It seems like the conclusion is: it's a bug.
There are two C++ processes, one thread in each process. The thread handles network traffic (Diameter) from 32 incoming TCP connections, parses it and forwards split messages via 32 outgoing TCP connections. Let's call this C++ process a DiameterFE.
If only one DiameterFE process is running, it can handle 70 000 messages/sec.
If two DiameterFE processes are running, they can handle 35 000 messages/sec each, so the same 70 000 messages/sec in total.
Why don't they scale? What is a bottleneck?
Details:
There are 32 Clients (seagull) and 32 servers (seagull) for each Diameter Front End process, running on separate hosts.
A dedicated host is given for these two processes - 2 E5-2670 @ 2.60GHz CPUs x 8 cores/socket x 2 HW threads/core = 32 threads in total.
10 GBit/sec network.
Average Diameter message size is 700 bytes.
It looks like only Cpu0 handles network traffic - 58.7% si. Do I have to explicitly configure different network queues to different CPUs?
The first process (PID=7615) takes 89.0 % CPU, it is running on Cpu0.
The second process (PID=59349) takes 70.8 % CPU, it is running on Cpu8.
On the other hand, Cpu0 is loaded at: 95.2% = 9.7%us + 26.8%sy + 58.7%si,
whereas Cpu8 is loaded only at 70.3% = 14.8%us + 55.5%sy
It looks like Cpu0 is also doing the work for the second process. There is a very high softirq load, and only on Cpu0 (58.7%). Why?
Here is the top output with key "1" pressed:
top - 15:31:55 up 3 days, 9:28, 5 users, load average: 0.08, 0.20, 0.47
Tasks: 973 total, 3 running, 970 sleeping, 0 stopped, 0 zombie
Cpu0 : 9.7%us, 26.8%sy, 0.0%ni, 4.8%id, 0.0%wa, 0.0%hi, 58.7%si, 0.0%st
...
Cpu8 : 14.8%us, 55.5%sy, 0.0%ni, 29.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
...
Cpu31 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 396762772k total, 5471576k used, 391291196k free, 354920k buffers
Swap: 1048568k total, 0k used, 1048568k free, 2164532k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7615 test1 20 0 18720 2120 1388 R 89.0 0.0 52:35.76 diameterfe
59349 test1 20 0 18712 2112 1388 R 70.8 0.0 121:02.37 diameterfe
610 root 20 0 36080 1364 1112 S 2.6 0.0 126:45.58 plymouthd
3064 root 20 0 10960 788 432 S 0.3 0.0 2:13.35 irqbalance
16891 root 20 0 15700 2076 1004 R 0.3 0.0 0:01.09 top
1 root 20 0 19364 1540 1232 S 0.0 0.0 0:05.20 init
...
The fix for this issue was to upgrade the kernel to 2.6.32-431.20.3.el6.x86_64.
After that, network interrupts and message queues were distributed among different CPUs.
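For kernels that do not spread the load on their own, the usual manual fix is to steer each NIC queue's IRQ to a different CPU via /proc/irq/<n>/smp_affinity. A hedged sketch of one such write (the IRQ number 63 is an assumption; the real numbers are listed in /proc/interrupts):

use std::fs;

fn main() -> std::io::Result<()> {
    // Assumed: IRQ 63 is one of the NIC's RX queues, per /proc/interrupts.
    let irq = 63;
    // The file takes a hex CPU mask; 2 = CPU1. Use a different mask per queue.
    fs::write(format!("/proc/irq/{}/smp_affinity", irq), "2\n")?;
    println!("pinned IRQ {} to CPU1", irq);
    Ok(())
}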