How can I determine the current CPU utilization from the shell in Linux?
For example, I get the load average like so:
cat /proc/loadavg
Outputs:
0.18 0.48 0.46 4/234 30719
Linux does not have any system variable that gives the current CPU utilization directly. Instead, you have to read /proc/stat several times: each field in the cpu(n) lines gives the cumulative CPU time spent in that state, and you have to take the difference between successive readings to get percentages. See the proc(5) man page to find out what the various fields mean.
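Not taken from any of the answers here, but as an illustration of that approach, a rough bash sketch (it treats idle + iowait as idle time and everything else as busy):
#!/bin/bash
# Read the aggregate "cpu" line twice, one second apart, and compute the
# busy percentage from the difference between the two readings.
read_cpu() { awk '/^cpu /{print $2+$3+$4+$5+$6+$7+$8+$9, $5+$6}' /proc/stat; }
read t1 i1 <<< "$(read_cpu)"
sleep 1
read t2 i2 <<< "$(read_cpu)"
awk -v dt="$((t2 - t1))" -v di="$((i2 - i1))" 'BEGIN { printf "CPU usage: %.1f%%\n", (1 - di / dt) * 100 }'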
You can use the top or ps commands to check CPU usage.
Using top: this will show you the CPU stats:
top -b -n 1 | grep ^Cpu
Using ps: this will show you the % CPU usage for each process:
ps -eo pcpu,pid,user,args | sort -r -k1 | less
Also, you can write a small script in bash or perl to read /proc/stat and calculate the CPU usage.
The command uptime gives you load averages for the past 1, 5, and 15 minutes.
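Typical output looks something like this (illustrative):
$ uptime
 15:32:01 up 3 days, 4:12, 2 users, load average: 0.18, 0.48, 0.46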
Try this command:
$ top
http://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
Try this command:
cat /proc/stat
The output will look something like this:
cpu 55366 271 17283 75381807 22953 13468 94542 0
cpu0 3374 0 2187 9462432 1393 2 665 0
cpu1 2074 12 1314 9459589 841 2 43 0
cpu2 1664 0 1109 9447191 666 1 571 0
cpu3 864 0 716 9429250 387 2 118 0
cpu4 27667 110 5553 9358851 13900 2598 21784 0
cpu5 16625 146 2861 9388654 4556 4026 24979 0
cpu6 1790 0 1836 9436782 480 3307 19623 0
cpu7 1306 0 1702 9399053 726 3529 26756 0
intr 4421041070 559 10 0 4 5 0 0 0 26 0 0 0 111 0 129692 0 0 0 0 0 95 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 369 91027 1580921706 1277926101 570026630 991666971 0 277768 0 0 0 0 0 0 0 0 0 0 0 0 0
ctxt 8097121
btime 1251365089
processes 63692
procs_running 2
procs_blocked 0
More details:
http://www.mail-archive.com/linuxkernelnewbies#googlegroups.com/msg01690.html
http://www.linuxhowtos.org/System/procstat.htm
Maybe something like this:
ps -eo pid,pcpu,comm
And if you would like to parse the output and only look at some processes:
#!/bin/sh
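# keep only processes using more than 4% CPU and append them to ~/ps_eo_test.txt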
ps -eo pid,pcpu,comm | awk '{if ($2 > 4) print }' >> ~/ps_eo_test.txt
You need to take samples a few seconds apart and calculate the CPU utilization from the difference. If unsure what to use, get the sources of "top" and read them.
Related
I am trying to parse a .log file using Python for status information about the processes of a Linux system. The .log file has many different sections of information; the sections of interest start with "##START-ALLPROCESSES-xxxxxxxx", where x is the epoch date, and end with "##END-ALLPROCESSES-xxxxxxx". After this line, each process is listed with 52 columns; the number of processes may change depending on the info recorded at the time, and there may be multiple sections with this information at different times.
The idea is to open the .log file, find the sections, and then use the XXXXXXX as the key for a nested dictionary, where the inner keys are the predefined column names filled in with the values from the section, doing this for all the different sections found in the .log file. The nested dictionary would look something like below:
[date1-XXXXXX:
[ columnName1: process1,
.
.
.
columnName52: info1
],
.
.
.
[ columnName1: process52,
.
.
.
columName52: info52
]
],
[date2-XXXXXX:
[ columnName1: process1,
.
.
.
columnName52: info1
],
.
.
.
[ columnName1: process52,
.
.
.
columName52: info52
]
]
The data in the .log file looks as follows; there would be multiple sections like this, each with a different date. Each line starts with the process id and (process name):
##START-ALLPROCESSES-1676652419
1 (systemd) S 0 1 1 0 -1 4210944 2070278 9743969773 2070 2703 8811 11984 7638026 9190549 20 0 1 0 0 160043008 745 18446744073709551615 187650352414720 187650353516788 281474853505456 0 0 0 671173123 4096 1260 1 0 0 17 0 0 0 2706 0 0 187650353585800 187650353845340 187651263758336 281474853506734 281474853506745 281474853506745 281474853507053 0
10 (rcu_bh) I 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 2 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10251 (kworker/1:2) I 2 0 0 0 -1 69238880 0 0 0 0 0 914 0 0 20 0 1 0 617684776 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 1 0 0 0 0 0 0 0 0 0 0 0 0 0
10299 (loop2) S 2 0 0 0 -1 3178560 0 0 0 0 0 24 0 0 0 -20 1 0 10871 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 169 0 0 0 0 0 0 0 0 0 0
10648 (kworker/2:0) I 2 0 0 0 -1 69238880 0 0 0 0 0 567 0 0 20 0 1 0 663634994 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 2 0 0 0 0 0 0 0 0 0 0 0 0 0
1082 (nvme-wq) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 109 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0
1095 (scsi_eh_0) S 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0
1096 (scsi_tmf_0) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1099 (scsi_eh_1) S 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 (migration/0) S 2 0 0 0 -1 69238848 0 0 0 0 0 4961 0 0 -100 0 1 0 2 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 99 1 0 0 0 0 0 0 0 0 0 0 0
1100 (scsi_tmf_1) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
##END-ALLPROCESSES-1676652419
I have tried multiple approaches but cannot seem to get it right; this is my last attempt:
columns = ['pid', 'comm', 'state', 'ppid', 'pgrp', 'session', 'tty_nr', 'tpgid', 'flags', 'minflt', 'cminflt', 'majflt', 'cmajflt', 'utime', 'stime',
'cutime', 'cstime', 'priority', 'nice', 'num_threads', 'itrealvalue', 'starttime', 'vsize', 'rss', 'rsslim', 'startcode', 'endcode', 'startstack', 'kstkesp',
'kstkeip', 'signal', 'blocked', 'sigignore', 'sigcatch', 'wchan', 'nswap', 'cnswap', 'exit_signal', 'processor', 'rt_priority', 'policy', 'delayacct_blkio_ticks',
'guest_time', 'cguest_time', 'start_data', 'end_data', 'start_brk', 'arg_start', 'arg_end', 'env_start', 'env_end', 'exit_code' ]
for file in os.listdir(dir):
    if file.endswith('.log'):
        with open(file, 'r') as f:
            data = f.read()
            data = data.split('##START-ALLPROCESSES-')
            data = data[1:]
            for i in range(len(data)):
                data[i] = data[i].split('##END-ALLPROCESSES-')
                data[i] = data[i][0]
                data[i] = re.split('\r', data[i])
                data[i] = data[i][0]
                data[i] = re.split('\n', data[i])
                for j in range(len(data[i])):
                    data[i][j] = re.split('\s+', data[i][j])
                #print(data[i])
                data[i][0] = str(data[i][0])
            data_dict = {}
            for i in range(len(data)):
                data_dict[data[i][0]] = {}
                for j in range(len(columns)):
                    data_dict[data[i][0]][columns[j]] = data[i][j+1]
            print(data_dict)
I converted the epoch date into a str because I was getting unhashable-list errors. However, that made the epoch date show up as a key, but each column now holds the entire list of the 52 columns of information as a single value, so I am definitely missing something.
To solve this problem, you could follow these steps:
Open the .log file and read the contents
Search for all the sections of interest by finding lines that start with "##START-ALLPROCESSES-" and end with "##END-ALLPROCESSES-"
For each section found, extract the epoch date and create a dictionary with an empty list for each of the 52 columns
Iterate over the lines within the section and split the line into the 52 columns using space as a separator. Add the values to the corresponding list in the dictionary created in step 3
Repeat steps 3 and 4 for all the sections found in the .log file
Return the final nested dictionary
Here is some sample code that implements these steps:
import re

def parse_log_file(log_file_path):
    with open(log_file_path, 'r') as log_file:
        log_contents = log_file.read()
    sections = re.findall(r'##START-ALLPROCESSES-(.*?)##END-ALLPROCESSES-', log_contents, re.DOTALL)
    nested_dict = {}
    for section in sections:
        lines = section.strip().split('\n')
        epoch_date = lines[0].split('-')[-1]
        column_names = ['column{}'.format(i) for i in range(1, 53)]
        section_dict = {column_name: [] for column_name in column_names}
        for line in lines[1:]:
            values = line.strip().split()
            for i, value in enumerate(values):
                section_dict[column_names[i]].append(value)
        nested_dict['date{}-{}'.format(epoch_date, len(section_dict['column1']))] = section_dict
    return nested_dict
You can call this function by passing the path to the .log file as an argument. The function returns the nested dictionary described in the problem statement.
I need to separate a row into multiple columns. In a previous post I was able to separate them, but some of the rows are empty, and because of that I get this error:
ValueError: Index contains duplicate entries, cannot reshape
Here is a sample dataset to mock up this issue:
myData = [['Abc: 9.22 Mno: 6.90 IExplorer 0.00 OCa: 0.00 Foo: 0.00'],
['Abc: 0.61 Mno: 0.14'],
[''],
['MCheese: (37.20) dimes: (186.02) Feria: (1,586.02)'],
['Abc: 16.76 Mno: 4.25 OMG: 63.19'],
['yonka: 19.27'],
['Dome: (552.23)'],
['Fray: 2,584.96'],
['CC: (83.31)'],
[''],
['Abc: 307.34 Mno: 18.40 Feria: 509.67'],
['IExplorer: 26.28 OCa: 26.28 Foo: 730.68'],
['Abc: 122.66 Mno: 11.85 Feria: 213.24'],
[''],
['Wonka: (13.67) Fray: (1,922.48)'],
['Mno: 18.19 IExplorer: 0.00 OCa: 0.00 Foo: 0.00'],
['Abc: 74.06 Mno: 12.34 Feria: 124.42 MCheese: (4.07)'],
[''],
['Abc: 45.96 Mno: 18.98 IExplorer: 0.00 OCa: 0.00 Foo: 0.00'],
['IExplorer: 0.00 OCa: 0.00 Dome: (166.35) Foo: 0.00'],
['']]
import pandas as pd

df7 = pd.DataFrame(myData)
df7.columns = ['Original']
df7['Original'] = df7['Original'].str.replace(" ","")
df7['Original']
After separating the columns with a regex from a previous post, I get these results:
df8 = df7['Original'].str.extractall(r'^(.*?):([\(\)(\,)0-9.]+)').reset_index().fillna(0)
df8 = df8.pivot(index='level_0', columns=0, values=1).rename_axis(index=None, columns=None).fillna(0)
df8
This gives me this result:
Abc CC Dome Fries IExplorer MCheese Mno Wonka yonka
0 9.22 0 0 0 0 0 0 0 0
1 0.61 0 0 0 0 0 0 0 0
3 0 0 0 0 0 (37.20) 0 0 0
4 16.76 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 19.27
6 0 0 (552.23) 0 0 0 0 0 0
7 0 0 0 2,584.96 0 0 0 0 0
8 0 (83.31) 0 0 0 0 0 0 0
10 307.34 0 0 0 0 0 0 0 0
11 0 0 0 0 26.28 0 0 0 0
12 122.66 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 (13.67) 0
15 0 0 0 0 0 0 18.19 0 0
16 74.06 0 0 0 0 0 0 0 0
18 45.96 0 0 0 0 0 0 0 0
19 0 0 0 0 0.00 0 0 0 0
If I change the regex, the number of columns increases, but I still do not get the entirety of the dataset. For this particular sample, this last snippet gives me more columns:
df8 = df7['Original'].str.extractall(r'(.*?):([\(\)(\,)0-9.]+)').reset_index().fillna(0)
df8 = df8.pivot(index='level_0', columns=0, values=1).rename_axis(index=None, columns=None).fillna(0)
df8
Although in my particular case the first version gives me more columns than the second one, neither of them counts the empty rows.
Is there any way to account for those empty rows in the dataset whenever an empty row is found? In total there are 21 rows, but I can only get 19 of them shown and counted.
We can use str.findall to find all occurrences of the regex pattern in each row, then map the occurrences to dicts and create a new dataframe. This approach avoids re-indexing the dataframe. You also have to fix your regex pattern to properly capture the matching pairs.
s = df7['Original'].str.findall(r'([^:0-9]+):\(?([0-9.,]+)\)?')
df_out = pd.DataFrame(map(dict, s), index=s.index).fillna(0)
>>> df_out
Abc Mno OCa Foo MCheese dimes Feria OMG yonka Dome Fray CC IExplorer Wonka
0 9.22 6.90 0.00 0.00 0 0 0 0 0 0 0 0 0 0
1 0.61 0.14 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 37.20 186.02 1,586.02 0 0 0 0 0 0 0
4 16.76 4.25 0 0 0 0 0 63.19 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 19.27 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 552.23 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 2,584.96 0 0 0
8 0 0 0 0 0 0 0 0 0 0 0 83.31 0 0
9 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 307.34 18.40 0 0 0 0 509.67 0 0 0 0 0 0 0
11 0 0 26.28 730.68 0 0 0 0 0 0 0 0 26.28 0
12 122.66 11.85 0 0 0 0 213.24 0 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 0 0 0 1,922.48 0 0 13.67
15 0 18.19 0.00 0.00 0 0 0 0 0 0 0 0 0.00 0
16 74.06 12.34 0 0 4.07 0 124.42 0 0 0 0 0 0 0
17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
18 45.96 18.98 0.00 0.00 0 0 0 0 0 0 0 0 0.00 0
19 0 0 0.00 0.00 0 0 0 0 0 166.35 0 0 0.00 0
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0
How is this so complex, or am I missing something? I simply want to get the integer position of a substring within an existing string and place it in a variable, the same as PHP's strpos().
The closest I have found is:
echo $haystack | awk '{print index($0,"<tagtosearch>")}';
I tried:
myvar=$($haystack | awk '{print index($0,"<tagtosearch>")}');
but it says command not found.
The application is to automate including custom bash scripts on a given Linux box without overwriting existing ones. Therefore I decided to insert custom start and end tags to denote the custom section. So I simply expected to get the start and end positions, delete this part of the file, and pull in the latest version.
This is as far as I have gotten with that function:
function install-env(){
mkdir -p /etc/datadimension/tmp;
cd /etc/datadimension/tmp;
cp /etc/bash.bashrc tempbash.bashrc;
newbash=$(cat tempbash.bashrc);
echo "$newbash" > newbash.bashrc;
insertstart=$(echo "$newbash" | awk '{print index($0,"<starttag>")}');
echo $insertstart;
}
Output
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Do I need to apt install something that can handle strings? This seems like a really basic requirement to be lacking.
With $($haystack), you invoke the value assigned to $haystack as a command.
To pipe the value of $haystack to awk, try this:
myvar=$(echo "$haystack" | awk '{print index($0,"<tagtosearch>")}');
Or, as @anubhava suggests in a comment, using a here-string:
myvar=$(awk '{print index($0, "<tagtosearch>")}' <<< "$haystack")
I could not find anything acceptable aside from handling it in PHP.
If you want to print the position of a search word in a file, then you can use this awk command:
awk -v kw='<starttag>' 'p=index($0, kw){p+=b; exit} {b+=length($0)+1} END{print p}' file
We keep adding each line's length (plus 1 for the newline) to a running variable b until index() returns a value greater than 0. At that point we add b to the current line's index and exit.
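A quick illustration (the file name and contents here are made up):
$ printf 'first line\nhas <starttag> here\n' > file
$ awk -v kw='<starttag>' 'p=index($0, kw){p+=b; exit} {b+=length($0)+1} END{print p}' file
16
Note that the position is 1-based and counts newline characters, unlike PHP's strpos(), which is 0-based.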
This one seems to work like the PHP strpos() function:
strpos() {
    local string="$1"
    local findme="$2"
    # awk's index() is 1-based; quoting keeps strings with spaces intact
    awk -v y="$findme" '{print index($0, y)}' <<< "$string"
}
Some tests
strpos azerty a
1
strpos azerty t
5
strpos azerty er
3
strpos azerty/uio /
7
strpos "azerty uio" " "
7
EDIT:
If you want exactly the PHP behaviour, you can do:
php_strpos() {
local haystack="$1"
local needle="$2"
local offset=${3:-0}
php -r "echo strpos('$haystack', '$needle', $offset);"
}
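For example (illustrative): php_strpos "azerty uio" uio prints 7, while the awk-based strpos above prints 8, because PHP's strpos() is zero-based and awk's index() is one-based.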
Using the histogram function of gdalinfo, I am saving the frequency of pixel values in a text file. My objective is to extract the first and last value of the histogram and save them in variables. Since I am new to the Linux environment, I don't know how to use grep to select the numbers by their position.
13691313 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24599
Extracting the first and last field with awk:
awk '{ print $1, $NF }' filename
Or, if your histogram is stored in a string, you can use a here-string:
awk '{ print $1, $NF }' <<< "$stringname"
If you'd like to assign them separately to shell variables:
$ var1="$(awk '{ print $1 }' filename)"
$ var2="$(awk '{ print $NF }' filename)"
If the string does not change, i.e. it always has the same number of fields, you can use:
echo "your string" | cut -d " " -f 1,256
And cut should show
13691313 24599
You can use ^ and $ to anchor the grep expression at the beginning or end:
echo "your string" | grep -oE '(^[0-9]+)|([0-9]+$)'
I am running/testing the openlinksw.com Virtuoso database server and am noticing something odd – there appears to be no transaction logging.
Likely there is a switch/parameter that needs to be set to enable logging of the individual commits/transactions, but I have not found it in the documentation.
I have a script that looks like this:
set autocommit manual;
ttlp(file_to_string_output ('./events.nq/00000000'), 'bruce', 'bob', 512);
commit work;
ttlp(file_to_string_output ('./events.nq/00000001'), 'bruce', 'bob', 512);
commit work;
<repeat ttlp/commit pair many times>
Each of the 19,978 input files contains 50 quads.
I ran:
bin/isql < script.sql
and while it was running, I ran 'vmstat 1'. The script takes about 4 minutes to finish, which gives a rate of about 83 commits per second. However, vmstat's 'bo' (blocks out) column only occasionally showed disk I/O. Most of the time 'bo' was zero, with occasional bursts of activity. I would expect that, for each commit to be durable, there would have to be at least a small bit of I/O per commit for transaction logging. Am I doing something wrong? I'm using the default database parameters.
Example vmstat 1 output:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 113900 34730612 527836 11647812 0 0 0 0 4024 2928 6 1 94 0 0
1 0 113900 34729992 527840 11647876 0 0 0 36 4035 2727 6 0 93 1 0
2 0 113900 34729392 527840 11648440 0 0 0 0 3799 2612 6 1 94 0 0
1 0 113900 34728896 527840 11649004 0 0 0 0 3814 2693 6 0 94 0 0
1 0 113900 34724100 527840 11649556 0 0 0 0 3775 2653 6 1 94 0 0
1 0 113900 34723008 527840 11650128 0 0 0 8 3696 2838 6 0 94 0 0
1 0 113900 34722512 527844 11650708 0 0 0 16 3594 2996 6 0 93 0 0
1 0 113900 34721892 527844 11651868 0 0 0 0 4073 3066 6 0 94 0 0
1 0 113900 34721272 527844 11652488 0 0 0 0 4175 3077 6 1 94 0 0
1 0 113900 34721024 527844 11652568 0 0 0 5912 3744 2929 6 1 94 0 0
1 0 113900 34719540 527844 11653696 0 0 0 60 3786 3143 6 1 93 0 0
1 0 113900 34719044 527844 11653772 0 0 0 32 3809 2911 6 1 94 0 0
1 0 113900 34718052 527844 11654396 0 0 0 0 3963 2842 6 1 94 0 0
1 0 113900 34717060 527844 11654988 0 0 0 0 3956 2904 6 1 94 0 0
1 0 113900 34714748 527844 11656140 0 0 0 0 3920 2928 6 1 94 0 0
1 0 113900 34714144 527844 11656212 0 0 0 4 4059 2984 6 1 93 1 0
1 0 113900 34713656 527848 11657360 0 0 0 16 3945 2908 6 1 94 0 0
1 0 113900 34712540 527848 11657972 0 0 0 0 3978 2984 6 1 93 0 0
1 0 113900 34712044 527848 11658052 0 0 0 0 3758 2889 6 1 94 0 0
1 0 113900 34711088 527848 11658640 0 0 0 0 3643 2712 6 1 94 0 0
1 0 113900 34710468 527848 11659224 0 0 0 0 3763 2812 6 1 94 0 0
Running on
Version: 7.1
64-bit linux