I am running/testing the openlinksw.com Virtuoso database server and am noticing something odd: there appears to be no transaction logging.
There is likely a switch/parameter that needs to be set to enable logging of individual commits/transactions, but I have not found it in the documentation.
I have a script that looks like this:
set autocommit manual;
ttlp(file_to_string_output ('./events.nq/00000000'), 'bruce', 'bob', 512);
commit work;
ttlp(file_to_string_output ('./events.nq/00000001'), 'bruce', 'bob', 512);
commit work;
<repeat ttlp/commit pair many times>
Each of the 19,978 input files contains 50 quads.
I ran:
bin/isql < script.sql
and while it was running, I ran 'vmstat 1'. The script takes about 4
minutes to finish, which gives a rate of about 83 commits per second.
However, vmstat's 'bo' (blocks out) column only occasionally showed
disk i/o. Most of the time, 'bo' was zero, with occasional bursts of
activity. I would expect that, for each commit to be durable, there
would have to be at least a small bit of i/o per commit for
transaction logging. Am I doing something wrong? I'm using the
default database parameters.
Example vmstat 1 output:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 113900 34730612 527836 11647812 0 0 0 0 4024 2928 6 1 94 0 0
1 0 113900 34729992 527840 11647876 0 0 0 36 4035 2727 6 0 93 1 0
2 0 113900 34729392 527840 11648440 0 0 0 0 3799 2612 6 1 94 0 0
1 0 113900 34728896 527840 11649004 0 0 0 0 3814 2693 6 0 94 0 0
1 0 113900 34724100 527840 11649556 0 0 0 0 3775 2653 6 1 94 0 0
1 0 113900 34723008 527840 11650128 0 0 0 8 3696 2838 6 0 94 0 0
1 0 113900 34722512 527844 11650708 0 0 0 16 3594 2996 6 0 93 0 0
1 0 113900 34721892 527844 11651868 0 0 0 0 4073 3066 6 0 94 0 0
1 0 113900 34721272 527844 11652488 0 0 0 0 4175 3077 6 1 94 0 0
1 0 113900 34721024 527844 11652568 0 0 0 5912 3744 2929 6 1 94 0 0
1 0 113900 34719540 527844 11653696 0 0 0 60 3786 3143 6 1 93 0 0
1 0 113900 34719044 527844 11653772 0 0 0 32 3809 2911 6 1 94 0 0
1 0 113900 34718052 527844 11654396 0 0 0 0 3963 2842 6 1 94 0 0
1 0 113900 34717060 527844 11654988 0 0 0 0 3956 2904 6 1 94 0 0
1 0 113900 34714748 527844 11656140 0 0 0 0 3920 2928 6 1 94 0 0
1 0 113900 34714144 527844 11656212 0 0 0 4 4059 2984 6 1 93 1 0
1 0 113900 34713656 527848 11657360 0 0 0 16 3945 2908 6 1 94 0 0
1 0 113900 34712540 527848 11657972 0 0 0 0 3978 2984 6 1 93 0 0
1 0 113900 34712044 527848 11658052 0 0 0 0 3758 2889 6 1 94 0 0
1 0 113900 34711088 527848 11658640 0 0 0 0 3643 2712 6 1 94 0 0
1 0 113900 34710468 527848 11659224 0 0 0 0 3763 2812 6 1 94 0 0
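For comparison, here is the kind of per-commit write pattern I would expect. This is just a minimal, Virtuoso-independent sketch in Python (the file name is made up) that does one small write plus fsync per "commit"; running it alongside vmstat should show steady non-zero 'bo':
import os, time

# one small write + fsync per simulated commit; 'fake_txn.log' is a throwaway file
fd = os.open('fake_txn.log', os.O_WRONLY | os.O_CREAT | os.O_APPEND)
start = time.time()
for i in range(1000):
    os.write(fd, b'commit record %d\n' % i)  # a small log record
    os.fsync(fd)                             # force it to disk for durability
os.close(fd)
print('%.0f durable commits/sec' % (1000 / (time.time() - start)))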
Running on:
Version: 7.1
64-bit Linux
Related
I am trying to parse a .log file using Python for the status information about processes on a Linux system. The .log file has many different sections of information; the sections of interest start with "##START-ALLPROCESSES-xxxxxxxx", where the x's are the epoch date, and end with "##END-ALLPROCESSES-xxxxxxxx". After the start line, each process is listed with 52 columns; the number of processes may change depending on the info recorded at the time, and there may be multiple sections with this information at different times.
The idea is to open the .log file, find the sections, and then use the xxxxxxxx as the key for a nested dictionary, where the inner keys are the predefined column names filled in with the values from the section, doing this for every section found in the .log file. The nested dictionary would look something like the sketch below.
[date1-XXXXXX:
[ columnName1: process1,
.
.
.
columnName52: info1
],
.
.
.
[ columnName1: process52,
.
.
.
columnName52: info52
]
],
[date2-XXXXXX:
[ columnName1: process1,
.
.
.
columnName52: info1
],
.
.
.
[ columnName1: process52,
.
.
.
columnName52: info52
]
]
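In actual Python syntax, the target structure would be something like this (placeholder names kept from the sketch above):
{
    'date1-XXXXXX': [
        {'columnName1': 'process1',
         # ... columns 2 through 51 ...
         'columnName52': 'info1'},
        # ... one dict per process ...
        {'columnName1': 'process52',
         'columnName52': 'info52'},
    ],
    'date2-XXXXXX': [
        # same shape for the next section
    ],
}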
The data in the .log file looks as follows (there would be multiple sections like this, each with a different date); each line starts with the process id and (process name):
##START-ALLPROCESSES-1676652419
1 (systemd) S 0 1 1 0 -1 4210944 2070278 9743969773 2070 2703 8811 11984 7638026 9190549 20 0 1 0 0 160043008 745 18446744073709551615 187650352414720 187650353516788 281474853505456 0 0 0 671173123 4096 1260 1 0 0 17 0 0 0 2706 0 0 187650353585800 187650353845340 187651263758336 281474853506734 281474853506745 281474853506745 281474853507053 0
10 (rcu_bh) I 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 2 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10251 (kworker/1:2) I 2 0 0 0 -1 69238880 0 0 0 0 0 914 0 0 20 0 1 0 617684776 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 1 0 0 0 0 0 0 0 0 0 0 0 0 0
10299 (loop2) S 2 0 0 0 -1 3178560 0 0 0 0 0 24 0 0 0 -20 1 0 10871 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 169 0 0 0 0 0 0 0 0 0 0
10648 (kworker/2:0) I 2 0 0 0 -1 69238880 0 0 0 0 0 567 0 0 20 0 1 0 663634994 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 2 0 0 0 0 0 0 0 0 0 0 0 0 0
1082 (nvme-wq) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 109 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0
1095 (scsi_eh_0) S 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0
1096 (scsi_tmf_0) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1099 (scsi_eh_1) S 2 0 0 0 -1 2129984 0 0 0 0 0 0 0 0 20 0 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 (migration/0) S 2 0 0 0 -1 69238848 0 0 0 0 0 4961 0 0 -100 0 1 0 2 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 99 1 0 0 0 0 0 0 0 0 0 0 0
1100 (scsi_tmf_1) I 2 0 0 0 -1 69238880 0 0 0 0 0 0 0 0 0 -20 1 0 110 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 1 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
##END-ALLPROCESSES-1676652419
I have tried it multiple ways but I cannot seem to get it to work correctly; my last attempt:
columns = ['pid', 'comm', 'state', 'ppid', 'pgrp', 'session', 'tty_nr', 'tpgid', 'flags', 'minflt', 'cminflt', 'majflt', 'cmajflt', 'utime', 'stime',
'cutime', 'cstime', 'priority', 'nice', 'num_threads', 'itrealvalue', 'starttime', 'vsize', 'rss', 'rsslim', 'startcode', 'endcode', 'startstack', 'kstkesp',
'kstkeip', 'signal', 'blocked', 'sigignore', 'sigcatch', 'wchan', 'nswap', 'cnswap', 'exit_signal', 'processor', 'rt_priority', 'policy', 'delayacct_blkio_ticks',
'guest_time', 'cguest_time', 'start_data', 'end_data', 'start_brk', 'arg_start', 'arg_end', 'env_start', 'env_end', 'exit_code' ]
for file in os.listdir(dir):
    if file.endswith('.log'):
        with open(file, 'r') as f:
            data = f.read()
            data = data.split('##START-ALLPROCESSES-')
            data = data[1:]
            for i in range(len(data)):
                data[i] = data[i].split('##END-ALLPROCESSES-')
                data[i] = data[i][0]
                data[i] = re.split(r'\r', data[i])
                data[i] = data[i][0]
                data[i] = re.split(r'\n', data[i])
                for j in range(len(data[i])):
                    data[i][j] = re.split(r'\s+', data[i][j])
                    #print(data[i])
                data[i][0] = str(data[i][0])
            data_dict = {}
            for i in range(len(data)):
                data_dict[data[i][0]] = {}
                for j in range(len(columns)):
                    data_dict[data[i][0]][columns[j]] = data[i][j+1]
            print(data_dict)
I converted the epoch date into a str because I was getting 'unhashable list' errors. That made the epoch date show up as a key, but each column key now holds the entire 52-column list of information as a single value, so I am definitely missing something.
To solve this problem, you can follow these steps:
Open the .log file and read the contents
Search for all the sections of interest: the blocks that start with "##START-ALLPROCESSES-" and end with "##END-ALLPROCESSES-"
For each section found, extract the epoch date and create a dictionary with an empty list for each of the 52 columns
Iterate over the lines within the section and split the line into the 52 columns using space as a separator. Add the values to the corresponding list in the dictionary created in step 3
Repeat steps 3 and 4 for all the sections found in the .log file
Return the final nested dictionary
Here is some sample code that implements these steps:
import re

def parse_log_file(log_file_path):
    with open(log_file_path, 'r') as log_file:
        log_contents = log_file.read()
    # each match captures everything between a START marker and the next END marker,
    # i.e. the epoch date plus the process lines
    sections = re.findall(r'##START-ALLPROCESSES-(.*?)##END-ALLPROCESSES-', log_contents, re.DOTALL)
    nested_dict = {}
    for section in sections:
        lines = section.strip().split('\n')
        epoch_date = lines[0].split('-')[-1]  # the first captured line is the epoch date
        column_names = ['column{}'.format(i) for i in range(1, 53)]
        section_dict = {column_name: [] for column_name in column_names}
        for line in lines[1:]:
            # assumes each process line has exactly 52 space-separated fields
            values = line.strip().split()
            for i, value in enumerate(values):
                section_dict[column_names[i]].append(value)
        nested_dict['date{}-{}'.format(epoch_date, len(section_dict['column1']))] = section_dict
    return nested_dict
You can call this function by passing the path to the .log file as an argument. The function returns the nested dictionary described in the problem statement.
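For example (the file name here is hypothetical):
result = parse_log_file('system_status.log')
for date_key, section in result.items():
    print(date_key, section['column1'][:3])   # first three pids in that section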
I need to separate a row into multiple columns; thanks to a previous post I was able to separate them, but some of the rows are empty, and because of that I get this error:
ValueError: Index contains duplicate entries, cannot reshape
here is a sample dataset to mock up this issue:
myData = [['Abc: 9.22 Mno: 6.90 IExplorer 0.00 OCa: 0.00 Foo: 0.00'],
['Abc: 0.61 Mno: 0.14'],
[''],
['MCheese: (37.20) dimes: (186.02) Feria: (1,586.02)'],
['Abc: 16.76 Mno: 4.25 OMG: 63.19'],
['yonka: 19.27'],
['Dome: (552.23)'],
['Fray: 2,584.96'],
['CC: (83.31)'],
[''],
['Abc: 307.34 Mno: 18.40 Feria: 509.67'],
['IExplorer: 26.28 OCa: 26.28 Foo: 730.68'],
['Abc: 122.66 Mno: 11.85 Feria: 213.24'],
[''],
['Wonka: (13.67) Fray: (1,922.48)'],
['Mno: 18.19 IExplorer: 0.00 OCa: 0.00 Foo: 0.00'],
['Abc: 74.06 Mno: 12.34 Feria: 124.42 MCheese: (4.07)'],
[''],
['Abc: 45.96 Mno: 18.98 IExplorer: 0.00 OCa: 0.00 Foo: 0.00'],
['IExplorer: 0.00 OCa: 0.00 Dome: (166.35) Foo: 0.00'],
['']]
df7 = pd.DataFrame(myData)
df7.columns = ['Original']
df7['Original'] = df7['Original'].str.replace(" ","")
df7['Original']
After separating the columns with a regex from a previous post, I get these results:
df8 = df7['Original'].str.extractall(r'^(.*?):([\(\)(\,)0-9.]+)').reset_index().fillna(0)
df8 = df8.pivot(index='level_0', columns=0, values=1).rename_axis(index=None, columns=None).fillna(0)
df8
this gives me this result:
Abc CC Dome Fray IExplorer MCheese Mno Wonka yonka
0 9.22 0 0 0 0 0 0 0 0
1 0.61 0 0 0 0 0 0 0 0
3 0 0 0 0 0 (37.20) 0 0 0
4 16.76 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 19.27
6 0 0 (552.23) 0 0 0 0 0 0
7 0 0 0 2,584.96 0 0 0 0 0
8 0 (83.31) 0 0 0 0 0 0 0
10 307.34 0 0 0 0 0 0 0 0
11 0 0 0 0 26.28 0 0 0 0
12 122.66 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 (13.67) 0
15 0 0 0 0 0 0 18.19 0 0
16 74.06 0 0 0 0 0 0 0 0
18 45.96 0 0 0 0 0 0 0 0
19 0 0 0 0 0.00 0 0 0 0
If I change the regex, the number of columns increases, but I still do not get the entirety of the dataset. For this particular sample, this second snippet gives me more columns:
df8 = df7['Original'].str.extractall(r'(.*?):([\(\)(\,)0-9.]+)').reset_index().fillna(0)
df8 = df8.pivot(index='level_0', columns=0, values=1).rename_axis(index=None, columns=None).fillna(0)
df8
Although in my particular case the first line gives me more columns than the second one, neither of them counts the empty rows.
Is there any way to count those empty rows within the dataset whenever one is found? In total there are 21 rows, but I can only get 19 of them to show and be counted.
We can use str.findall to find all matching occurrences of the regex pattern in each row, then map the occurrences to dicts and create a new dataframe from them. This approach avoids re-indexing the dataframe. You also have to fix your regex pattern so it properly captures the matching pairs.
s = df7['Original'].str.findall(r'([^:0-9]+):\(?([0-9.,]+)\)?')
df_out = pd.DataFrame(map(dict, s), index=s.index).fillna(0)
>>> df_out
Abc Mno OCa Foo MCheese dimes Feria OMG yonka Dome Fray CC IExplorer Wonka
0 9.22 6.90 0.00 0.00 0 0 0 0 0 0 0 0 0 0
1 0.61 0.14 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 37.20 186.02 1,586.02 0 0 0 0 0 0 0
4 16.76 4.25 0 0 0 0 0 63.19 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 19.27 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 552.23 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 2,584.96 0 0 0
8 0 0 0 0 0 0 0 0 0 0 0 83.31 0 0
9 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 307.34 18.40 0 0 0 0 509.67 0 0 0 0 0 0 0
11 0 0 26.28 730.68 0 0 0 0 0 0 0 0 26.28 0
12 122.66 11.85 0 0 0 0 213.24 0 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 0 0 0 1,922.48 0 0 13.67
15 0 18.19 0.00 0.00 0 0 0 0 0 0 0 0 0.00 0
16 74.06 12.34 0 0 4.07 0 124.42 0 0 0 0 0 0 0
17 0 0 0 0 0 0 0 0 0 0 0 0 0 0
18 45.96 18.98 0.00 0.00 0 0 0 0 0 0 0 0 0.00 0
19 0 0 0.00 0.00 0 0 0 0 0 166.35 0 0 0.00 0
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0
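A note on the pattern: "[^:0-9]+" captures the label as a run of characters that are neither a colon nor a digit, and "\(?([0-9.,]+)\)?" captures the number while stripping the optional surrounding parentheses. Rows with no matches produce an empty dict, which survives as an all-zero row, so all 21 rows of the dataset are now present and counted.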
I have data formatted in 10 columns, as follows:
# col1 col2 col3 col4 col5 col6 col7 col8 col9 col10
1 2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20
I want to plot all the data individually as a single column, i.e. the 11th data point will be 11, and so on. How can I do this directly in gnuplot?
An example of the data can be obtained here: data
This is a somewhat special data format. You could rearrange it with whatever tool you like, but you can also rearrange it with gnuplot alone.
For this, you need to have the data in a datablock. For how to get data from a file into a datablock, see the answer to this question: gnuplot: load datafile 1:1 into datablock
Code:
### plot special dataformat
reset session
$Data <<EOD
#SP# 12 10511.100
265 7 2 5 2 10 6 10 4 4
8 8 4 8 8 7 17 16 12 17
9 23 18 16 18 26 18 31 31 38
35 58 48 95 107 156 161 199 282 398
448 704 851 1127 1399 1807 2272 2724 3376 4077
4903 6458 7158 9045 9279 12018 13765 14212 17397 19166
21159 23650 25537 28003 29645 35385 34328 36021 42720 39998
45825 48111 49548 46591 53471 53888 56166 61747 57867 59226
59888 65953 61544 68233 68770 69336 63925 69660 69781 70590
76419 70791 70411 75909 70082 76136 69906 75069 75168 74690
73897 73656 73134 77603 70795 77603 68092 74208 73385 66906
71924 70866 74408 67869 67703 70924 65004 68566 62694 65917
64636 62988 62372 64923 59231 58266 60636 59191 54090 56428
55222 53519 52724 53973 49649 51418 46858 48289 46800 45395
44235 43087 40999 42777 39129 40020 37985 37019 35739 34925
33344 33968 30874 31292 30141 29528 27956 27001 25712 25842
23857 23752 22900 21926 20853 19897 19063 18997 18345 16499
16631 15810 15793 14158 13609 13429 13022 12276 11579 10810
10930 9743 9601 8939 8762 8338 7723 7470 6815 6774
6342 6056 5939 5386 5264 4889 4600 4380 4151 3982
3579 3557 3335 3220 3030 2763 2769 2516 2409 2329
2310 2153 2122 1948 1813 1879 1671 1666 1622 1531
1584 1455 1430 1409 1345 1291 1300 1284 1373 1261
1189 1373 1258 1220 1134 1261 1213 1116 1288 1087
1113 1137 1182 1087 1213 1061 1132 1211 1004 1081
1130 1144 1208 1089 1114 1088 1116 1188 1137 1150
1216 1101 1092 1148 1115 1161 1262 1157 1206 1183
1177 1274 1203 1150 1161 1206 1215 1166 1248 1217
1212 1250 1239 1292 1226 1262 1209 1329 1178 1383
1219 1175 1265 1264 1361 1206 1266 1285 1189 1284
1330 1223 1325 1338 1250 1322 1256 1252 1353 1269
1278 1281 1349 1256 1326 1309 1262 1374 1303 1293
1350 1297 1262 1144 1305 1224 1259 1292 1447 1187
1342 1267 1197 1327 1189 1248 1250 1198 1290 1299
1233 1173 1327 1206 1231 1205 1182 1232 1233 1158
1193 1137 1180 1211 1196 1176 1096 1131 1086 1134
1125 1122 1090 1145 1053 1067 1097 1003 1044 993
1056 1006 915 959 923 943 1026 930 927 929
914 849 920 818 808 888 877 808 848 867
735 785 769 738 744 716 708 677 660 657
589 626 649 581 578 597 580 539 495 541
528 402 457 435 425 417 415 408 366 375
322 341 292 286 272 313 263 255 246 207
213 176 195 180 181 168 153 140 114 130
106 100 97 92 71 71 72 59 57 49
43 42 35 38 36 26 33 29 29 14
22 19 11 11 14 14 6 6 9 4
7 5 2 5 1 3 0 0 0 2
0 1 3 0 2 0 0 0 0 1
1 0 3 1 0 1 2 1 2 0
0 3 0 0 1 0 0 1 2 0
1 0 0 0 0 0 0 1 2 0
2 3 1 0 0 3 1 0 1 0
0 1 0 1 1 0 0 1 0 0
0 1 0 0 1 2 1 2 0 1
1 1 0 0 0 0 0 0 0 2
1 1 0 0 0 1 0 1 0 1
1 1 1 1 1 0 0 3 0 2
1 1 1 0 1 0 1 0 0 2
1 1 0 1 0 0 0 1 0 1
0 0 0 0 2 3 1 2 0 0
1 2 0 1 2 1 1 1 1 1
1 0 1 0 0 2 1 2 2 1
0 0 1 1 1 0 1 1 1 0
0 2 1 1 1 0 0 0 1 1
0 2 1 1 2 0 2 1 1 1
1 1 0 0 0 2 0 0 1 0
1 0 1 1 2 2 0 0 0 3
2 0 0 0 2 0 1 1 0 1
0 0 0 2 1 4 0 1 0 1
2 0 0 0 0 0 1 0 0 2
0 0 0 0 1 0 0 0 0 0
0 1 0 1 0 0 0 0 1 2
0 0 1 0 1 0 0 1 0 1
0 0 2 1 1 0 0 1 0 0
0 0 0 1 0 0 0 0 0 1
0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 1 0 0
1 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1
0 0 1 2 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0
EOD
set print $Data2
HeaderLines = 1
Line = ''
do for [i=HeaderLines+1:|$Data|] {
    Line = Line.sprintf(' %s',$Data[i][1:strlen($Data[i])-1])
}
print Line
set print
WavStart = 450
WavStep = 0.1
myWav(col) = WavStart + column(col)*WavStep
set grid x,y
plot $Data2 u (myWav(1)):0:3 matrix w l lc "red" notitle
### end of code
Result:
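If you would rather rearrange the data outside gnuplot, here is a minimal Python sketch (the input and output file names are made up) that flattens the 10-column layout into one value per line:
# flatten 10-column rows into a single column; skips the one '#SP#' header line
# 'data.txt' and 'data_1col.txt' are placeholder names
with open('data.txt') as f:
    lines = f.readlines()[1:]
values = [v for line in lines for v in line.split()]
with open('data_1col.txt', 'w') as out:
    out.write('\n'.join(values) + '\n')
The flattened file can then be plotted directly, using the row number (pseudo-column 0) as the x value.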
I have a bladefs volume, and I just checked /proc/self/mountstats, where I see per-operation statistics:
...
opts: rw,vers=3,rsize=131072,wsize=131072,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.2.100,mountvers=3,mountport=903,mountproto=tcp,local_lock=all
age: 18129
caps: caps=0x3fc7,wtmult=512,dtsize=32768,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 18840 116049 23 5808 22138 21048 146984 13896 287 2181 0 7560 31380 0 9565 5106 0 6471 0 0 13896 0 0 0 0 0 0
bytes: 339548407 48622919 0 0 311167118 48622919 76846 13896
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 875 1 7 0 0 85765 85764 1 206637 0 37 1776 35298
per-op statistics
NULL: 0 0 0 0 0 0 0 0
GETATTR: 18840 18840 0 2336164 2110080 92 8027 8817
SETATTR: 0 0 0 0 0 0 0 0
LOOKUP: 21391 21392 0 3877744 4562876 118 103403 105518
ACCESS: 20183 20188 0 2584304 2421960 72 10122 10850
READLINK: 0 0 0 0 0 0 0 0
READ: 3425 3425 0 465848 311606600 340 97323 97924
WRITE: 2422 2422 0 48975488 387520 763 200645 201522
CREATE: 2616 2616 0 447392 701088 21 870 1088
MKDIR: 858 858 0 188760 229944 8 573 705
SYMLINK: 0 0 0 0 0 0 0 0
MKNOD: 0 0 0 0 0 0 0 0
REMOVE: 47 47 0 6440 6768 0 8 76
RMDIR: 23 23 0 4876 3312 0 3 5
RENAME: 23 23 0 7176 5980 0 5 6
LINK: 0 0 0 0 0 0 0 0
READDIR: 160 160 0 23040 4987464 0 16139 16142
READDIRPLUS: 15703 15703 0 2324044 8493604 43 1041634 1041907
FSSTAT: 1 1 0 124 168 0 0 0
FSINFO: 2 2 0 248 328 0 0 0
PATHCONF: 1 1 0 124 140 0 0 0
COMMIT: 68 68 0 9248 10336 2 272 275...
about my bladefs. I am interested in the READ operation statistics. As far as I know, the last column (97924) means:
execute: How long ops of this type take to execute (from
rpc_init_task to rpc_exit_task) (microsecond)
How do I interpret this? Is it the average time of each read operation, regardless of the block size? I have a very strong suspicion that I have problems with NFS: am I right? The value of 0.1 sec looks bad to me, but I am not sure exactly how to interpret this time: an average, some kind of sum...?
After reading the kernel source: the statistics are printed from rpc_clnt_show_stats() in net/sunrpc/stats.c, and the 8th column of the per-op statistics seems to be printed by _print_rpc_iostats, which prints the struct rpc_iostats member om_execute. (The newest kernels have 9 columns, with errors in the last column.)
That member looks to be only referenced/actually changed in rpc_count_iostats_metrics with:
execute = ktime_sub(now, task->tk_start);
op_metrics->om_execute = ktime_add(op_metrics->om_execute, execute);
Assuming ktime_add does what it says, the value of om_execute only ever increases. So the 8th column of mountstats is the running sum of the time spent on operations of this type.
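If that is right, a quick sanity check on the READ line above (3425 operations, cumulative execute value 97924) gives the per-operation average. A minimal check in Python:
# per-op average from the cumulative 'execute' column of the READ line above;
# the unit is whatever the kernel prints (the documentation quoted says microseconds)
ops = 3425
execute_total = 97924
print(execute_total / ops)   # roughly 28.6 per READ on average
So 97924 is a running total, not the cost of a single read.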
I have a matrix, output.dat:
0 0 0 0 0 0 3 7 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 11 16 6 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 7 8 4 16 4 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 2 2 5 11 3 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 3 1 9 10 9 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 12 28 13 11 5 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 6 17 33 14 2 0 5 2 0 0 0 0 0 0 0
0 0 0 0 0 1 13 15 11 6 0 0 5 7 0 0 0 0 0 0
0 0 0 0 0 0 3 3 8 3 0 0 0 3 0 0 0 0 0 0
0 0 0 0 0 0 0 8 8 8 1 2 1 3 2 0 0 0 0 0
0 0 0 0 0 0 0 1 17 10 4 7 4 12 3 0 0 0 0 0
0 0 0 0 0 3 2 3 6 22 9 5 8 5 1 0 0 0 0 0
0 0 0 0 0 1 5 7 10 35 4 6 6 9 4 0 0 0 0 0
0 0 0 0 0 2 12 12 30 52 23 11 8 7 5 1 0 0 0 0
0 0 0 0 1 7 25 16 33 30 26 16 21 19 5 2 0 0 0 0
0 0 0 0 0 0 12 36 19 22 28 19 30 17 9 0 0 0 0 0
0 0 0 0 0 11 18 12 37 32 27 26 33 21 10 12 3 0 0 0
0 0 0 0 0 11 14 23 44 59 45 26 28 9 3 7 0 0 0 0
0 0 0 0 0 0 0 8 19 23 22 11 34 32 25 7 0 0 0 0
0 0 0 0 0 0 0 0 4 8 9 16 21 26 20 11 12 4 6 2
Using this in a bash script produces a perfectly fine-looking plot of the matrix:
echo "set terminal png font arial 30 size 1600,1200;
set output 'output.png';set xrange [1:20];set yrange [1:20];set xlabel 'x';set ylabel 'y';
set pm3d map;set pm3d interpolate 0,0;splot 'output.dat' matrix" | gnuplot
However, I'd like the x-axis and y-axis to say "0...1" instead of "1...20". If I simply change the xrange from [1:20] to [0:1], no data is plotted, and scaling the data doesn't work. Using xticlabels (at least as I understand it) hasn't successfully changed the axes either.
How can I change the x and y to say "0...1" instead of "1...20"?
I don't know what you tried, but scaling in the using statement works fine:
echo "set terminal pngcairo;set autoscale fix; set tics out nomirror;
set xlabel 'x';set ylabel 'y'; set pm3d map interpolate 0,0;
splot 'output.dat' matrix using (\$1/19.0):(\$2/19.0):3" | gnuplot > output.png
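Here the matrix indices run from 0 to 19 in each direction, so dividing both coordinates by 19.0 maps the 20 grid points onto the range 0...1.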