When I run the curl upload command below, it produces a lot of output. How can I cut this down to show just the final upload speed?
curl --upload-file /tmp/testlocal -v -u tu**r:******#*23 http://nexus3-core:8081/nexus3/repository/tes******/tes*****
* Expire in 0 ms for 6 (transfer 0x558e6c881f50)
* Expire in 1 ms for 1 (transfer 0x558e6c881f50)
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Expire in 0 ms for 1 (transfer 0x558e6c881f50)
* Expire in 2 ms for 1 (transfer 0x558e6c881f50)
* Expire in 0 ms for 1 (transfer 0x558e6c881f50)
* Expire in 1 ms for 1 (transfer 0x558e6c881f50)
* Expire in 1 ms for 1 (transfer 0x558e6c881f50)
* Expire in 1 ms for 1 (transfer 0x558e6c881f50)
* Trying 172.30.51.207...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x558e6c881f50)
* Connected to nexus3-core (172.30.51.207) port 8081 (#0)
* Server auth using Basic with user 'tu****'
> PUT /nexus3/repository/testu****/tes**** HTTP/1.1
> Host: nexus3-core:8081
> Authorization: Basic dHVzZXI6VHVzZXJAMTIz
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Length: 1048576000
> Expect: 100-continue
>
* Expire in 1000 ms for 0 (transfer 0x558e6c881f50)
< HTTP/1.1 100 Continue
} [41940 bytes data]
4 1000M 0 0 4 45.5M 0 50.7M 0:00:19 --:--:-- 0:00:19 50.6M
8 1000M 0 0 8 89.7M 0 47.3M 0:00:21 0:00:01 0:00:20 47.2M
13 1000M 0 0 13 139M 0 48.2M 0:00:20 0:00:02 0:00:18 48.2M
19 1000M 0 0 19 193M 0 49.6M 0:00:20 0:00:03 0:00:17 49.6M
23 1000M 0 0 23 234M 0 47.9M 0:00:20 0:00:04 0:00:16 47.9M
29 1000M 0 0 29 291M 0 49.4M 0:00:20 0:00:05 0:00:15 49.2M
34 1000M 0 0 34 346M 0 50.1M 0:00:19 0:00:06 0:00:13 51.2M
40 1000M 0 0 40 408M 0 51.7M 0:00:19 0:00:07 0:00:12 53.8M
46 1000M 0 0 46 465M 0 52.3M 0:00:19 0:00:08 0:00:11 54.3M
52 1000M 0 0 52 520M 0 52.5M 0:00:19 0:00:09 0:00:10 57.1M
58 1000M 0 0 58 587M 0 53.9M 0:00:18 0:00:10 0:00:08 59.1M
64 1000M 0 0 64 648M 0 54.4M 0:00:18 0:00:11 0:00:07 60.3M
70 1000M 0 0 70 706M 0 54.7M 0:00:18 0:00:12 0:00:06 59.5M
76 1000M 0 0 76 763M 0 54.9M 0:00:18 0:00:13 0:00:05 59.5M
78 1000M 0 0 78 781M 0 51.2M 0:00:19 0:00:15 0:00:04 48.7M
79 1000M 0 0 79 791M 0 49.7M 0:00:20 0:00:15 0:00:05 40.7M
83 1000M 0 0 83 839M 0 49.6M 0:00:20 0:00:16 0:00:04 38.2M
89 1000M 0 0 89 895M 0 50.0M 0:00:19 0:00:17 0:00:02 37.8M
95 1000M 0 0 95 957M 0 50.6M 0:00:19 0:00:18 0:00:01 38.9M* We are completely uploaded and fine
100 1000M 0 0 100 1000M 0 48.6M 0:00:20 0:00:20 --:--:-- 41.0M< HTTP/1.1 201 Created
< Date: Fri, 08 Jan 2021 06:52:32 GMT
< Server: Nexus/3.23.0-03 (OSS)
< X-Content-Type-Options: nosniff
< Content-Security-Policy: sandbox allow-forms allow-modals allow-popups allow-presentation allow-scripts allow-top-navigation
< X-XSS-Protection: 1; mode=block
< Content-Length: 0
<
100 1000M 0 0 100 1000M 0 47.9M 0:00:20 0:00:20 --:--:-- 41.9M
My desired output is as below:
Curl upload speed : 41.9M
I know curl prints its progress to stderr, and I am struggling to capture that output with grep.
Not exactly answering your question, but perhaps a better way to get the average upload speed is to use the dedicated option for it. Try this:
curl -w 'Speed: %{speed_upload}\n' -T local-file http://...target...
That -w option string will then output the average upload speed (in bytes/sec) after a successful transfer.
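If you also silence the progress meter with -s (keeping -S so real errors still show) and discard the response body, that -w line becomes the only thing printed. A sketch, with placeholder credentials and repository path:
curl -sS -o /dev/null -w 'Upload speed: %{speed_upload}\n' --upload-file /tmp/testlocal -u user:password http://nexus3-core:8081/nexus3/repository/your-repo/your-file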
With your shown samples, could you please try the following. I couldn't test it (because of the curl command), but it should work OK IMHO.
These commands print the last field of the last line of your output.
your_curl_command 2>&1 |
tac |
awk 'FNR==1{print $NF; exit}'
Or, within a single awk, try:
your_curl_command 2>&1 |
awk '{val=$NF} END{print val}'
If you are always interested only in the last field of the last line, you can combine tail -1 with awk as follows:
curl_command 2>&1 | tail -1 | awk '{print $NF}'
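One caveat worth knowing: curl writes the progress meter to stderr, which is why the pipelines above redirect it with 2>&1, and depending on the environment the meter updates may be separated by carriage returns rather than newlines. If the line-based tools misbehave, translating the carriage returns first should make them reliable:
curl_command 2>&1 | tr '\r' '\n' | tail -1 | awk '{print $NF}'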
Related
I have a bladefs volume, and I just checked /proc/self/mountstats, where I see statistics per operation:
...
opts: rw,vers=3,rsize=131072,wsize=131072,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.2.100,mountvers=3,mountport=903,mountproto=tcp,local_lock=all
age: 18129
caps: caps=0x3fc7,wtmult=512,dtsize=32768,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 18840 116049 23 5808 22138 21048 146984 13896 287 2181 0 7560 31380 0 9565 5106 0 6471 0 0 13896 0 0 0 0 0 0
bytes: 339548407 48622919 0 0 311167118 48622919 76846 13896
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 875 1 7 0 0 85765 85764 1 206637 0 37 1776 35298
per-op statistics
NULL: 0 0 0 0 0 0 0 0
GETATTR: 18840 18840 0 2336164 2110080 92 8027 8817
SETATTR: 0 0 0 0 0 0 0 0
LOOKUP: 21391 21392 0 3877744 4562876 118 103403 105518
ACCESS: 20183 20188 0 2584304 2421960 72 10122 10850
READLINK: 0 0 0 0 0 0 0 0
READ: 3425 3425 0 465848 311606600 340 97323 97924
WRITE: 2422 2422 0 48975488 387520 763 200645 201522
CREATE: 2616 2616 0 447392 701088 21 870 1088
MKDIR: 858 858 0 188760 229944 8 573 705
SYMLINK: 0 0 0 0 0 0 0 0
MKNOD: 0 0 0 0 0 0 0 0
REMOVE: 47 47 0 6440 6768 0 8 76
RMDIR: 23 23 0 4876 3312 0 3 5
RENAME: 23 23 0 7176 5980 0 5 6
LINK: 0 0 0 0 0 0 0 0
READDIR: 160 160 0 23040 4987464 0 16139 16142
READDIRPLUS: 15703 15703 0 2324044 8493604 43 1041634 1041907
FSSTAT: 1 1 0 124 168 0 0 0
FSINFO: 2 2 0 248 328 0 0 0
PATHCONF: 1 1 0 124 140 0 0 0
COMMIT: 68 68 0 9248 10336 2 272 275...
for my bladefs mount. I am interested in the READ operation statistics. As far as I know, the last column (97924) means:
execute: How long ops of this type take to execute (from
rpc_init_task to rpc_exit_task) (microsecond)
How should I interpret this? Is it the average time of each read operation, regardless of the block size? I have a very strong suspicion that I have problems with NFS: am I right? The value of 0.1 sec looks bad to me, but I am not sure exactly how to interpret this time: an average, some sum...?
After reading the kernel source: the statistics are printed from rpc_clnt_show_stats() in net/sunrpc/stats.c, and the 8th column of the per-op statistics seems to be printed from _print_rpc_iostats; it prints the struct rpc_iostats member om_execute. (The newest kernels have 9 columns, with errors in the last column.)
That member appears to be referenced/actually changed only in rpc_count_iostats_metrics, with:
execute = ktime_sub(now, task->tk_start);
op_metrics->om_execute = ktime_add(op_metrics->om_execute, execute);
Assuming ktime_add does what it says, the value of om_execute only increases. So the 8th column of mountstats would be the sum of the time of operations of this type.
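To make that concrete with the READ line above (my arithmetic; note that _print_rpc_iostats emits these columns via ktime_to_ms in current kernels, so the unit is most likely milliseconds rather than the microseconds quoted in the question): the per-op average is that cumulative 8th column divided by the op count, which you can compute directly:
# Sketch: divide the cumulative execute column (9th field, counting the op name)
# by the op count (2nd field) on the READ line of /proc/self/mountstats.
awk '$1 == "READ:" { printf "avg execute per READ: %.1f\n", $9 / $2 }' /proc/self/mountstats
For the numbers shown that is 97924 / 3425 ≈ 28.6 per READ, i.e. a running total averaging out to tens of milliseconds per operation, not a per-operation latency of 0.1 s.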
We have deployed a global Apache Cassandra cluster (nodes: 12, RF: 3, version: 3.11.2) in our production environment. We are running into an issue where a major compaction on a column family fails to clear tombstones on one node (out of 3 replicas), even though the metadata shows the min timestamp has passed the gc_grace_seconds set on the table.
Here is the sstablemetadata output:
SSTable: mc-4302-big
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Bloom Filter FP chance: 0.010000
Minimum timestamp: 1
Maximum timestamp: 1560326019515476
SSTable min local deletion time: 1560233203
SSTable max local deletion time: 2147483647
Compressor: org.apache.cassandra.io.compress.LZ4Compressor
Compression ratio: 0.8808303792058351
TTL min: 0
TTL max: 0
First token: -9201661616334346390 (key=bca773eb-ecbb-49ec-9330-cc16da310b58:::)
Last token: 9117719078924671254 (key=7c23b975-5354-4c82-82e5-1762bac75a8d:::)
minClustringValues: [00000f8f-74a9-4ce3-9d87-0a4dabef30c1]
maxClustringValues: [ffffc966-a02c-4e1f-bdd1-256556624288]
Estimated droppable tombstones: 46.31761624099541
SSTable Level: 0
Repaired at: 0
Replay positions covered: {}
totalColumnsSet: 0
totalRows: 618382
Estimated tombstone drop times:
1560233680: 353
1560234658: 237
1560235604: 176
1560236803: 471
1560237652: 402
1560238342: 195
1560239166: 373
1560239969: 356
1560240586: 262
1560241207: 247
1560242037: 387
1560242847: 357
1560243742: 280
1560244469: 283
1560245095: 353
1560245957: 357
1560246773: 362
1560247956: 449
1560249034: 217
1560249849: 310
1560251080: 296
1560251984: 304
1560252993: 239
1560253907: 407
1560254839: 977
1560255761: 671
1560256486: 317
1560257199: 679
1560258020: 703
1560258795: 507
1560259378: 298
1560260093: 2302
1560260869: 2488
1560261535: 2818
1560262176: 2842
1560262981: 1685
1560263708: 1830
1560264308: 808
1560264941: 1990
1560265753: 1340
1560266708: 2174
1560267629: 2253
1560268400: 1627
1560269174: 2347
1560270019: 2579
1560270888: 3947
1560271690: 1727
1560272446: 2573
1560273249: 1523
1560274086: 3438
1560275149: 2737
1560275966: 3487
1560276814: 4101
1560277660: 2012
1560278617: 1198
1560279680: 769
1560280441: 1337
1560281033: 608
1560281876: 2065
1560282546: 2926
1560283128: 6305
1560283836: 824
1560284574: 71
1560285166: 140
1560285828: 118
1560286404: 83
1560295835: 72
1560296951: 456
1560297814: 670
1560298496: 271
1560299333: 473
1560300159: 284
1560300831: 127
1560301551: 536
1560302309: 425
1560303302: 860
1560304064: 465
1560304782: 319
1560305657: 323
1560306552: 236
1560307454: 368
1560308409: 320
1560309178: 210
1560310091: 177
1560310881: 85
1560311970: 147
1560312706: 76
1560313495: 88
1560314847: 687
1560315817: 1618
1560316544: 1245
1560317423: 5361
1560318491: 2060
1560319595: 5853
1560320587: 5390
1560321473: 3868
1560322644: 5784
1560323703: 6861
1560324838: 7200
1560325744: 5642
Count Row Size Cell Count
1 0 3054
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
10 0 0
12 0 0
14 0 0
17 0 0
20 0 0
24 0 0
29 0 0
35 0 0
42 0 0
50 0 0
60 98 0
72 49 0
86 46 0
103 2374 0
124 39 0
149 36 0
179 43 0
215 18 0
258 26 0
310 24 0
372 18 0
446 16 0
535 19 0
642 27 0
770 17 0
924 12 0
1109 14 0
1331 23 0
1597 20 0
1916 12 0
2299 11 0
2759 11 0
3311 11 0
3973 12 0
4768 5 0
5722 8 0
6866 5 0
8239 5 0
9887 6 0
11864 5 0
14237 10 0
17084 1 0
20501 8 0
24601 2 0
29521 2 0
35425 3 0
42510 2 0
51012 2 0
61214 1 0
73457 2 0
88148 3 0
105778 0 0
126934 3 0
152321 2 0
182785 1 0
219342 0 0
263210 0 0
315852 0 0
379022 0 0
454826 0 0
545791 0 0
654949 0 0
785939 0 0
943127 0 0
1131752 0 0
1358102 0 0
1629722 0 0
1955666 0 0
2346799 0 0
2816159 0 0
3379391 1 0
4055269 0 0
4866323 0 0
5839588 0 0
7007506 0 0
8409007 0 0
10090808 1 0
12108970 0 0
14530764 0 0
17436917 0 0
20924300 0 0
25109160 0 0
30130992 0 0
36157190 0 0
43388628 0 0
52066354 0 0
62479625 0 0
74975550 0 0
89970660 0 0
107964792 0 0
129557750 0 0
155469300 0 0
186563160 0 0
223875792 0 0
268650950 0 0
322381140 0 0
386857368 0 0
464228842 0 0
557074610 0 0
668489532 0 0
802187438 0 0
962624926 0 0
1155149911 0 0
1386179893 0 0
1663415872 0 0
1996099046 0 0
2395318855 0 0
2874382626 0
3449259151 0
4139110981 0
4966933177 0
5960319812 0
7152383774 0
8582860529 0
10299432635 0
12359319162 0
14831182994 0
17797419593 0
21356903512 0
25628284214 0
30753941057 0
36904729268 0
44285675122 0
53142810146 0
63771372175 0
76525646610 0
91830775932 0
110196931118 0
132236317342 0
158683580810 0
190420296972 0
228504356366 0
274205227639 0
329046273167 0
394855527800 0
473826633360 0
568591960032 0
682310352038 0
818772422446 0
982526906935 0
1179032288322 0
1414838745986 0
Estimated cardinality: 3054
EncodingStats minTTL: 0
EncodingStats minLocalDeletionTime: 1560233203
EncodingStats minTimestamp: 1
KeyType: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
ClusteringTypes: [org.apache.cassandra.db.marshal.UUIDType]
StaticColumns: {}
RegularColumns: {}
So far, here is what we have tried:
1) major compaction with lower gc_grace_seconds
2) nodetool garbagecollect
3) nodetool scrub
None of the above methods has helped. Again, this is only happening on one node (out of 3 replicas).
The tombstone markers generated during your major compaction are just that: markers. The data has been removed, but a delete marker is left in place so that the other replicas have gc_grace_seconds to process it too. The tombstone markers are fully dropped the next time the SSTable is compacted. Unfortunately, because you've run a major compaction (rarely ever recommended), it may be a long time before there are suitable SSTables to compact with it and clean up the tombstones. Remember that the tombstone drop will also only happen after local_delete_time + gc_grace_seconds, as defined on the table.
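If waiting for a natural compaction partner is not acceptable, one knob to consider (a sketch, not part of the advice above; keyspace/table names are hypothetical placeholders) is enabling single-SSTable tombstone compactions via the table's compaction subproperties:
# unchecked_tombstone_compaction lets Cassandra compact an SSTable on its own
# once its droppable-tombstone ratio exceeds tombstone_threshold (and the
# tombstones are past gc_grace_seconds).
cqlsh -e "ALTER TABLE my_ks.my_cf WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'unchecked_tombstone_compaction': 'true', 'tombstone_threshold': '0.1'};"
Given the high Estimated droppable tombstones value your sstablemetadata output reports, such an SSTable should become eligible quickly.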
If you're interested in learning more about how tombstones and compaction work together in the context of delete operations I suggest reading the following articles:
https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
Uploading a file using cURL in a Node.js child_process spawn.
It seems to be working fine (the file uploads without fail), but the output streams to stderr instead of stdout.
The JS is:
// Spawn curl to upload the file; note that curl writes its progress meter to stderr.
const curlOpts = ["-T", ThisJobRslts.DestPath, "-u", "userid#ftpdomain.com:password", "ftp://host.address.com"];
const curl = require("child_process").spawn("curl", curlOpts);
curl.stdout.on("data", (data) => {
  console.log("stdout ", data.toString("utf8"));
});
curl.stderr.on("data", (data) => {
  console.log("stderr " + data.toString("utf8"));
});
curl.on("close", (code) => {
  console.log("Closed", code); // exit code of the curl process
});
Console gets:
stderr % Total % Received % Xferd Average Speed Time Time Time Cur
stderr rent
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
stderr 100 120 0 0 100 120 0 99 0:00:01 0:00:01 --:--:
stderr -- 99
stderr 100 120 0 0 100 120 0 54 0:00:02 0:00:02 --:--:--
stderr 54
stderr 100 120 0 0 100 120 0 37 0:00:03 0:00:03 --:--:-- 37
stderr 100 120 0 0 100 120 0 28 0
stderr :00:04 0:00:04 --:--:-- 28
stderr 100 120 0 0 100 120 0 23 0:00:05 0:00:
stderr 05 --:--:-- 23
stderr 100 120 0 0 100 120 0 19 0:00:06 0:00:06 --:--:
stderr -- 0
stderr 100 120 0 0 100 120 0 16 0:00
stderr :07 0:00:07 --:--:-- 0
stderr 100 120 0 0 100 120 0 14 0:00:08 0:00:08 --:--:-- 0
stderr 100 120 0 0 100 120 0 13 0:00:09 0:00:
stderr 09 --:--:-- 0
stderr 100 120 0 0 100 120 0 11 0:00
stderr :10 0:00:10 --:--:-- 0
stderr 100 161 0 41 100 120 3 11 0:00:10 0:00:10 --:--:-
stderr - 0
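For what it's worth, this is curl's documented behaviour rather than a bug: the progress meter goes to stderr by design, so that stdout stays clean for the actual transfer output. If the goal is machine-readable output from the child process, one hedged sketch (mirroring the -w approach shown earlier, with the same placeholder credentials and paths) is to silence the meter and print only the fields you need:
curl -sS -w 'Upload speed: %{speed_upload}\n' -T local-file -u user:pass ftp://host.address.com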
I see that the bytes being written to the commit log are in MBs, but the data actually being sent was only a couple of MB (< 4 MB). I am not sure why I am seeing such stats.
Here is the dstat output for my disk (commit log):
date/time |usr sys idl wai hiq siq| 1m 5m 15m | read writ| read writ|util| recv send
23-03 12:08:06| 27 4 66 2 0 0|13.8 6.14 3.50| 0 110M| 0 893 |66.8| 73M 79M
23-03 12:08:07| 29 5 64 2 0 0|13.8 6.14 3.50| 0 119M| 0 970 |58.8| 84M 81M
23-03 12:08:08| 29 4 64 3 0 0|13.8 6.14 3.50| 0 114M| 0 925 |70.4| 76M 75M
23-03 12:08:09| 30 6 63 2 0 0|13.2 6.13 3.52| 0 104M| 0 852 |58.0| 84M 73M
23-03 12:08:10| 30 5 63 2 0 0|13.2 6.13 3.52| 0 147M| 0 1190 |62.4| 92M 93M
23-03 12:08:11| 30 4 64 2 0 0|13.2 6.13 3.52| 0 113M| 0 923 |61.6| 77M 74M
23-03 12:08:12| 26 4 67 2 0 0|13.2 6.13 3.52| 0 134M| 0 1094 |56.0| 94M 90M
23-03 12:08:13| 39 5 54 1 0 0|13.2 6.13 3.52| 0 121M| 0 986 |54.4| 98M 88M
23-03 12:08:14| 25 4 68 3 0 0|12.7 6.15 3.53| 0 121M| 0 979 |71.2| 99M 87M
23-03 12:08:15| 36 6 55 3 0 0|12.7 6.15 3.53| 0 123M| 0 993 |62.0| 90M 93M
23-03 12:08:16| 31 6 60 2 0 0|12.7 6.15 3.53| 0 106M| 0 854 |54.8| 98M 104M
23-03 12:08:17| 37 6 54 2 0 1|12.7 6.15 3.53| 0 133M| 0 1067 |59.2| 92M 93M
23-03 12:08:18| 27 4 66 3 0 0|12.7 6.15 3.53| 0 116M| 0 936 |64.8| 97M 96M
23-03 12:08:19| 33 6 59 2 0 0|
I am using autobench for benchmarking. An example autobench command is shown below.
autobench --single_host --host1 testhost.foo.com --uri1 /index.html --quiet
--timeout 5 --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10
--num_conn 5000 --file bench.tsv
The URI I have to specify has a query string attached to it. When I run the command with the query, I get the following result:
dem_req_rate req_rate_localhost con_rate_localhost min_rep_rate_localhost avg_rep_rate_localhost max_rep_rate_localhost stddev_rep_rate_localhost resp_time_localhost net_io_localhost errors_localhost
200 0 20 0 0 0 0 0 0 101
400 0 40 0 0 0 0 0 0 101
600 0 60 0 0 0 0 0 0 101
800 0 80 0 0 0 0 0 0 101
1000 0 100 0 0 0 0 0 0 101
1200 0 120 0 0 0 0 0 0 101
1400 0 140 0 0 0 0 0 0 101
1600 0 160 0 0 0 0 0 0 101
1800 0 180 0 0 0 0 0 0 101
2000 0 200 0 0 0 0 0 0 101
The request and response columns for the query are all zeroes. Can anybody please tell me how to give a query as part of the URI?
Thank you in advance
It worked for me when I surrounded the URI containing the query string in single quotes. Something like:
--uri1 '/my/uri/query?string'
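For example, adapting the command from the question (the query string itself is a hypothetical placeholder):
autobench --single_host --host1 testhost.foo.com --uri1 '/index.html?key=value' --quiet --timeout 5 --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10 --num_conn 5000 --file bench.tsv
The single quotes stop the shell from interpreting characters such as ? and & before autobench ever sees the URI.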