Sybase 0 worker processes - sap-ase

I'm trying to analyse the output of sp_sysmon; the problem is that the number of worker processes reported is zero.
Here is the query:
sp_sysmon begin_sample
go
select top 1000 * from BEVENTS
go
sp_sysmon end_sample, wpm
and this is the output:
Worker Process Requests
Total Requests 0.0 0.0 0 n/a
Worker Process Usage
Total Used 0.0 0.0 0 n/a
Max Ever Used During Sample 0.0 0.0 0 n/a
Memory Requests for Worker Processes
Total Requests 0.0 0.0 0 n/a

Having 0 Worker Process Requests in itself isn't a problem: it just means that nothing used worker processes during the sample interval. If you want to see non-zero numbers, run something that uses worker processes, such as dbcc checkstorage, during the sp_sysmon run.
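For example, following the same pattern as the query above, you could run something like this (a sketch only; it assumes dbcc checkstorage is already configured with a dbccdb database, and your_database is a placeholder):
sp_sysmon begin_sample
go
dbcc checkstorage(your_database)
go
sp_sysmon end_sample, wpm
go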

Related

Advice on stopping compaction to reduce slowness

I am seeing high CPU and memory usage from Cassandra on the seed node. Is it advisable to stop compaction (nodetool stop) and enable it only in off-peak hours? Should I do manual compaction or enable autocompaction? I see a lot of Native-Transport-Requests. I have three seed nodes; this is the first seed node.
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 54255 0 0
MiscStage 0 0 0 0 0
CompactionExecutor 2 2566 352765 0 0
MutationStage 0 0 2659921760 0 0
MemtableReclaimMemory 0 0 180958 0 0
PendingRangeCalculator 0 0 21 0 0
GossipStage 0 0 338375 0 0
SecondaryIndexManagement 0 0 0 0 0
HintsDispatcher 0 0 63 0 0
RequestResponseStage 0 1 1684328696 0 0
Native-Transport-Requests 4 0 1538523706 0 47006391
ReadRepairStage 0 0 2197 0 0
CounterMutationStage 0 0 0 0 0
MigrationStage 0 0 0 0 0
MemtablePostFlush 1 1 216220 0 0
PerDiskMemtableFlushWriter_0 1 1 180958 0 0
ValidationExecutor 0 0 33250 0 0
Sampler 0 0 0 0 0
MemtableFlushWriter 1 1 180958 0 0
InternalResponseStage 0 0 141677 0 0
ViewMutationStage 0 0 0 0 0
AntiEntropyStage 0 0 166254 0 0
CacheCleanupExecutor 0 0 0 0 0
Repair#9 0 0 5719 0 0
I do see high compaction activity. Is it advisable to disable compactions using nodetool stop?
$ nodetool info
ID : ebeda774-cea8-40bb-9322-69c6fcded5a9
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 535.37 GiB
Generation No : 1636316595
Uptime (seconds) : 73152
Heap Memory (MB) : 19542.18 / 32168.00
Off Heap Memory (MB) : 1337.98
Data Center : us-west2
Rack : a
Exceptions : 15
Key Cache : entries 152283, size 23.07 MiB, capacity 100 MiB, 23835 hits, 280738 requests, 0.085 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 50 MiB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Chunk Cache : entries 6782, size 423.88 MiB, capacity 480 MiB, 23947952 misses, 24381819 requests, 0.018 recent hit rate, 250.977 microseconds miss latency
Percent Repaired : 0.49796724500672584%
Token : (invoke with -T/--tokens to see all 256 tokens)
$ free -h
total used free shared buff/cache available
Mem: 62G 53G 658M 1.0M 8.5G 8.5G
Swap: 0B 0B 0B
~$ nodetool compactionstats
pending tasks: 197
....
id compaction type keyspace table completed total unit progress
5e555610-40b2-11ec-9b5a-27bc920e6e55 Compaction mykeyspace table1 27299674 89930474 bytes 30.36%
5e55f251-40b2-11ec-9b5a-27bc920e6e55 Compaction mykeyspace table2 13922048 74426264 bytes 18.71%
Active compaction remaining time : 0h00m02s
I would definitely not run compaction manually. Most of the compaction thresholds are file-size based, which means that forcing it creates files sized outside of the normal progression. The result is that the chances of compaction running on that table again are extremely slim. Basically, once you start down that path, you'll be running manual compactions forever.
I would also say that compaction is a good thing. You want it to happen, as compacted files are necessary to keep reads performing well. Of course, that's not much of a consolation when the compaction process is affecting operational activity.
tl;dr:
One thing I have done in the past is to lower compaction throughput during the day. Not sure what throughput you're running with currently, but you can find out by running nodetool getcompactionthroughput:
% bin/nodetool getcompactionthroughput
Current compaction throughput: 64 MB/s
So at the times when customer/operational traffic is high, you can reduce that significantly:
% bin/nodetool setcompactionthroughput 1
% bin/nodetool getcompactionthroughput
Current compaction throughput: 1 MB/s
1 MB/second is the lowest value that compaction throughput can be set to. If you set it to zero, it is "un-throttled," which means it will consume all the resources it can get. Setting it to 1 brings its resource use (and speed) down to a trickle.
Once the busy daily traffic subsides, that setting can be turned back up:
% bin/nodetool setcompactionthroughput 256
% bin/nodetool getcompactionthroughput
Current compaction throughput: 256 MB/s
This can be accomplished with a scheduled job for each command.
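For example, a pair of crontab entries along these lines could do it (the times, days, and nodetool path are assumptions; adjust them for your environment):
# throttle compaction to 1 MB/s at 07:00 on weekdays
0 7 * * 1-5 /opt/cassandra/bin/nodetool setcompactionthroughput 1
# restore compaction throughput to 256 MB/s at 19:00 on weekdays
0 19 * * 1-5 /opt/cassandra/bin/nodetool setcompactionthroughput 256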

CounterMutationStage and ViewMutationStage metrics are missing in Cassandra 4.0

When invoking nodetool tpstats on Cassandra 4.0, here is what I get (see the nodetool result screenshot), but no CounterMutationStage or ViewMutationStage are shown. Where are they?
Those metrics are still there. The issue, though, is that they expose their data "lazily," which basically means they won't show up at all while their value is zero. Once you start writing to counters or views, those metrics run their "lazy initialization," and only then are they exposed. I tested this out using Cassandra 4.0 beta4.
Running a baseline nodetool tpstats | head -n 4:
Pool Name Active Pending Completed Blocked All time blocked
MutationStage 0 0 1 0 0
ReadStage 0 0 27 0 0
CompactionExecutor 0 0 41 0 0
Next, I'll create a simple counter table.
CREATE TABLE games_popularity (game text PRIMARY KEY, popularity counter);
I'll increment the counter a few times and SELECT it.
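The increments themselves aren't shown here; a counter is bumped with a CQL UPDATE, something like this (run a few times):
aploetz@cqlsh> UPDATE stackoverflow.games_popularity SET popularity = popularity + 1 WHERE game = 'Cyberpunk 2077';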
aploetz@cqlsh> SELECT * FROM stackoverflow.games_popularity ;
game | popularity
----------------+------------
Cyberpunk 2077 | 3
(1 rows)
Rerunning nodetool tpstats | head -n 4 now indeed shows CounterMutationStage:
Pool Name Active Pending Completed Blocked All time blocked
MutationStage 0 0 12 0 0
CounterMutationStage 0 0 3 0 0
ReadStage 0 0 96 0 0
Note that in 4.0 these metrics are also exposed in the system_views.thread_pools virtual table, which you can view with SELECT * FROM system_views.thread_pools;.
Thanks to the good work done by the Cassandra developers, the metrics are now lazily initialised to improve performance.
The best way to "wake up" all lazy metrics is:
nodetool getconcurrency

CPU speed changes by a factor of 2 for short durations

I'm using a Raspberry Pi and I need really fast performance from my CPU for a certain process.
To achieve that, I added isolcpus=3 to my kernel boot parameters to isolate that core for this process only.
From looking at /proc/interrupts, it seems that this core's IRQs are also minimal (after isolation).
Now, I'm running this code on the isolated CPU (taskset -p 8 PID):
for (i = 0; i < 254; i++) {
    clock_gettime(CLOCK_REALTIME, &start);
    for (rep = 0; rep < 10000000; rep++) {
        /* empty busy loop used as the workload being timed */
    }
    clock_gettime(CLOCK_REALTIME, &end);
    timespec_diff(&start, &end, &diff);
    printf("%ld\n", diff.tv_nsec);  /* tv_nsec is a long, so %ld rather than %d */
}
The output I see is:
133562686, 133525447, 133536802, 133525760, 133540134, 133555290, 133540135, 133542218, 133525552, 133524979, 133577791, 133523208, 133525604, 133545916, 87085933, 66719079, 66719339, 66726787, 66719912, 66718870, 66712048, 76724670, 133535917, 133525396, 133528260, 133578416, 133522740, 133525552, 133541177, 133526021, 133553677, 133541906
This is only part of the output. The time is usually consistent at ~133525760 ns, but sometimes it gets faster for a little while, by a factor of 2.
The tasks running on core 3 are:
PID TID CLS RTPRIO NI PRI PSR %CPU STAT WCHAN COMMAND
22 22 TS - 0 19 3 0.0 S - cpuhp/3
23 23 FF 99 - 139 3 0.0 S - migration/3
24 24 TS - 0 19 3 0.0 S - ksoftirqd/3
25 25 TS - 0 19 3 0.0 S - kworker/3:0
26 26 TS - -20 39 3 0.0 S< - kworker/3:0H
1158 1158 TS - -20 39 3 0.0 S< - kworker/3:1H
1159 1159 TS - 0 19 3 0.0 S - kworker/3:1
5907 5907 TS - 0 19 3 99.1 R - a.out
According to ps, the CPU usage of my process varies between 99 and 100 percent (I also don't understand why it is not consistently 100%), so the fact that the time is halved doesn't make sense.
Both speeds are good enough for me; I just need the timing to be consistent.
Does anyone have an idea why this could happen? Is there any way I can make my loop time consistent?

/proc/[pid]/stat refresh period

Hi, I am a Linux programmer.
I have an assignment to monitor a process's CPU usage, so I use fields 14 and 15 of /proc/[pid]/stat. Those values are called utime and stime.
Example [/proc/[pid]/stat]
30182 (TTTTest) R 30124 30182 30124 34845 30182 4218880 142 0 0 0 5274 0 0 0 20 0 1 0 55611251 17408000 386 18446744073709551615 4194304 4260634 140733397159392 140733397158504 4203154 0 0 0 0 0 0 0 17 2 0 0 0 0 0 6360520 6361584 33239040 140733397167447 140733397167457 140733397167457 140733397168110 0
State after 5 sec
30182 (TTTTest) R 30124 30182 30124 34845 30182 4218880 142 0 0 0 5440 0 0 0 20 0 1 0 55611251 17408000 386 18446744073709551615 4194304 4260634 140733397159392 140733397158504 4203154 0 0 0 0 0 0 0 17 2 0 0 0 0 0 6360520 6361584 33239040 140733397167447 140733397167457 140733397167457 140733397168110 0
In my test environment, this file refreshed every 1-2 seconds, so I assumed the system updates it at least once per second.
So I use this calculation:
process_cpu_usage = ((utime - old_utime) + (stime - old_stime)) / period
For the values above:
33.2 = ((5440 - 5274) + (0 - 0)) / 5
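As a rough shell sketch of that sampling approach (the PID and interval are placeholders, and the awk field numbers assume the process name contains no spaces):
pid=30182; interval=5                            # placeholder PID and sample period
t1=$(awk '{print $14 + $15}' /proc/$pid/stat)    # utime + stime, in clock ticks
sleep "$interval"
t2=$(awk '{print $14 + $15}' /proc/$pid/stat)
hz=$(getconf CLK_TCK)                            # clock ticks per second, usually 100
awk -v d=$((t2 - t1)) -v p="$interval" -v hz="$hz" 'BEGIN { printf "%.1f%% CPU\n", d * 100 / (p * hz) }'
With CLK_TCK at its usual value of 100, this reproduces the 33.2% figure from the numbers above.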
But in a commercial server environment, where processes run under high load (CPU and file I/O), the /proc/[pid]/stat update period increases to as much as 20-60 seconds!
So the top/htop utilities can't measure the process's usage correctly.
Why is this phenomenon occurring?
Our system is [CentOS Linux release 7.1.1503 (Core)]
Most (if not all) files in the /proc filesystem are special files: their content at any given moment reflects the actual OS/kernel data at that very moment; they are not files whose contents are periodically updated. See the /proc filesystem documentation.
In particular, the content of /proc/[pid]/stat changes whenever the respective process's state changes (for example, after every scheduling event): for mostly-sleeping processes the file will appear to be "updated" at a slower rate, while for active/running processes on a lightly loaded system it will appear to update at a higher rate. Check, for example, the corresponding files for a shell process that isn't doing anything and for a browser process playing a video stream.
On heavily loaded systems with many processes in the ready state (like the one mentioned in this Q&A, for example) there can be scheduling delays making the file content "updates" appear less often despite the processes being ready/active. Such conditions seem to be more often encountered in commercial/enterprise environments (debatable, I agree).
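A quick way to see this is to sample the counters for an idle shell and for a busy process side by side; a rough sketch (both PIDs are placeholders):
idle_pid=$$        # placeholder: a mostly idle shell
busy_pid=12345     # placeholder: a CPU-bound process
for i in 1 2 3; do
    for p in "$idle_pid" "$busy_pid"; do
        awk -v p="$p" '{print "pid " p ": utime=" $14 ", stime=" $15}' /proc/"$p"/stat
    done
    sleep 1
done
The idle shell's counters will barely move between samples, while the busy process's counters climb every second.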

Sphinx claiming memory is too low and my ids are null

I am trying to index about 3,000 documents, but here is what I am getting:
[root@domU-12-31-39-0A-19-CB data]# /usr/local/sphinx/bin/indexer --all
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/usr/local/sphinx/etc/sphinx.conf'...
indexing index 'catalog'...
WARNING: Attribute count is 0: switching to none docinfo
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 12288 kb
WARNING: source catalog: skipped 3558 document(s) with zero/NULL ids
collected 0 docs, 0.0 MB
total 0 docs, 0 bytes
total 0.040 sec, 0 bytes/sec, 0.00 docs/sec
total 1 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 5 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
I have it set to rt_mem_limit = 512M, so why is it telling me I don't have enough memory?
rt_mem_limit != mem_limit: they are different settings, with different purposes.
mem_limit is the value used by the indexer during indexing (see http://sphinxsearch.com/docs/current.html#conf-mem-limit); it goes in the 'indexer' section of your config file.
You must have it set too low. Either leave it out entirely (the default is 32M) or change it to a better value.
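For example, the indexer section of sphinx.conf could look something like this (256M is only an illustrative value):
indexer
{
    mem_limit = 256M
}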
But you also have no document IDs in your dataset. Check that your sql_query actually works and that its first column returns non-zero, non-NULL IDs.
