Azure SQL Full Text Index initial population slow

I have a table with approximately 4.7 million records and I created a full text index on it.
I am experiencing slow initial population of the full text index.
The initial pricing tier I had was S1; I upgraded it to S3 but did not get better performance.
DTU and CPU are not high (usually staying around 0%), and the current velocity is about 175,000 records per hour.
What can I do to speed this up?
Thanks in advance.
Later edit: I tried the same operation on a local installation of SQL Server 2014 and had no problems indexing the data.
Update 14.11.2016
Output from sys.dm_exec_requests (single row, shown as column = value):
session_id = 90
request_id = 0
start_time = 57:45.2
status = running
command = SELECT
sql_handle = 0x020000004D4F6005A3E8119F3DD3297095832ABE63E312F20000000000000000000000000000000000000000
statement_start_offset = 0
statement_end_offset = 66
plan_handle = 0x060005004D4F6005D04F998A6E00000001000000000000000000000000000000000000000000000000000000
database_id = 5
user_id = 1
connection_id = 70A61674-396D-47EB-82C7-F3C13DAA2AD0
blocking_session_id = 0
wait_type = NULL
wait_time = 0
last_wait_type = MEMORY_ALLOCATION_EXT
wait_resource = (empty)
open_transaction_count = 0
open_resultset_count = 1
transaction_id = 141037
context_info = 0x380035003100450039003200350032002D0045003700450032002D0034003600320041002D0039004200390041002D003200310037004400300036003700430032004100360039
percent_complete = 0
estimated_completion_time = 0
cpu_time = 1
total_elapsed_time = 1
scheduler_id = 0
task_address = 0x7A218C885C2F7437
reads = 0
writes = 0
logical_reads = 228
text_size = 2147483647
language = us_english
date_format = mdy
date_first = 7
quoted_identifier = 1
arithabort = 1
ansi_null_dflt_on = 1
ansi_defaults = 0
ansi_warnings = 1
ansi_padding = 1
ansi_nulls = 1
concat_null_yields_null = 1
transaction_isolation_level = 2
lock_timeout = -1
deadlock_priority = 0
row_count = 1
prev_error = 0
nest_level = 0
granted_query_memory = 0
executing_managed_code = 0
group_id = 2000000026
query_hash = 0xC1681A4180C2C052
query_plan_hash = 0x63AD167562BDAE5D
statement_sql_handle = 0x0900A3E8119F3DD3297095832ABE63E312F20000000000000000000000000000000000000000000000000000
statement_context_id = 7
dop = 1
parallel_worker_count = NULL
external_script_request_id = NULL
As far as I can see, this seems much faster on P1, which is strange because P1 is not much more powerful than S3.
I will mark it as solved because it seems this is an issue related to service tier levels.

If you bump up the service tier of the Azure database, the full text indexing will run much faster than at the Standard level.
I could not sense a difference between S1 and S3, but P1 is much faster than S3.
I do not know the reasoning behind this, even though the difference in DTUs is only 25 (S3: 100 DTU, P1: 125 DTU).
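If you want to compare crawl speed across tiers more precisely, you can poll how many rows the crawl has processed so far and compute the velocity yourself. Below is a minimal JDBC sketch of that idea; the server name, credentials, the table name dbo.MyTable and the 60-second polling interval are placeholders, not values from the question.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Polls the full-text indexed item count and prints the indexing velocity.
// Requires the mssql-jdbc driver on the classpath; all names below are placeholders.
public class FtsCrawlMonitor {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://<server>.database.windows.net:1433;"
                   + "database=<db>;user=<user>;password=<password>;encrypt=true";
        String probe = "SELECT CAST(OBJECTPROPERTYEX(OBJECT_ID('dbo.MyTable'), "
                     + "'TableFulltextItemCount') AS bigint)";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {
            long previous = -1;
            while (true) {                               // stop with Ctrl+C
                try (ResultSet rs = st.executeQuery(probe)) {
                    rs.next();
                    long indexed = rs.getLong(1);
                    if (previous >= 0) {
                        // delta per minute * 60 = rows per hour
                        System.out.printf("indexed=%d, velocity=%d rows/hour%n",
                                indexed, (indexed - previous) * 60);
                    }
                    previous = indexed;
                }
                Thread.sleep(60_000);                    // 60-second polling interval
            }
        }
    }
}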

Related

CounterMutationStage and ViewMutationStage metrics are missing in Cassandra 4.0

When invoking nodetool tpstats on Cassandra 4.0, here is what I got: [nodetool result screenshot]
But no CounterMutationStage or ViewMutationStage is shown. Where are they?
Those metrics are still there. The issue, though, is that they expose their data "lazily," which basically means they won't show up at all while their value is zero. Once you start writing to counters or views, those metrics execute their "lazy initialization," and only then are they exposed. I tested this out using Cassandra 4.0 beta4.
Running a baseline nodetool tpstats | head -n 4:
Pool Name Active Pending Completed Blocked All time blocked
MutationStage 0 0 1 0 0
ReadStage 0 0 27 0 0
CompactionExecutor 0 0 41 0 0
Next, I'll create a simple counter table.
CREATE TABLE games_popularity (game text PRIMARY KEY, popularity counter);
I'll increment the counter a few times and SELECT it.
aploetz@cqlsh> SELECT * FROM stackoverflow.games_popularity ;
game | popularity
----------------+------------
Cyberpunk 2077 | 3
(1 rows)
Now rerunning nodetool tpstats | head -n 4 indeed shows CounterMutationStage:
Pool Name Active Pending Completed Blocked All time blocked
MutationStage 0 0 12 0 0
CounterMutationStage 0 0 3 0 0
ReadStage 0 0 96 0 0
Note that in 4.0 these metrics are also exposed in the system_views.thread_pools virtual table, which you can view with SELECT * FROM system_views.thread_pools;.
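If you would rather read that virtual table from application code than from cqlsh, here is a minimal sketch with the 4.x Java driver; the contact point 127.0.0.1:9042 and the datacenter name datacenter1 are assumptions, not values from the question.

import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

// Dumps the system_views.thread_pools virtual table introduced in Cassandra 4.0.
// Contact point and local datacenter below are placeholders.
public class ThreadPoolsDump {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build()) {
            for (Row row : session.execute("SELECT * FROM system_views.thread_pools")) {
                // getFormattedContents() prints every column, so the virtual table's
                // column names do not have to be hard-coded here.
                System.out.println(row.getFormattedContents());
            }
        }
    }
}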
Thanks to the good work done by the Cassandra developers, the metrics are now lazily initialised to improve performance.
The best way to "wake up" all lazy metrics is:
nodetool getconcurrency

"vmstat" and "perf stat -a" show different numbers for context-switching

I'm trying to understand the context-switching rate on my system (running on AWS EC2), and where the switches are coming from. Just getting the number is already confusing, as two tools that I know can output such a metric give me different results. Here's the output from vmstat:
$ vmstat -w 2
procs -------------------memory------------------ ---swap-- -----io---- --system-- -----cpu-------
r b swpd free buff cache si so bi bo in cs us sy id wa st
8 0 0 443888 492304 8632452 0 0 0 1 0 0 14 2 84 0 0
37 0 0 444820 492304 8632456 0 0 0 20 131602 155911 43 5 52 0 0
8 0 0 445040 492304 8632460 0 0 0 42 131117 147812 46 4 50 0 0
13 0 0 446572 492304 8632464 0 0 0 34 129154 142260 49 4 46 0 0
The number is ~140k-160k/sec.
But perf tells something else:
$ sudo perf stat -a
Performance counter stats for 'system wide':
2980794.013800 cpu-clock (msec) # 35.997 CPUs utilized
12,335,935 context-switches # 0.004 M/sec
2,086,162 cpu-migrations # 0.700 K/sec
11,617 page-faults # 0.004 K/sec
...
0.004 M/sec is apparently 4k/sec.
Why is there a disparity between the two tools? Am I misinterpreting something in either of them, or are their CS metrics somehow different?
FWIW, I've tried doing the same on a machine running a different workload, and the discrepancy there is even larger, roughly another factor of two.
Environment:
AWS EC2 c5.9xlarge instance
Amazon Linux, kernel 4.14.94-73.73.amzn1.x86_64
The service runs on Docker 18.06.1-ce
Some recent versions of perf have a unit-scaling bug in the printing code. Manually do 12.3M / wall-time and see if that's sane. (Spoiler alert: it is, according to the OP's comment.)
https://lore.kernel.org/patchwork/patch/1025968/
Commit 0aa802a79469 ("perf stat: Get rid of extra clock display function") introduced the bug in mainline Linux 4.19-rc1 or so.
Thus, perf_stat__update_shadow_stats() now saves scaled values of clock events in msecs, instead of the original nsecs. But while calculating the shadow stats we still consider clock event values in nsecs. This results in wrong shadow stat values.
Commit 57ddf09173c1 on Mon, 17 Dec 2018 fixed it in 5.0-rc1, eventually being released with perf upstream version 5.0.
Vendor kernel trees that cherry-pick commits for their stable kernels might have the bug or have fixed the bug earlier.
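Spelling out the "12.3M / wall-time" sanity check with the numbers from the question (dividing cpu-clock by the "CPUs utilized" ratio to obtain the wall time is my reading of the perf output, not something stated by the OP):

public class PerfSanityCheck {
    public static void main(String[] args) {
        // Figures copied from the perf stat output in the question.
        double cpuClockMsec = 2_980_794.013800;   // cpu-clock, summed across all CPUs
        double cpusUtilized = 35.997;             // cpu-clock / wall time
        long contextSwitches = 12_335_935L;

        double wallTimeSec = cpuClockMsec / cpusUtilized / 1000.0; // ~82.8 s
        double switchesPerSec = contextSwitches / wallTimeSec;     // ~149,000 /s

        // ~149k/s matches vmstat's 140k-160k/s, not the printed 0.004 M/sec.
        System.out.printf("wall time: %.1f s, context switches: %.0f /s%n",
                wallTimeSec, switchesPerSec);
    }
}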

How do I interpret the includedQuantity in Azure's rate card API?

I am trying to calculate the cost incurred on my Azure Pay-As-You-Go subscription using the usage and rate card APIs. For this I came across the parameter includedQuantity in the rate card API which, according to the documentation, refers to "The resource quantity that is included in the offer at no cost. Consumption beyond this quantity will be charged."
Consider an example, where the usageQuantity is 700 and the rate-card is as follows:
0 : 20
101 : 15
501 : 10
and the includedQuantity is 200.
My assumption was that the calculation would be one of the following:
Quantity = (700 - 200) = 500
Hence, cost = 100 * 20 + 400 * 15 = 8000
New rate card:
0 : 0
101 : 0
201 : 15
501 : 10
So, cost = 300 * 15 + 200 * 10 = 6500
I have seen this question, but it does not clarify the includedQuantity properly.
Great question! I checked with the Azure Billing team on this, and what they told me is that they will first take off the included units (200 in your example) and then apply graduated pricing on the remaining units.
Based on this, your cost would be 4500:
Total units consumed: 700
Included units: 200
Tiered pricing: {0-100 = 0; 101-200 = 0; 201-500=15; 501-No Upper Limit=10}
4500 = 0 x 100 + 0 x 100 + 15 x 300
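Here is a small sketch of that calculation; the tier boundaries and the "subtract the included units first" rule are taken from the answer above, while the method and variable names are mine.

import java.util.Map;
import java.util.TreeMap;

// Reproduces the 4500 figure above: the included quantity comes off the usage first,
// and the remaining units are then run through the graduated tiers quoted by the
// Billing team. Tier keys are 0-based starting units (exclusive upper bounds).
public class IncludedQuantityExample {

    /** tiers maps the first unit of each band to that band's unit price. */
    static double graduatedCost(TreeMap<Long, Double> tiers, long units) {
        double cost = 0;
        for (Map.Entry<Long, Double> band : tiers.entrySet()) {
            long bandStart = band.getKey();
            Long next = tiers.higherKey(bandStart);
            long bandEnd = (next == null) ? Long.MAX_VALUE : next;
            if (units <= bandStart) {
                break;                                        // nothing falls in this band
            }
            cost += (Math.min(units, bandEnd) - bandStart) * band.getValue();
        }
        return cost;
    }

    public static void main(String[] args) {
        long usage = 700;
        long included = 200;

        // Tiers as quoted in the answer: 0-100 and 101-200 free, 201-500 at 15, 501+ at 10.
        TreeMap<Long, Double> tiers = new TreeMap<>();
        tiers.put(0L, 0.0);
        tiers.put(100L, 0.0);
        tiers.put(200L, 15.0);
        tiers.put(500L, 10.0);

        long billable = Math.max(0, usage - included);        // 500 units
        System.out.println(graduatedCost(tiers, billable));   // prints 4500.0
    }
}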

Cassandra NoHostAvailableException when deletes are executed with cqlsh

We have a cluster with 7 nodes and we use the DataStax Java driver to connect to the cluster. The problem is that I am getting constant NoHostAvailableExceptions like this:
Caused by:
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: /172.31.7.243:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout while
trying to acquire available connection (you may want to increase the
driver number of per-host connections)), /172.31.7.245:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout while
trying to acquire available connection (you may want to increase the
driver number of per-host connections)), /172.31.7.246:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout while
trying to acquire available connection (you may want to increase the
driver number of per-host connections)), /172.31.7.247:9042,
/172.31.7.232:9042, /172.31.7.233:9042, /172.31.7.244:9042 [only
showing errors of first 3 hosts, use getErrors() for more details])
All the nodes are up:
UN 172.31.7.244 152.21 GB 256 14.5% 58abea69-e7ba-4e57-9609-24f3673a7e58 RAC1
UN 172.31.7.245 168.4 GB 256 14.5% bc11b4f0-cf96-4ca5-9a3e-33cc2b92a752 RAC1
UN 172.31.7.246 177.71 GB 256 13.7% 8dc7bb3d-38f7-49b9-b8db-a622cc80346c RAC1
UN 172.31.7.247 158.57 GB 256 14.1% 94022081-a563-4042-81ab-75ffe4d13194 RAC1
UN 172.31.7.243 176.83 GB 256 14.6% 0dda3410-db58-42f2-9351-068bdf68f530 RAC1
UN 172.31.7.233 159 GB 256 13.6% 01e013fb-2f57-44fb-b3c5-fd89d705bfdd RAC1
UN 172.31.7.232 166.05 GB 256 15.0% 4d009603-faa9-4add-b3a2-fe24ec16a7c1 RAC1
but two of them have a high CPU load, especially 172.31.7.232, because I am running a lot of deletes using cqlsh on that node.
I know that deletes generate tombstones, but with 7 nodes in the cluster I do not think it is normal that none of the hosts are accessible.
Our configuration for the java connection is:
com.datastax.driver.core.Cluster cluster = null;
// Get contact points
String[] contactPoints = this.environment.getRequiredProperty(CASSANDRA_CLUSTER_URL).split(",");
cluster = com.datastax.driver.core.Cluster.builder()
        .addContactPoints(contactPoints)
        .withCredentials(this.environment.getRequiredProperty(CASSANDRA_CLUSTER_USERNAME),
                this.environment.getRequiredProperty(CASSANDRA_CLUSTER_PASSWORD))
        .withQueryOptions(new QueryOptions()
                .setConsistencyLevel(ConsistencyLevel.QUORUM))
        .withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
        .withRetryPolicy(new LoggingRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE))
        .withPort(Integer.parseInt(this.environment.getRequiredProperty(CASSANDRA_CLUSTER_PORT)))
        .build();
Metadata metadata = cluster.getMetadata();
for (Host host : metadata.getAllHosts()) {
    LOG.info("Datacenter: " + host.getDatacenter() + "; Host: " + host.getAddress() + "; DC: " + host.getDatacenter() + "\n");
}
and the contact points are:
172.31.7.244,172.31.7.243,172.31.7.245,172.31.7.246,172.31.7.247
Does anyone know how I can solve this problem, or at least have a hint on how to deal with this situation?
Update: if I retrieve the error messages with e.getErrors() I obtain:
/172.31.7.243:9042=com.datastax.driver.core.OperationTimedOutException: [/172.31.7.243:9042] Operation timed out,
/172.31.7.244:9042=com.datastax.driver.core.OperationTimedOutException: [/172.31.7.244:9042] Operation timed out,
/172.31.7.245:9042=com.datastax.driver.core.OperationTimedOutException: [/172.31.7.245:9042] Operation timed out,
/172.31.7.246:9042=com.datastax.driver.core.OperationTimedOutException: [/172.31.7.246:9042] Operation timed out,
/172.31.7.247:9042=com.datastax.driver.core.OperationTimedOutException: [/172.31.7.247:9042] Operation timed out}
UPDATE:
The replication factor of the keyspace is 3.
For the deletes, I am running them from several files containing the CQL queries:
cqlsh ip_node_1 -f script-1.duplicates
cqlsh ip_node_1 -f script-2.duplicates
cqlsh ip_node_1 -f script-3.duplicates
...
I am not specifying any consistency level, so it is using the default one, which is ONE.
Each of the previous files contains deletes like this:
DELETE FROM keyspace_name.search WHERE idline1 = 837 and idline2 = 841 and partid = 8558 and id = 18c04c20-8a3a-11e5-9e20-0025905a2ab2;
And the column family is:
CREATE TABLE search (
idline1 bigint,
idline2 bigint,
partid int,
id uuid,
field3 int,
field4 int,
field5 int,
field6 int,
field7 int,
field8 int,
field9 double,
field10 bigint,
field11 bigint,
field12 bigint,
field13 boolean,
field14 boolean,
field15 int,
field16 bigint,
field17 int,
field18 int,
field19 int,
field20 int,
field21 uuid,
field22 boolean,
PRIMARY KEY ((idline1, idline2, partid), id)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='Table with the snp between lines' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=0 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
CREATE INDEX search_partid ON search (partid);
CREATE INDEX search_field8 ON search (field8);
UPDATE (18-03-2016):
After the deletes started to be executed, I found that the CPU of some of the nodes increased a lot.
I checked the processes on those nodes and only Cassandra is running, but it is consuming a lot of CPU. The rest of the nodes are barely using any CPU.
UPDATE (04-04-2016): I do not know if it is related, but on the nodes with a lot of CPU (near 96%) the GC activity remains at 1.6% (using only 3 GB of the 10 GB assigned).
Checking the thread pool stats:
nodetool tpstats
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 20042001 0 0
RequestResponseStage 0 0 149365845 0 0
MutationStage 32 117720 181498576 0 0
ReadRepairStage 0 0 799373 0 0
ReplicateOnWriteStage 0 0 13624173 0 0
GossipStage 0 0 5580503 0 0
CacheCleanupExecutor 0 0 0 0 0
AntiEntropyStage 0 0 32173 0 0
MigrationStage 0 0 9 0 0
MemtablePostFlusher 0 0 45044 0 0
MemoryMeter 0 0 9553 0 0
FlushWriter 0 0 9425 0 18
ValidationExecutor 0 0 15980 0 0
MiscStage 0 0 0 0 0
PendingRangeCalculator 0 0 7 0 0
CompactionExecutor 0 0 1293147 0 0
commitlog_archiver 0 0 0 0 0
InternalResponseStage 0 0 0 0 0
HintedHandoff 0 0 273 0 0
Message type Dropped
RANGE_SLICE 0
READ_REPAIR 0
PAGED_RANGE 0
BINARY 0
READ 0
MUTATION 0
_TRACE 0
REQUEST_RESPONSE 0
COUNTER_MUTATION 0
I realize that the pending MutationStage count is growing while the active value remains the same; could this be the problem?
I see two problems with your data model.
You use two secondary indexes, and one of them is on a column that is part of the partition key. I don't know how Cassandra behaves in this case. The worst case is that, even if you use the complete partition key (like you do in your example delete), Cassandra still does a lookup in the secondary index. That would mean a full cluster scan, because secondary index entries are only stored per partition; since only a part of the partition key is indexed, Cassandra does not know on which partition the index information lies. This behaviour alone would explain the timeouts.
You said you delete a lot of rows in a specific partition. That is also a problem. For each deletion Cassandra creates a tombstone, and the more tombstones there are, the slower reads become. Sooner or later this leads to timeouts or exceptions (I believe Cassandra writes warnings when 1,000 tombstones are reached and throws an exception at 10,000). By the way, these tombstones are also created in the secondary index. By default Cassandra removes tombstones after gc_grace_seconds (10 days by default) when a compaction is performed. You can change this property per table; more information on these table properties can be found here: Table Properties.
I believe the first point could be the reason for the timeouts.
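As an aside, the timeout message in the question suggests raising the driver's per-host connection limit. If you want to experiment with that while fixing the data model, here is a minimal sketch using the PoolingOptions API of the 2.x/3.x Java driver; the connection counts are arbitrary placeholders, not recommendations.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

public class PoolingExample {
    public static Cluster buildCluster(String[] contactPoints) {
        // The limits below are placeholder values to experiment with, not tuned numbers.
        PoolingOptions poolingOptions = new PoolingOptions()
                .setCoreConnectionsPerHost(HostDistance.LOCAL, 2)
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 8);

        return Cluster.builder()
                .addContactPoints(contactPoints)   // same contact points as in the question
                .withPoolingOptions(poolingOptions)
                .build();
    }
}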

/proc/[pid]/stat refresh period

Hi, I am a Linux programmer.
I have a task to monitor process CPU usage, so I use fields 14 and 15 of /proc/[pid]/stat. Those values are called utime and stime.
Example [/proc/[pid]/stat]
30182 (TTTTest) R 30124 30182 30124 34845 30182 4218880 142 0 0 0 5274 0 0 0 20 0 1 0 55611251 17408000 386 18446744073709551615 4194304 4260634 140733397159392 140733397158504 4203154 0 0 0 0 0 0 0 17 2 0 0 0 0 0 6360520 6361584 33239040 140733397167447 140733397167457 140733397167457 140733397168110 0
State after 5 sec
30182 (TTTTest) R 30124 30182 30124 34845 30182 4218880 142 0 0 0 5440 0 0 0 20 0 1 0 55611251 17408000 386 18446744073709551615 4194304 4260634 140733397159392 140733397158504 4203154 0 0 0 0 0 0 0 17 2 0 0 0 0 0 6360520 6361584 33239040 140733397167447 140733397167457 140733397167457 140733397168110 0
In the test environment this file was refreshed every 1-2 seconds, so I assumed the system updates it at least once per second.
So I use this calculation:
process_cpu_usage = ((utime - old_utime) + (stime - old_stime)) / period
In the case of the above values:
33.2 = ((5440 - 5274) + (0 - 0)) / 5
But in a commercial server environment, where processes run under high load (CPU and file IO), the /proc/[pid]/stat update period increases to as much as 20-60 seconds!
So the top/htop utilities can't measure the correct process usage value.
Why is this phenomenon occurring?
Our system is CentOS Linux release 7.1.1503 (Core).
Most (if not all) files in the /proc filesystem are special files; their content at any given moment reflects the actual OS/kernel data at that very moment. They are not files whose contents are periodically updated. See the /proc filesystem doc.
In particular, the content of /proc/[pid]/stat changes whenever the respective process state changes (for example, after every scheduling event): for processes that mostly sleep the file will appear to be "updated" at a slower rate, while for active/running processes on lightly loaded systems it appears to update at a higher rate. Check, for example, the corresponding files for a shell process that does nothing and for a browser process playing a video stream.
On heavily loaded systems with many processes in the ready state (like the one mentioned in this Q&A, for example) there can be scheduling delays that make the file content "updates" appear less often, despite the processes being ready/active. Such conditions seem to be encountered more often in commercial/enterprise environments (debatable, I agree).
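Whatever the apparent refresh rate, the calculation from the question stays accurate if you divide the tick delta by the wall-clock time you actually measured between the two reads, rather than by an assumed fixed period. Here is a minimal sketch of that; the clock-tick rate of 100 Hz is an assumption (the exact value comes from sysconf(_SC_CLK_TCK)), and the 5-second sleep mirrors the example in the question.

import java.nio.file.Files;
import java.nio.file.Paths;

// Samples utime + stime (fields 14 and 15 of /proc/<pid>/stat) twice and divides
// the delta by the wall-clock time that actually elapsed between the two reads.
public class ProcStatCpu {
    private static final long CLK_TCK = 100;   // assumed; query sysconf(_SC_CLK_TCK) to be exact

    // Returns utime + stime in clock ticks for the given pid.
    static long cpuTicks(int pid) throws Exception {
        String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
        // The comm field (field 2) may contain spaces, so split after the closing parenthesis.
        String[] rest = stat.substring(stat.lastIndexOf(')') + 2).split("\\s+");
        long utime = Long.parseLong(rest[11]);   // field 14
        long stime = Long.parseLong(rest[12]);   // field 15
        return utime + stime;
    }

    public static void main(String[] args) throws Exception {
        int pid = Integer.parseInt(args[0]);
        long ticksBefore = cpuTicks(pid);
        long nanosBefore = System.nanoTime();

        Thread.sleep(5000);

        long ticksAfter = cpuTicks(pid);
        double elapsedSec = (System.nanoTime() - nanosBefore) / 1e9;

        double usagePercent = 100.0 * (ticksAfter - ticksBefore) / CLK_TCK / elapsedSec;
        System.out.printf("%.1f%% CPU over %.1f s%n", usagePercent, elapsedSec);
    }
}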
