SingleStore (formerly MemSQL): Too many queries are queued for execution, exceeding the limit of 100

I am using SingleStore with 4 fairly powerful leaf node machines, but I keep getting this error when serving a lot of traffic.
This is the current resource pool config:
[
{
"Pool_Name": "default_pool",
"Memory_Percentage": 100,
"Query_Timeout": null,
"Max_Concurrency": null,
"Soft_CPU_Limit_Percentage": 100,
"Max_Queue_Depth": null
}
]
This is the current workload management status:
[
{
"Stat": "QueuedQueries",
"Value": 0
},
{
"Stat": "Running Queries (from local queue)",
"Value": 0
},
{
"Stat": "Running Queries (from global queue)",
"Value": 0
},
{
"Stat": "Running Memory (MB) On Leaves (from local queue)",
"Value": 0
},
{
"Stat": "Running Memory (MB) On Leaves (from global queue)",
"Value": 0
},
{
"Stat": "Running Threads Per Leaf (from local queue)",
"Value": 0
},
{
"Stat": "Running Connections Per Leaf (from local queue)",
"Value": 0
},
{
"Stat": "Memory Threshold (MB) to Queue Locally",
"Value": 561
},
{
"Stat": "Memory Threshold (MB) to Queue Globally",
"Value": 28082
},
{
"Stat": "Connections Threshold to Queue Globally",
"Value": 2500
}
]
Error: UNKNOWN_CODE_PLEASE_REPORT: Too many queries are queued for execution, exceeding the limit of 100. You may try running the query again later.
Consider reducing cluster load. The limit can be configured with the workload_management_max_queue_depth variable or max_queue_depth property of resource pool.
at Query.Sequence._packetToError (/usr/local/platform-js/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
at Query.ErrorPacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/sequences/Query.js:79:18)
at Protocol._parsePacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/Protocol.js:291:23)
at Parser._parsePacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/Parser.js:433:10)
at Parser.write (/usr/local/platform-js/node_modules/mysql/lib/protocol/Parser.js:43:10)
at Protocol.write (/usr/local/platform-js/node_modules/mysql/lib/protocol/Protocol.js:38:16)
at Socket.<anonymous> (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:88:28)
at Socket.<anonymous> (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:526:10)
at Socket.emit (events.js:315:20)
at Socket.EventEmitter.emit (domain.js:467:12)
at addChunk (internal/streams/readable.js:309:12)
at readableAddChunk (internal/streams/readable.js:284:9)
at Socket.Readable.push (internal/streams/readable.js:223:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
at TCP.callbackTrampoline (internal/async_hooks.js:131:14)
--------------------
at Protocol._enqueue (/usr/local/platform-js/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at PoolConnection.query (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:198:25)
at /usr/local/platform-js/node_modules/@mobile-demand/memsql/dist/index.js:112:32
at Ping.onOperationComplete (/usr/local/platform-js/node_modules/mysql/lib/Pool.js:110:5)
at Ping.<anonymous> (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:526:10)
at Ping._callback (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:488:16)
at Ping.Sequence.end (/usr/local/platform-js/node_modules/mysql/lib/protocol/sequences/Sequence.js:83:24)
at Ping.Sequence.OkPacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/sequences/Sequence.js:92:8)
at Protocol._parsePacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/Protocol.js:291:23)
at Parser._parsePacket (/usr/local/platform-js/node_modules/mysql/lib/protocol/Parser.js:433:10)
at Parser.write (/usr/local/platform-js/node_modules/mysql/lib/protocol/Parser.js:43:10)
at Protocol.write (/usr/local/platform-js/node_modules/mysql/lib/protocol/Protocol.js:38:16)
at Socket.<anonymous> (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:88:28)
at Socket.<anonymous> (/usr/local/platform-js/node_modules/mysql/lib/Connection.js:526:10)
at Socket.emit (events.js:315:20)
at Socket.EventEmitter.emit (domain.js:467:12)
My question is: what is the best practice for configuring the resource pool?

On SingleStore DB, you can CREATE RESOURCE POOL for new queries, in addition to setting resource limits at the cluster and/or user level.
SHOW RESOURCE POOL lets you see the available resource pools from your SQL Editor.
ALTER RESOURCE POOL lets you make changes to a specific resource pool.
Resource pools can be adjusted on a per-user basis with the ALTER USER command.
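For the error above, the relevant knobs are the pool's Max_Queue_Depth (null in the config shown, so the engine-wide workload_management_max_queue_depth limit of 100 from the error message applies) and that engine variable itself. A hedged sketch of a dedicated pool, reusing the property names listed in the pool config above; the pool name and numbers are only examples, and the exact CREATE/ALTER RESOURCE POOL syntax should be checked against the SingleStore docs for your version:
-- Hypothetical pool for the high-traffic application workload.
CREATE RESOURCE POOL app_pool WITH
    MEMORY_PERCENTAGE = 60,    -- cap memory for this workload
    QUERY_TIMEOUT = 60,        -- seconds before a query is cancelled
    MAX_CONCURRENCY = 50,      -- queries allowed to run at once in this pool
    MAX_QUEUE_DEPTH = 500;     -- queue this many before rejecting, as in the error
-- Or raise the engine-wide limit named in the error message (depending on
-- the version this may need to be changed through the cluster management
-- tooling rather than SET GLOBAL):
SET GLOBAL workload_management_max_queue_depth = 500;
Note that raising the queue depth only lets more queries wait; if the leaves are genuinely saturated, tuning MAX_CONCURRENCY and memory limits (or adding capacity) may matter more.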

Related

Terraform Snowflake doesn't allow updating/removing a Snowflake task

I created a task in Snowflake with Terraform. It creates it as expected, and the new task shows up in both Snowflake and the .tfstate. When I try to update the task (i.e. change the schedule) and apply the changes with terraform apply, Terraform tells me:
│ Error: error retrieving root task TASK_MO: failed to locate the root node of: []: sql: no rows in result set
│
│ with snowflake_task.load_from_s3["MO"],
│ on main.tf line 946, in resource "snowflake_task" "load_from_s3":
│ 946: resource "snowflake_task" "load_from_s3" {
I did this just after creation, so no manual changes were made in Snowflake.
My assumption is that it can't find the actual task in Snowflake.
My resource
resource "snowflake_task" "load_from_s3" {
for_each = snowflake_stage.all
name = "TASK_${each.key}"
database = snowflake_database.database.name
schema = snowflake_schema.load_schemas["SRC"].name
comment = "Task to copy the ${each.key} messages from S3"
schedule = "USING CRON 0 7 * * * UTC"
sql_statement = "COPY into ${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} from (select ${local.stages[each.key].fields}convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from @${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} (file_format => '${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${snowflake_file_format.generic.name}')) on_error=skip_file"
enabled = local.stages[each.key].is_enabled
lifecycle {
ignore_changes = [after]
}
}
The resource in .tfstate
{
"index_key": "MO",
"schema_version": 0,
"attributes": {
"after": "[]",
"comment": "Task to copy the MO messages from S3",
"database": "ICEBERG",
"enabled": true,
"error_integration": "",
"id": "ICEBERG|SRC|TASK_MO",
"name": "TASK_MO_FNB",
"schedule": "USING CRON 0 8 * * * UTC",
"schema": "SRC",
"session_parameters": null,
"sql_statement": "COPY into ICEBERG.SRC.MO from (select $1,convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from #ICEBERG.SRC.MO (file_format =\u003e 'ICEBERG.SRC.GENERIC')) on_error=skip_file",
"user_task_managed_initial_warehouse_size": "",
"user_task_timeout_ms": null,
"warehouse": "",
"when": ""
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"snowflake_database.database",
"snowflake_file_format.generic",
"snowflake_schema.load_schemas",
"snowflake_stage.all"
]
},
The query that is being run on Snowflake that (I guess) should identify the existing task. This query indeed returns zero items (which corresponds with the error message from Terraform):
SHOW TASKS LIKE '[]' IN SCHEMA "ICEBERG"."SRC"
Does anyone know what I can do to be able to update the task with Terraform?
Thanks, Chris
The issue is reported here: Existing Task in plan & apply change & error #1071. Upgrading the provider version to snowflake-labs/snowflake 0.37.0 should resolve the issue.
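As a sketch, pinning the provider would look something like the following (the source address here is the usual registry address and is an assumption; adjust it to whatever your configuration already declares):
terraform {
  required_providers {
    snowflake = {
      source  = "snowflake-labs/snowflake"
      version = ">= 0.37.0"
    }
  }
}
Then run terraform init -upgrade so Terraform actually downloads the newer provider before the next plan/apply.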

Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'Requests per user per 100 seconds'

I'm trying to pull data from Google Analytics using the googleapis npm package.
let res= await analyticsreporting.reports.batchGet({
requestBody: {
reportRequests: [
{
viewId: defaultProfileId,
dateRanges: dateRanges,
metrics: [
{
expression: 'ga:users',
},
{
expression: 'ga:sessions',
},
{
expression: 'ga:bounces',
},
],
dimensions: [
{
name: 'ga:source'
},
{
name: 'ga:medium'
},
{
name: 'ga:channelGrouping'
}
]
},
],
},
});
where dateRanges contains the dates from
{
startDate: "2017-01-01",
endDate: "2017-01-01",
}
to
{
startDate: "2020-05-13",
endDate: "2020-05-13",
}
When calling this, the error says: Error: Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'Requests per user per 100 seconds' of service 'analyticsreporting.googleapis.com'.
How can I increase the quota?
The documentation says: "By default, it is set to 100 requests per 100 seconds per user and can be adjusted to a maximum value of 1,000." Where can the quota limit be increased?
There are two types of Google quotas:
project based
user based
Project based quotas affect your full project. By default you can make 50,000 requests per day across your full project. This quota can be extended.
User based quotas are mostly for flood protection, to ensure that your application doesn't run too fast and spam the server.
Error: Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'Requests per user per 100 seconds' of service 'analyticsreporting.googleapis.com'.
The quota you are hitting is a user based quota: you can make at most 100 requests per 100 seconds. This quota cannot be extended beyond its documented maximum of 1,000; past that point you need to slow your application down, using exponential backoff (see the sketch below).
To increase it, go to the Google developer console under Library -> Google Analytics -> Manage -> Quota, where there is a pencil icon near the quota in question.
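A minimal Node.js sketch of exponential backoff around the batchGet call from the question; the retryable status codes, the error shape, and the maxRetries/delay values are assumptions, not something taken from the Reporting API docs:
// Retry the batchGet call with exponential backoff when the per-user rate
// limit is hit. Assumes `analyticsreporting` is set up as in the question.
async function batchGetWithBackoff(requestBody, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await analyticsreporting.reports.batchGet({ requestBody });
    } catch (err) {
      // Rate-limit errors usually surface as HTTP 429 (sometimes 403);
      // treat anything else as fatal. The shape of `err` is an assumption.
      const status = err.code || (err.response && err.response.status);
      if (attempt === maxRetries || (status !== 429 && status !== 403)) {
        throw err;
      }
      // Wait 2^attempt seconds plus random jitter before the next attempt.
      const delayMs = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
// Usage: const res = await batchGetWithBackoff({ reportRequests: [ /* as in the question */ ] });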

Convert Kubernetes state metrics curl response units to megabytes

I need to get kube state metrics in Mi; by default they come in Ki. Can anyone please help me?
[root@dte-dev-1-bizsvck8s-mst harsha]# curl http://<server IP>:8088/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/hello-kubernetes-65bc74d4b9-qp9dc
{
"kind": "PodMetrics",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"name": "hello-kubernetes-65bc74d4b9-qp9dc",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/hello-kubernetes-65bc74d4b9-qp9dc",
"creationTimestamp": "2020-04-17T12:31:59Z"
},
"timestamp": "2020-04-17T12:31:26Z",
"window": "30s",
"containers": [
{
"name": "hello-kubernetes",
"usage": {
"cpu": "0",
"memory": "20552Ki"
}
}
]
}
I want to get the memory usage in Mi (megabytes), not Ki. Please help me!
This unit is hardcoded in the official kube-state-metrics code, which shouldn't be changed. For example, node metrics - especially memory usage - are in megabytes, not kilobytes.
To get the memory usage of a specific pod in megabyte units, simply execute:
kubectl top pod --namespace example-app
NAME                              CPU(cores)   MEMORY(bytes)
app-deployment-76bf4969df-65wmd   12m          1Mi
app-deployment-76bf4969df-mmqvt   16m          1Mi
The kubectl top command returns current CPU and memory usage for a cluster’s pods or nodes, or for a particular pod or node if specified.
You can also convert the received value:
1 KB = 0.001 MB (in decimal),
1 KB = 0.0009765625 MB (in binary)
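If you are consuming the curl response from code anyway, it is easy to convert the quantity yourself instead of changing what the API returns. A small Node.js sketch, assuming the memory value always arrives with a plain Ki suffix like "20552Ki":
// Convert a quantity string such as "20552Ki" to Mi (1 Mi = 1024 Ki).
function kiToMi(quantity) {
  const match = /^(\d+)Ki$/.exec(quantity);
  if (!match) throw new Error('Unexpected quantity format: ' + quantity);
  return Number(match[1]) / 1024;
}
// With the value from the curl output above:
console.log(kiToMi('20552Ki').toFixed(1) + 'Mi'); // roughly 20.1Mi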
Take a look: kube-state-metrics-monitoring.

How to do a memory heap size test in Node.js?

I have 2 versions of the data:
Just an array of JSON objects.
The same array, but with references.
For example, if I have 1000 objects with the same manager:
{
"_id": 101,
"manager": {
"_id": 12160,
"name": "name"
}
}
I change the manager object to be a reference:
managers[12160] = {
"_id": 12160,
"name": "name"
}
array.forEach(item => item.manager = managers[12160])
So, I want to validate that it really decreases the memory size. Any idea how to do that?
As stated in the previous answer, the proposed sample code most probably will not have any noticeable effect on the heap, but you can use the Node modules 'os' and 'v8' to read some stats from your environment. Here is an example:
const os = require('os');
const v8 = require('v8');
var stats = {
'Load Average' : os.loadavg().join(' '),
'CPU Count' : os.cpus().length,
'Free Memory' : os.freemem(),
'Current Malloced Memory' : v8.getHeapStatistics().malloced_memory,
'Peak Malloced Memory' : v8.getHeapStatistics().peak_malloced_memory,
'Allocated Heap Used (%)' : Math.round((v8.getHeapStatistics().used_heap_size / v8.getHeapStatistics().total_heap_size) * 100),
'Available Heap Allocated (%)' : Math.round((v8.getHeapStatistics().total_heap_size / v8.getHeapStatistics().heap_size_limit) * 100),
'Uptime' : os.uptime()+' Seconds'
};
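To directly compare the two versions of the data from the question, you can also sample process.memoryUsage().heapUsed (or used_heap_size from v8 above) before and after building each variant. A rough sketch, assuming Node is started with --expose-gc so global.gc() is available; buildItems and the object count are invented for illustration:
// Build items that each embed their own manager copy, or items that all
// share a single manager object by reference.
function buildItems(shareManager) {
  const sharedManager = { _id: 12160, name: 'name' };
  const items = [];
  for (let i = 0; i < 100000; i++) {
    items.push({
      _id: i,
      manager: shareManager ? sharedManager : { _id: 12160, name: 'name' },
    });
  }
  return items;
}
function measure(label, shareManager) {
  global.gc(); // settle the heap before sampling (requires --expose-gc)
  const before = process.memoryUsage().heapUsed;
  const items = buildItems(shareManager);
  global.gc(); // collect temporary garbage produced while building
  const after = process.memoryUsage().heapUsed;
  console.log(label, ((after - before) / 1024 / 1024).toFixed(1) + ' MB for', items.length, 'items');
}
measure('separate manager copies:', false);
measure('shared manager reference:', true);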

Neo4j CPU stuck on GC

Suddenly, after working for one month with almost no CPU use (between 1 and 5%), the Neo4j server is stuck at 100% CPU, garbage collecting.
I have neo4j-enterprise 2.0.3 (not embedded) running on an Ubuntu server with 4 processors.
This is my Neo4j configuration:
wrapper:
wrapper.java.additional=-Dorg.neo4j.server.properties=conf/neo4j-server.properties
wrapper.java.additional=-Djava.util.logging.config.file=conf/logging.properties
wrapper.java.additional=-Dlog4j.configuration=file:conf/log4j.properties
#********************************************************************
# JVM Parameters
#********************************************************************
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
# Remote JMX monitoring, uncomment and adjust the following lines as needed.
# Also make sure to update the jmx.access and jmx.password files with appropriate permission roles and passwords,
# the shipped configuration contains only a read only role called 'monitor' with password 'Neo4j'.
# For more details, see: http://download.oracle.com/javase/6/docs/technotes/guides/management/agent.html
# On Unix based systems the jmx.password file needs to be owned by the user that will run the server,
# and have permissions set to 0600.
# For details on setting these file permissions on Windows see:
# http://download.oracle.com/javase/1.5.0/docs/guide/management/security-windows.html
wrapper.java.additional=-Dcom.sun.management.jmxremote.port=3637
wrapper.java.additional=-Dcom.sun.management.jmxremote.authenticate=true
wrapper.java.additional=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional=-Dcom.sun.management.jmxremote.password.file=conf/jmx.password
wrapper.java.additional=-Dcom.sun.management.jmxremote.access.file=conf/jmx.access
# Some systems cannot discover host name automatically, and need this line configured:
#wrapper.java.additional=-Djava.rmi.server.hostname=$THE_NEO4J_SERVER_HOSTNAME
# disable UDC (report data to neo4j..)
wrapper.java.additional=-Dneo4j.ext.udc.disable=true
# Uncomment the following lines to enable garbage collection logging
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps
wrapper.java.additional=-XX:+PrintGCApplicationStoppedTime
#wrapper.java.additional=-XX:+PrintPromotionFailure
#wrapper.java.additional=-XX:+PrintTenuringDistribution
# Uncomment the following lines to enable JVM startup diagnostics
#wrapper.java.additional=-XX:+PrintFlagsFinal
#wrapper.java.additional=-XX:+PrintFlagsInitial
# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size in MB.
#wrapper.java.initmemory=512
wrapper.java.maxmemory=3072
#********************************************************************
# Wrapper settings
#********************************************************************
# path is relative to the bin dir
wrapper.pidfile=../data/neo4j-server.pid
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.name=neo4j
Default values:
# Default values for the low-level graph engine
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=120M
neostore.propertystore.db.mapped_memory=90M
neostore.propertystore.db.strings.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=100M
What can I do?
EDIT:
The store file sizes:
[
{
"description": "Information about the sizes of the different parts of the Neo4j graph store",
"name": "org.neo4j:instance=kernel#0,name=Store file sizes",
"attributes": [
{
"description": "The total disk space used by this Neo4j instance, in bytes.",
"name": "TotalStoreSize",
"value": 401188207,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used by the current Neo4j logical log, in bytes.",
"name": "LogicalLogSize",
"value": 24957516,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used to store array properties, in bytes.",
"name": "ArrayStoreSize",
"value": 128,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used to store nodes, in bytes.",
"name": "NodeStoreSize",
"value": 524160,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used to store properties (excluding string values and array values), in bytes.",
"name": "PropertyStoreSize",
"value": 145348280,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used to store relationships, in bytes.",
"name": "RelationshipStoreSize",
"value": 114126903,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
},
{
"description": "The amount of disk space used to store string properties, in bytes.",
"name": "StringStoreSize",
"value": 128,
"isReadable": "true",
"type": "long",
"isWriteable": "false ",
"isIs": "false "
}
],
"url": "org.neo4j/instance%3Dkernel%230%2Cname%3DStore+file+sizes"
}
]
Assuming you have 16 GB of RAM in the machine.
The first thing is to set the neostore.xxx.mapped_memory settings to match the size of your store files. I'm assuming their total is 5 GB -> you have 11 GB left. See http://docs.neo4j.org/chunked/2.0.4/configuration-caches.html for more details.
Reserve some RAM for the system: 1 GB -> you have 10 GB left.
Assign the remaining RAM to the Java heap using wrapper.java.initmemory and wrapper.java.maxmemory. Set both to the same value (see the sketch below).
If hpc is used as the cache_type, consider tweaking its settings based on the cache hit ratio for relationships and nodes. Use JMX to monitor them: http://docs.neo4j.org/chunked/2.0.4/jmx-mxbeans.html#jmx-cache-nodecache.
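Following the arithmetic above, the heap settings in conf/neo4j-wrapper.conf would then look something like this (10240 MB is just the 10 GB left over in the 16 GB example; size it for your own machine):
# Initial and maximum heap set to the same value, in MB
wrapper.java.initmemory=10240
wrapper.java.maxmemory=10240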
We also experienced these kinds of issues. In addition to configuration changes similar to what @stefan-armbruster mentioned and updating Neo4j to 2.1.2, we also configured Neo4j to use G1 garbage collection instead of CMS.
Since making the garbage collection change we have seen far fewer spikes than we did previously.
If you want to give it a shot you can enable G1 GC by adding the following to your conf/neo4j-wrapper.conf file.
wrapper.java.additional=-XX:+UseG1GC
Hopefully, with a combination of this and the changes suggested by @stefan-armbruster, you'll resolve the issue.
