Random lock timeout error on ArangoDB update

I get this random timeout error when updating an existing document.
AQL: timeout waiting to lock key Operation timed out: Timeout waiting to lock key; key: 1013 (while executing)
ArangoDB version: 3.9
ArangoJS version: 8.0.0
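No answer is recorded here, but a common mitigation for transient "timeout waiting to lock key" errors is to retry the write with a short backoff, since the key lock is usually held only briefly by a concurrent writer. A minimal, driver-agnostic sketch in Python; TimeoutError stands in for the driver's actual lock-timeout exception, and the wrapped operation would be your ArangoJS/AQL update call:

```python
import random
import time

def retry_on_lock_timeout(operation, attempts=5, base_delay=0.05):
    """Run `operation`, retrying with jittered exponential backoff when it
    raises TimeoutError (a stand-in for the driver's lock-timeout error)."""
    for attempt in range(attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Sleep base_delay * 2^attempt, plus jitter so concurrent
            # retriers do not all wake up and collide again at once.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

With arangojs you would wrap the db.query(...) call the same way and catch its timeout error; reducing write contention on the same key (for example by batching updates to hot documents) removes the root cause rather than papering over it.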

Related

WARNING Connection pool is full, discarding connection: fapi.binance.com

WARNING Connection pool is full, discarding connection: fapi.binance.com
I created a bot with ccxt to place orders. I used multithreading for the order-creation function, and I'm getting this warning.
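The warning comes from urllib3: ccxt's synchronous exchanges make HTTP requests through a connection pool of finite size, and many threads hammering one shared exchange instance can exhaust it. One common pattern is to give each thread its own client via threading.local; a minimal sketch, where make_exchange is a hypothetical factory standing in for something like lambda: ccxt.binance(config):

```python
import threading

_local = threading.local()

def get_client(make_exchange):
    """Return a per-thread client, creating it on first use in each thread."""
    if not hasattr(_local, "client"):
        _local.client = make_exchange()
    return _local.client
```

Alternatively, keep one shared instance but enlarge its pool, e.g. by mounting a requests HTTPAdapter with a larger pool_maxsize on the exchange's underlying session; whether and how that session is exposed depends on your ccxt version, so treat that route as an assumption to verify.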

SQL Query Wait Operation times out but only when running in an Azure Function

I have an Azure Function App and Azure SQL DB.
In my Azure Function I query data (using EF Core 3) with the following code:
var stuffIWant = dbContext.UsageCounts
    .Where(a => a.Elo > 0 && a.Elo < 1300)
    .GroupBy(a => a.TheHash)
    .Select(b => new stuff { Hash = (long)b.Key, Count = b.Count() })
    .OrderByDescending(a => a.Count)
    .Take(10)
    .ToList();
I am getting a very high failure rate with an error that looks like:
[Error] Executed 'FunctionName' (Failed, Id=456456-0040-4349-81e3-54646546, Duration=30220ms)The wait operation timed out.
The exception:
[Error] Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
When I execute the function it sometimes (rarely) works fine and is able to make 8 queries like this (with the bounds for Elo changing) and each takes <200ms to complete.
I can also run this in a Sandbox project on my local machine, connecting to the same Azure DB using EF with the same model and can run it hundreds of times without ever timing out, each query taking <200ms to complete.
When the Azure function does work it always goes through each of the 8 queries and returns the data, when it doesn't work it always fails at the first query.
I added a "test" query to my function:
var test = dbContext.UsageCounts.Where(a => a.Elo > 2200).Take(10).ToList();
This happens before my failing query, and always succeeds.
Why is my function timing out most of the time when this query is nowhere near the execution time limit?
My database is not set to auto-pause.
My compute and IO utilization is under 20%
To resolve this error ([Error] Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.), you can try the following:
In SSMS, under Tools -> Options -> Query Execution -> SQL Server -> General -> Execution time-out, set the time-out to more than 30 seconds, or to 0 to wait indefinitely.
Check your connection string, e.g. connectionString="Data Source=.;Initial Catalog=LMS;User ID=test;Password=test#123"
You can refer to "The wait operation timed out establishing a connection to SQL Server from an Azure Function App", "Azure Function Execution Timeout Expired", and "Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance".
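If the stalls happen while establishing the connection rather than while executing the query, the connection string itself can carry a longer connection timeout; Connect Timeout (in seconds) is a standard SqlClient keyword. The server and credential values below are the placeholders from the answer above:

```text
connectionString="Data Source=.;Initial Catalog=LMS;User ID=test;Password=test#123;Connect Timeout=60"
```

Note that this governs connection establishment only; the per-command limit behind "Execution Timeout Expired" (30 seconds by default) is set on the command, or in EF Core via dbContext.Database.SetCommandTimeout(...).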

Odoo timeout killing cron

I found in the logs that a timeout set to 120s is killing cron workers.
The first issue I noticed is that the plugin which makes database backups gets stuck in a loop, creating zip after zip, so within 1-2 hours the disk is full.
The second is the scheduled action called "Mass Mailing: Process queue" in Odoo.
It should run every 60 minutes, but it is getting killed by the timeout and runs again immediately after being killed.
Where should I look for this timeout? I have already raised all timeouts in odoo.conf to 500 seconds.
Odoo v12 Community, Ubuntu 18, nginx.
2019-12-02 06:43:04,711 4493 ERROR ? odoo.service.server: WorkerCron (4518) timeout after 120s
2019-12-02 06:43:04,720 4493 ERROR ? odoo.service.server: WorkerCron (4518) timeout after 120s
The following timeouts in odoo.conf are usually the ones responsible for the behaviour you are experiencing (in particular the second one):
limit_time_cpu = 60
limit_time_real = 120
More explanations are in the Odoo documentation: https://www.odoo.com/documentation/12.0/reference/cmdline.html#multiprocessing
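For example, raising both limits in odoo.conf (the values below are illustrative; limit_time_real is the hard wall-clock limit that produces the "timeout after 120s" message):

```ini
[options]
limit_time_cpu = 600
limit_time_real = 700
```

Restart the Odoo service afterwards so the workers pick up the new limits. Also note that command-line options take precedence over the config file, and Odoo may be reading a different config file than the one you edited; either would explain the limit staying at 120s after you raised it.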

Getting "Error initializing cluster data" in OpsCenter

I am getting the following error in OpsCenter; after some time the issue resolved itself.
Error initializing cluster data: The request to
/APP_Live/keyspaces?ksfields=column_families%2Creplica_placement_strategy%2Cstrategy_options%2Cis_system%2Cdurable_writes%2Cskip_repair%2Cuser_types%2Cuser_functions%2Cuser_aggregates&cffields=solr_core%2Ccreate_query%2Cis_in_memory%2Ctiers timed out after 10 seconds..
If you continue to see this error message, you can workaround this timeout by setting [ui].default_api_timeout to a value larger than 10 in opscenterd.conf and restarting opscenterd.
Note that this is a workaround and you should also contact DataStax
Support to follow up.
As the message itself suggests, the workaround is to set [ui].default_api_timeout to a value larger than 10 in opscenterd.conf and restart opscenterd.
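Concretely, in opscenterd.conf (60 is an illustrative value; anything larger than the 10-second default should do):

```ini
[ui]
default_api_timeout = 60
```

Then restart the opscenterd service for the setting to take effect.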

We are running a map reduce/spark job to bulk load HBase data in one of the environments

We are running a map reduce/spark job to bulk load hbase data in one of the environments.
While running it, the connection to the HBase ZooKeeper cannot be initialized, throwing the following error.
16/05/10 06:36:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181 sessionTimeout=90000 watcher=hconnection-0x74b47a30, quorum=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181, baseZNode=/hbase
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Opening socket connection to server c321shu.int.westgroup.com/10.204.152.28:2181. Will not attempt to authenticate using SASL (unknown error)
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.204.24.16:35740, server: c321shu.int.westgroup.com/10.204.152.28:2181
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Session establishment complete on server c321shu.int.westgroup.com/10.204.152.28:2181, sessionid = 0x5534bebb441bd3f, negotiated timeout = 60000
16/05/10 06:36:11 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table ecpdevv1patents:NormNovusDemo
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Tue May 10 06:36:11 CDT 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller#3927df20, java.io.IOException: Call to c873gpv.int.westgroup.com/10.204.67.9:60020 failed on local exception: java.io.EOFException
We have executed the same job in Titan DEV too but faced the same problem. Please let us know if anyone has faced this problem before.
Details are,
• Earlier, the job was failing to connect to localhost/127.0.0.1:2181. Hence the property hbase.zookeeper.quorum has been set in the map reduce code to c149jub.int.westgroup.com,c321shu.int.westgroup.com,c167rvm.int.westgroup.com, which we got from hbase-site.xml.
• We are using jars of cdh version 5.3.3.
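For reference, the quorum being set in the map reduce code mirrors what hbase-site.xml on the client should already contain (hostnames taken from the log above; the port defaults to 2181 via hbase.zookeeper.property.clientPort):

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>c149jub.int.westgroup.com,c321shu.int.westgroup.com,c167rvm.int.westgroup.com</value>
</property>
```

Since the log shows the ZooKeeper session completing and the failure occurring later on the region server call (EOFException on port 60020), ZooKeeper configuration may not be the culprit; a client/server version mismatch between the CDH 5.3.3 jars and the cluster is worth ruling out.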
