Slick 3.0 Logging Query Performance - slick

Currently I have enabled DEBUG logging on:
slick.backend
It logs the transaction start/end, the compiled SQL query being run by Slick, and success/results.
Can Slick provide information about query running time? Which package should I enable DEBUG mode on to get this information?
Edit:
I found this link which talks about this not being possible. Does this still hold true?

Found the answer to the question with help from @szeiger. Enable DEBUG logging on the following logger to get timing information on the execution of SQL queries in Slick 3.0:
slick.jdbc.JdbcBackend.benchmark
Other useful packages to enable DEBUG logging on:
slick.jdbc.JdbcBackend.statement
slick.jdbc.StatementInvoker.result
slick.compiler.QueryCompilerBenchmark
*Reference:*
https://github.com/slick/slick/blob/master/common-test-resources/logback.xml
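Following the referenced logback.xml, a minimal configuration enabling these loggers might look like the sketch below (the appender setup is generic boilerplate; the logger names are the ones listed above):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Timing information for executed SQL statements -->
  <logger name="slick.jdbc.JdbcBackend.benchmark" level="DEBUG"/>
  <!-- The SQL statements themselves -->
  <logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG"/>
  <!-- Result sets returned by statements -->
  <logger name="slick.jdbc.StatementInvoker.result" level="DEBUG"/>
  <!-- Per-phase timings of Slick's query compiler -->
  <logger name="slick.compiler.QueryCompilerBenchmark" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```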

Spark SHC Core - Log region/regionserver

I'm using the SHC Spark connector by Hortonworks to read an HBase table:
https://github.com/hortonworks-spark/shc
I have some tasks that take a very long time to complete, and I suspect it's because of region size skew, but I would like to confirm this by logging which region/region server each task is reading from.
I tried turning on debug logs by doing the following in the driver:
Logger.getLogger("org").setLevel(Level.DEBUG);
Logger.getLogger("akka").setLevel(Level.DEBUG);
But it didn't seem to have any effect.
Is it possible to log the above somehow?
> It didn't seem to have any effect.

Unfortunately, SHC itself does not log the region/region server name anywhere during execution, which is why enabling DEBUG logging does not help at all.

> Is it possible to log the above somehow?

Yes, but only if you are willing to customize SHC's source code. You would need to insert your own log statements, then rebuild, test, package, and ship the modified connector with your application.
Where to add the logging depends on your goal; for example, you might call logDebug() or logInfo() with the region name during a table-scan task. The relevant source code is HBaseTableScan.
The build, test, and ship details are in SHC's repo documentation.
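Purely as an illustration of the kind of patch meant here (the exact insertion point and the `scanRange` variable are assumptions about SHC's internals; `logDebug` comes from Spark's Logging trait and `Bytes` from HBase utilities):

```scala
// Hypothetical addition inside SHC's HBaseTableScan: log the region's key
// range before each scan, so the executor logs reveal which region a slow
// task is reading.
import org.apache.hadoop.hbase.util.Bytes

logDebug(s"Scanning HBase region range [${Bytes.toString(scanRange.start)}, " +
         s"${Bytes.toString(scanRange.stop)})")
```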

Can't connect to cassandra, authentication error, please carefully check your auth settings, retrying soon

I'm stuck with the below error while configuring the DSE address.yaml.
INFO [async-dispatch-1] 2021-04-17 07:50:06,487 Starting DynamicEnvironmentComponent
 ;;;;;
 ;;;;;
 INFO [async-dispatch-1] 2021-04-17 07:50:06,503 Starting monitored database connection.
 ERROR [async-dispatch-1] 2021-04-17 07:50:08,717 Can't connect to Cassandra, authentication error, please carefully check your Auth settings, retrying soon.
 INFO [async-dispatch-1] 2021-04-17 07:50:08,720 Finished starting system.
I configured the Cassandra user and password in cluster_name.conf as well as address.yaml.
Any advice would be appreciated.
You've provided very little information about the issue you're running into so our ability to assist you is very limited.
In any case, my suspicion is that (a) you haven't configured the correct seed_hosts in cluster_name.conf, or (b) the seeds are unreachable from the agents.
If this is your first time installing OpsCenter, I highly recommend that you let OpsCenter install and configure the agents automatically. When you add the cluster to OpsCenter, you will get prompted with a choice on whether to do this automatically:
For details, see Installing DataStax Agents automatically.
As a side note, we don't recommend you set any properties in address.yaml unless it's a last resort. Wherever possible and in almost all cases, configure the agent properties in cluster_name.conf so it's managed centrally instead of individual nodes.
It's difficult to help you troubleshoot your problem in a Q&A forum. If you're still experiencing issues, my suggestion is to log a DataStax Support ticket so one of our engineers can assist you directly. Cheers!
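To make suspicion (a) concrete, here is a minimal sketch of the relevant part of cluster_name.conf; the section layout follows OpsCenter's usual conventions, and the host addresses and credentials are placeholders, not values from the question:

```ini
[cassandra]
# Seed nodes the agents contact; these must be reachable from every agent
seed_hosts = 10.0.0.1, 10.0.0.2

# Credentials used when Cassandra authentication is enabled
username = opscenter_user
password = opscenter_password
```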

Is the MemSQL reported version 5.5.8 adjustable?

In the [MemSQL documentation FAQ page](https://docs.memsql.com/v7.0/introduction/faqs/memsql-faq/), it says:
MemSQL reports the version engine variable as 5.5.8. Client drivers look for this version to determine how to interact with the server.
This is understandable, but an unfortunate side effect is that MemSQL fails our security team's scan tests and raises a lot of red flags. On the same page, MemSQL says it is not necessarily impacted by any of MySQL's known security vulnerabilities:
The MemSQL and MySQL servers are separate database engines which do not share any code, so security issues in the MySQL server are not applicable to MemSQL.
But red flags are red flags, so I wonder whether this reported version is user-adjustable so that we can calm the security scans. I would also like to know what known impacts changing the reported version could cause.
Yes, the "MySQL compatibility" version can be changed via the compat_version global variable. Set it to the version string you want returned by select @@version (e.g., '8.0.20'). Keep in mind that client drivers and MySQL applications occasionally check this version to enable/disable features, so you need to test the impact of the change on your applications.
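A minimal sketch of the change (the '8.0.20' version string is only an example; verify that your MemSQL version supports compat_version before relying on this):

```sql
-- Report a different MySQL-compatible version to clients
SET GLOBAL compat_version = '8.0.20';

-- Clients (and security scanners) now see the new version string
SELECT @@version;
```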

ArangoDB - Help diagnosing database corruption after system restart

I've been working with Arango for a few months now within a local, single-node development environment that regularly gets restarted for maintenance reasons. About 5 or 6 times now my development database has become corrupted after a controlled restart of my system. When it occurs, the corruption is subtle in that the Arango daemon seems to start ok and the database structurally appears as expected through the web interface (collections, documents are there). The problems have included the Foxx microservice system failing to upload my validated service code (generic 500 service error) as well as queries using filters not returning expected results (damaged indexes?). When this happens, the only way I've been able to recover is by deleting the database and rebuilding it.
I'm looking for advice on how to debug this issue - such as what to look for in log files, server configuration options that may apply, etc. I've read most of the development documentation, but only skimmed over the deployment docs, so perhaps there's an obvious setting I'm missing somewhere to adjust reliability/resilience? (this is a single-node local instance).
Thanks for any help/advice!
Please note that issues like this are better discussed on GitHub.

Dynamically Changing Hazelcast Server Log Level

I am using the client-server mode of Hazelcast. Is it possible to control the logging level of the Hazelcast server dynamically from a Hazelcast client? My intention is to start the Hazelcast server in ERROR mode by default and, in case of any problem, change the log level to DEBUG without restarting the Hazelcast server.
Thanks
JK
Hazelcast does not depend on any custom logging frameworks and makes use of adaptors to connect to a number of existing logging frameworks. See some details here:
http://docs.hazelcast.org/docs/3.5/manual/html/logging.html
Most of the current logging frameworks allow you to change log levels dynamically/programmatically. I'm at a loss here, since you haven't given any details of the logging framework you use.
For example, with log4j:
LogManager.getLogger("loggername").setLevel(newLogLevel);
will achieve what you are looking for. You can also change the log4j configuration file (log4j.xml) at runtime, and the changes will take effect without restarting any of the Hazelcast servers.
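If the server is instead configured with Hazelcast's jdk logging type (an assumption; check your hazelcast.logging.type setting), the same runtime switch works with plain java.util.logging, where SEVERE plays the role of ERROR and FINE the role of DEBUG. The "com.hazelcast" logger name assumes Hazelcast's loggers are named after its com.hazelcast.* classes:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DynamicLogLevel {
    public static void main(String[] args) {
        // Parent logger that com.hazelcast.* class loggers inherit from
        Logger hzLogger = Logger.getLogger("com.hazelcast");

        // Start quiet: SEVERE is the j.u.l. equivalent of ERROR
        hzLogger.setLevel(Level.SEVERE);
        System.out.println(hzLogger.isLoggable(Level.FINE)); // debug disabled

        // Flip to the DEBUG equivalent at runtime, no restart needed
        hzLogger.setLevel(Level.FINE);
        System.out.println(hzLogger.isLoggable(Level.FINE)); // debug enabled
    }
}
```

The client-to-server part would still need a small trigger on the server side (for example, a listener or management hook) that invokes this switch, since changing a level in the client JVM does not affect the server JVM.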
