Is the MemSQL reported version 5.5.8 adjustable? - security

In the [MemSQL documentation FAQ page](https://docs.memsql.com/v7.0/introduction/faqs/memsql-faq/), it says:
MemSQL reports the version engine variable as 5.5.8. Client drivers look for this version to determine how to interact with the server.
This is understandable, but an unfortunate side effect is that MemSQL fails our security team's scans and raises a lot of red flags. On the same page, MemSQL states that it is not necessarily impacted by any of the security vulnerabilities found in MySQL:
The MemSQL and MySQL servers are separate database engines which do not share any code, so security issues in the MySQL server are not applicable to MemSQL.
But red flags are red flags, so I wonder whether this reported version is user-adjustable, so that we can calm the security scans. I would also like to know what known impacts changing the reported version could have.

Yes, the MySQL compatibility version can be changed via the compat_version global variable. Set it to the version string you want returned by SELECT @@version (e.g., '8.0.20'). Keep in mind that client drivers and MySQL applications sometimes check this version to enable/disable features, so you need to test the impact of the change on your applications.
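A minimal sketch of the change, assuming your user may set global engine variables and that compat_version accepts a full version string as described above:

-- check the version currently reported to clients
SELECT @@version;

-- report a different MySQL compatibility version (value per the answer above)
SET GLOBAL compat_version = '8.0.20';

-- verify; new connections should now see the updated version
SELECT @@version;

After the change, re-run both the security scan and your application tests against a non-production cluster before rolling it out.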

Related

Can't connect to cassandra, authentication error, please carefully check your auth settings, retrying soon

I'm stuck on the error below while configuring the DSE address.yaml.
INFO [async-dispatch-1] 2021-04-17 07:50:06,487 Starting DynamicEnvironmentComponent
 ;;;;;
 ;;;;;
 INFO [async-dispatch-1] 2021-04-17 07:50:06,503 Starting monitored database connection.
 ERROR [async-dispatch-1] 2021-04-17 07:50:08,717 Can't connect to Cassandra, authentication error, please carefully check your Auth settings, retrying soon.
 INFO [async-dispatch-1] 2021-04-17 07:50:08,720 Finished starting system.
I configured the cassandra user and password in cluster_name.conf and address.yaml as well.
Any advice would be appreciated.
You've provided very little information about the issue you're running into so our ability to assist you is very limited.
In any case, my suspicion is that (a) you haven't configured the correct seed_hosts in cluster_name.conf, or (b) the seeds are unreachable from the agents.
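For reference, a minimal sketch of the relevant section of cluster_name.conf; the addresses and credentials are placeholders, and the exact keys should be checked against the OpsCenter agent configuration docs for your version:

[cassandra]
# seed nodes the OpsCenter daemon and agents contact (placeholder addresses)
seed_hosts = 10.0.0.1, 10.0.0.2
# credentials for an authenticated cluster (placeholders)
username = cassandra
password = cassandra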
If this is your first time installing OpsCenter, I highly recommend that you let OpsCenter install and configure the agents automatically. When you add the cluster to OpsCenter, you will be prompted with a choice of whether to do this automatically.
For details, see Installing DataStax Agents automatically.
As a side note, we don't recommend setting any properties in address.yaml except as a last resort. Wherever possible, configure the agent properties in cluster_name.conf so they are managed centrally rather than on individual nodes.
It's difficult to help you troubleshoot your problem in a Q&A forum. If you're still experiencing issues, my suggestion is to log a DataStax Support ticket so one of our engineers can assist you directly. Cheers!

ArangoDB - Help diagnosing database corruption after system restart

I've been working with Arango for a few months now within a local, single-node development environment that regularly gets restarted for maintenance reasons. About 5 or 6 times now my development database has become corrupted after a controlled restart of my system. When it occurs, the corruption is subtle in that the Arango daemon seems to start ok and the database structurally appears as expected through the web interface (collections, documents are there). The problems have included the Foxx microservice system failing to upload my validated service code (generic 500 service error) as well as queries using filters not returning expected results (damaged indexes?). When this happens, the only way I've been able to recover is by deleting the database and rebuilding it.
I'm looking for advice on how to debug this issue - such as what to look for in log files, server configuration options that may apply, etc. I've read most of the development documentation, but only skimmed over the deployment docs, so perhaps there's an obvious setting I'm missing somewhere to adjust reliability/resilience? (this is a single-node local instance).
Thanks for any help/advice!
Please note that issues like this should rather be discussed on GitHub.

Windows client program refuses to cooperate with linux postgresql database

I have a PostgreSQL database engine running on MS-Windows 7 (Czech). A client program that communicates with this PostgreSQL database also runs on MS-Windows 7 (Czech). The program is installed in its US-English language variant.
In this default configuration everything works well. But I want to pair the Windows client program with a PostgreSQL database running on Linux.
The first try failed, and no useful information was logged on either side.
I compared the newly created Linux database with the MS-Windows one. The only difference I found was in the COLLATION settings: on the Linux database the default is "cs_CZ.UTF-8", while on the MS-Windows database it is "Czech_Czech Republic.1250".
I don't really know what the COLLATION is for, because on both operating systems the databases' default internal encoding is UTF-8. Does it mean that the MS-Windows client speaks to the database using cp1250, while the Linux database expects UTF-8? (Every explanation of COLLATION talks about locale alphabet ordering.)
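As background: PostgreSQL converts between the client's encoding and the server encoding automatically, so a cp1250 client can in principle talk to a UTF-8 database. A quick sketch of how to check both from any SQL session:

-- the database's storage encoding (e.g. UTF8)
SHOW server_encoding;

-- the encoding assumed for the current client session (e.g. WIN1250);
-- the server transcodes between the two automatically
SHOW client_encoding;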
It is impossible to change the client's behavior, but because of stability and other database considerations it is very important for us to migrate to the Linux server platform.
Can somebody give me some helpful guidelines on how to prepare the Linux PostgreSQL database so that it is accepted by the MS-Windows client program? The program's support team has not helped me (cannot, or maybe is not allowed to) over two months of discussion.
Thanks in advance
Yes, in the end it was very easy: the critical difference between the MS-Windows configuration and the Linux one was the authentication configuration file, pg_hba.conf! On Linux its default content is more tolerant, geared toward a developer working over a localhost connection:
host all all 127.0.0.1/32 trust
but on MS-Windows the connection rules are stricter:
host all all 127.0.0.1/32 md5
host all all 0.0.0.0/0 md5
The client program has no configuration file that could change its behavior.
A lot of packet capturing with Wireshark was needed to spot such a simple difference.
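In other words, the fix on the Linux side is to mirror the Windows rules in pg_hba.conf so the remote client is allowed in with password authentication; a sketch, with a placeholder network range:

# allow the Windows client machine(s); the address range is a placeholder
host all all 192.168.1.0/24 md5

# afterwards, reload the configuration, e.g. from a superuser session:
# SELECT pg_reload_conf();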

Datastax Enterprise - 5.0 Best Practices - Installation

I am currently evaluating the DataStax Enterprise 5 installation for my production system. There are many installation methods available. When we choose DSE's unattended runinstaller method with an option file, it provides two modes:
1. Service based - needs root permission; binaries are installed in /usr/share/dse and /etc/dse.
2. No-service based - does not need root; binaries can be installed in a custom location, equivalent to a tarball-based installation without services.
I have the following questions:
Is there any best practice on which method is best suited for a production installation (in short, are there any problems with running a no-service-based runinstaller installation)?
Is there a way to modify the runinstaller in a service-based installation to point to a different DSE home than /usr/share/dse and /etc/dse, something like /cassandra owned by the cassandra user?
Any other best practices on installation methods currently live in production without issues.
Regards
Any of the methods specified here are fine for production installations.
Not that I know of; you might want to look at using the tarball installation if you need this level of configuration.
There are a whole lot of things you need to think about when planning a cluster for DSE 5. I would start by looking at this list here.
I'm an OpsCenter developer who works on the Lifecycle Manager feature, so I'm more than a bit biased... but I think that OpsCenter Lifecycle Manager is an excellent way to install and manage DSE if you don't already have something like Chef or Ansible in use enterprise-wide. It's a graphical webapp with a RESTful API in case you need to script it. It deploys DSE using deb/rpm packages over SSH and can configure pretty much every DSE option there is.
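As a sketch of the scripting angle, assuming OpsCenter's default port of 8888; the endpoint path below is an assumption on my part, so check the LCM API reference for your OpsCenter version:

# list LCM-managed clusters (path is an assumption; see the LCM API docs)
curl http://<opscenter-host>:8888/api/v2/lcm/clusters/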
As to your other questions:
Services vs. no-services installations: you probably want a services-based installation. It behaves more like a "normal" Linux service that can be managed with the 'service' command. A no-services install is primarily useful if you don't have root access because of very tight security policies in your org; if you choose to go that route you'll need to decide how you want to manage DSE startup and shutdown (for clean reboots, for example).
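For example, with a services-based install the node is managed like any other service; a sketch, assuming a sysvinit-style Linux:

# start, stop, or check DSE via its init service
sudo service dse start
sudo service dse stop
sudo service dse status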
The DSE installer can probably handle non-standard paths, but I'm not familiar enough with the details. LCM can handle some non-standard paths but not all of them (DSE will always be installed to the standard locations, for example). If you want to control every aspect of the install very tightly, tarball is your best choice. That's a lot of complexity, though; do you really need to control every path?
The OpsCenter Best Practice service is probably the best list of recommended things to do in production, and it is very easy to turn on for LCM-managed clusters. But even if you don't use LCM, I recommend you set up OpsCenter so you can use the Best Practice Service.
You can find the OpsCenter install steps at: https://docs.datastax.com/en/latest-opsc/opsc/LCM/opscLCMOverview.html.

Ops Center LCM HTTP 401 with Public DataStax Repository

I have installed Ops Center 6.0 on Ubuntu 16.04 LTS.
I am using Lifecycle Manager to provision a new DSE 5.0.3 cluster on Ubuntu 16.04 LTS using the DataStax Public repository.
Both Ops Center and the DSE cluster nodes are running in Amazon EC2.
I have configured the Repository in LCM using my DataStax login credentials.
However, LCM is reporting HTTP 401 errors when attempting to access the repository.
2016-11-14 08:02:46,975 [opscenterd] ERROR: Received error from node event-subtype="meld-error" job-id="71c7e70d-3c1d-479b-b1e1-dabb71758c33" name="Cassandra1" ssh-management-address="xxx.xxx.xxx.xxx" node-id="20cbe1cc-61f3-4218-b73d-cdd71167d488" event-type="error" message="Received an HTTP 401 Unauthorized response while attempting to access the package repository. Check your repository credentials." (opscd-pool-0)
Here are a couple of screenshots of the Job Details and Event Details screens.
I've checked that I provided the correct credentials many times now, and am pretty confident I haven't made a mistake.
Furthermore, on one of the nodes where the error is reported, I created a /etc/apt/sources.list.d/datastax.sources.list file with the same credentials, used curl to download the DataStax repository key, and successfully installed the DSE package manually. This suggests my credentials and connectivity to the DataStax repository are fine.
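For reference, the manual check looked roughly like this (a sketch; the repository URL and key location follow the DataStax docs of that era, and the credentials are placeholders):

# /etc/apt/sources.list.d/datastax.sources.list (placeholder credentials)
deb https://user:password@debian.datastax.com/enterprise stable main

# import the repository key, then install DSE
curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -
sudo apt-get update && sudo apt-get install dse-full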
I'm currently a bit stuck, so if anyone can offer any help on how to resolve this it would be much appreciated.
Thanks
Austin
OpsCenter developer here, this was a newly introduced bug in OpsCenter 6.0.4. We added an assertion early in the job to verify that repository credentials were entered correctly (it previously took longer to fail and gave a more confusing message). Unfortunately, the assertion did not correctly handle certain special characters (like the '#' sign commonly present in DataStax Academy account names). OpsCenter 6.0.5 was released yesterday afternoon as a single-fix release to address this specific issue, and we've improved our test coverage to ensure this kind of issue doesn't slip through again.
Thanks everyone for your detailed reports, this SO thread was one important source of information that helped us characterize the bug to the point where we could fix it promptly.
OpsCenter developer here, I work on LCM. It's hard to know exactly what's up given the information you provided, but some hints:
Post the full content of the job-event. It might have useful context that you haven't otherwise provided.
Compare the /etc/apt/sources.list.d/datastax.sources.list that you created manually with the /etc/apt/sources.list.d/opsc.list that LCM creates automatically. Apt requires that the credentials be provided in the URL, which means that special characters must be escaped. It's possible you have a special character in your password that needs to be escaped but isn't. But even if it's not an escaping problem, comparing your manually created file with the automatically created one may give some insight into where things are going wrong.
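Concretely, since apt puts the credentials in the URL, characters like '@' and '#' must be percent-encoded. A hypothetical example line (made-up account name and password):

# user "jane@example.com", password "p#ss": '@' becomes %40 and '#' becomes %23
deb https://jane%40example.com:p%23ss@debian.datastax.com/enterprise stable main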
Ensure that you're using your DataStax Academy credentials from https://academy.datastax.com/, and not something else.
