ERROR 2013 (HY000): Lost connection to MySQL server during query

A strange thing is happening that I am not able to understand.
When I run the query below, I get error 2013:
Query: select * from userMaster order by id desc limit 5;
ERROR 2013 (HY000): Lost connection to MySQL server during query
But the moment I either change the LIMIT clause or add a WHERE clause, it starts working:
e.g. select * from userMaster order by id desc limit 5,5; - this works

This problem can happen after upgrading MySQL 5.5 to 5.6.
Just run:
mysql_upgrade -u root -p
After that the query should work normally.
Reference: Upgrading MySQL
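To confirm the fix, here is a minimal sketch that re-runs the failing query from Python; PyMySQL and the placeholder credentials are assumptions, not from the question, and any MySQL client would do the same job:

import pymysql

# Placeholder connection details; substitute your own.
conn = pymysql.connect(host="localhost", user="root",
                       password="secret", database="mydb")
try:
    with conn.cursor() as cur:
        # The query that previously raised error 2013.
        cur.execute("select * from userMaster order by id desc limit 5")
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()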

Related

Writing to localhost Postgres returning infamous "42P01 parse_relation.c" error

Use-case: I am trying to write data from a Node.js process running locally (in a Docker container) to my locally running Postgres server (no Docker container). The Node.js process is able to connect to the server (setting the address to host.docker.internal solved that problem); however, when I attempt a simple "SELECT * FROM contact LIMIT 1" query, this error is returned:
{"type":"postgres error","request":"SELECT * FROM contact",
"error":{
"name":"error","length":106,
"severity":"ERROR",
"code":"42P01",
"position":"15",
"file":"parse_relation.c",
"line":"1376",
"routine":"parserOpenTable"}}
The relation error suggests the table is not found. I created this table using a Postgres client (Postico) and have been able to successfully query the table's contents with other pg clients as well.
I see multiple posts suggesting running the sequelize db:migrate command, but would that be the right solution here?
I did not create a model or a migration; I created the table directly in the database. Is there something else I may be overlooking that is producing this error?
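One way to narrow this down: 42P01 (undefined_table) usually means the connection is looking at a different database or schema than the one the table lives in. Below is a minimal diagnostic sketch; psycopg2 and the placeholder credentials are assumptions, and the idea is to run it with the same connection parameters the Node.js process uses:

import psycopg2

# Placeholder credentials; mirror what the Node.js process uses.
conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="me", password="secret")
cur = conn.cursor()
# Which database and schema does this connection actually see?
cur.execute("SELECT current_database(), current_schema()")
print(cur.fetchone())
# In which schema (if any) does a relation named 'contact' exist?
cur.execute("""SELECT table_schema, table_name
               FROM information_schema.tables
               WHERE table_name = 'contact'""")
print(cur.fetchall())
conn.close()

If the table turns up in a schema that is not on the connection's search_path, or in a different database, qualifying the query (e.g. SELECT * FROM myschema.contact) should make the 42P01 go away.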

InfluxDB says not authorized to execute statement

I'm facing some issues when I try to run a simple SELECT query on influxdb via the Python library.
I'm trying to run the following query:
influx_client.query('SELECT * FROM "measurements" LIMIT 10;')
Of course I switched to the corresponding database (and connected to the server) before executing the query. I also tried these variants of the query:
influx_client.query("SELECT * FROM \"measurements\" LIMIT 10;")
influx_client.query("SELECT * FROM 'measurements' LIMIT 10;")
influx_client.query('SELECT * FROM \'measurements\' LIMIT 10;')
influx_client.query('SELECT * FROM {0} LIMIT 10;'.format("measurements"))
influx_client.query("SELECT * FROM {0} LIMIT 10;".format("measurements"))
However, they all lead to the same issue.
The result (or more the error) that I get is the following:
influxdb.exceptions.InfluxDBClientError: 403: {"error":"error authorizing query: myuser not authorized to execute statement 'SELECT * FROM \"measurements\" LIMIT 10', requires READ on True"}
I know that my user has the required permissions because when connecting to the DB with a CLI I can execute the query. On top of that, I checked the permissions with SHOW GRANTS and I could see that all requirements are satisfied (the user actually does have all privileges).
I have seen some similar issues already (for instance in this issue), however this does not fit my case since I am quoting the query.
Information about the environment:
InfluxDB version: 1.8.0
InfluxDB-python version: 5.3.1
Python version: 3.6.8
Operating system version: CentOS 7
Any ideas?
There are two things you need to check for the authentication issue:
1. The HTTPS configuration, with the given private key and certificate (Link).
2. The user credentials passed for the InfluxDB connection (check the case sensitivity as well).
I have used Influx, and these are the key configuration points that lead to authentication issues.
Using the CLI, you need to grant the user permission on the given database:
USE <your-database>
GRANT ALL ON <your-database> TO <username>
Reference: Grant Permission To User
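Separately, the 403 text ("requires READ on True") hints that the database name the client sends is not what you expect, so it is worth setting it explicitly. A minimal sketch with placeholder connection details; influxdb-python supports all three ways shown:

from influxdb import InfluxDBClient

# Placeholder connection details; the database is set explicitly here.
influx_client = InfluxDBClient(host="localhost", port=8086,
                               username="myuser", password="secret",
                               database="mydatabase")
# Alternatively, select the database after connecting...
influx_client.switch_database("mydatabase")
# ...or pass it per query; this overrides the client default.
result = influx_client.query('SELECT * FROM "measurements" LIMIT 10;',
                             database="mydatabase")
print(list(result.get_points()))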

How do I establish a connection to an OpenEdge database?

I'm trying to connect to the OpenEdge database so that I can perform queries on it. However, I'm having trouble figuring out how to connect to it to make these queries.
So far I've tried making a config file, but I'm unsure how to establish a connection to the server using it:
"HMMv10": {
"dbConfig": {
"connectionString": "DRIVER={Progress OpenEdge 11.3 Driver}",
"UID": "SYSPROGRESS",
"pwd": "***",
"host": "host.local",
"port": "18210",
"db": "hmm10"
}
}
I don't know what that config file goes with or how it works, but before trying to set up 3rd-party software to connect to an OpenEdge database it is a really good idea to use the provided OE tools to verify that you have a properly configured connection available to connect to.
Progress provides a command-line tool called "sqlexp" that you can use to test connections. The easiest way to access that tool is via a "proenv" command shell. On Windows just open "proenv" (it is installed with the other Progress commands in the "Progress" program group). If you have a Linux or UNIX install, run $DLC/bin/proenv.
Then start sqlexp like this (I used my local dbname and port number):
proenv> sqlexp -db s2k -H localhost -S 9500 -user sysprogress -password sysprogress
OpenEdge Release 11.7.2 as of Tue Oct 24 18:20:59 EDT 2017
Connecting user "sysprogress" to URL "jdbc:datadirect:openedge://localhost:9500;databaseName=s2k"... (8920)
SQLExplorer>
If there are no errors and you get the SQLExplorer> prompt that is sufficient to show that there is a working connection available.
If you would like to go the extra mile and prove that you have been granted permission to fetch data try a simple select statement:
SQLExplorer> select count(*) from pub.customer;
count(*)
--------------------
1117
SQLExplorer> quit;
proenv>
If this does not work then the local DBA has not configured SQL access, or you have incorrect configuration/credentials and no amount of working with your other tool will fix that.
OpenEdge SQL notes:
If you are not already aware: all OpenEdge data is variable length. Many SQL tools expect the width of a field to be known and use metaschema data to get a default value. If the local DBA has not been maintaining that data via "dbtool" then you are likely to see errors due to data being wider than expected.
sqlexp expects commands to end with ";".
The "pub" schema is the schema where OpenEdge application tables live. Table names need to have "pub." prepended.
Table and field names that contain "-" will need to be quoted, e.g. select "cust-num" from pub.customer;
I'm putting up what I found to be the answer just in case anyone else is looking for this. Using a REST API in NodeJS, I used npm-odbc and the OpenEdge ODBC drivers to connect to the database. Even though the npm-odbc package only mentions Unix, it does in fact work on Windows.
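For comparison, the same ODBC route works from Python via pyodbc. This is a sketch built from the connection details in the question; the connection-string keywords (HOST, PORT, DB) are assumptions that vary by driver version, so check the Progress ODBC driver documentation if the connect fails:

import pyodbc

# Keyword names are assumed; verify against your driver's documentation.
conn_str = (
    "DRIVER={Progress OpenEdge 11.3 Driver};"
    "HOST=host.local;PORT=18210;DB=hmm10;"
    "UID=SYSPROGRESS;PWD=***"
)
conn = pyodbc.connect(conn_str)
cur = conn.cursor()
# Application tables live in the "pub" schema.
cur.execute("select count(*) from pub.customer")
print(cur.fetchone()[0])
conn.close()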

One row test insertion to SQL Server RDS works but full load times out

I have a Glue job script that does this (not showing imports and setup here) and it inserts the row into SQL Server RDS just fine:
columns = ['test']
vals = [("test",)]  # one-element tuple; a bare ("test") is just a string
df = sqlContext.createDataFrame(vals, columns)
test = DynamicFrame.fromDF(df, glueContext, "test")
datasink = glueContext.write_dynamic_frame.from_catalog(
    frame=test, database="database-name", table_name="table-name")
job.commit()
When I run with this same connection but a larger test load (it ends up being about 100 rows), I get this error:
An error occurred while calling o596.pyWriteDynamicFrame. The TCP/IP connection to the host , port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall
The thing is that I know there's no firewall or security group issue since one row inserts just fine. I've tried adding a loginTimeout parameter to the JDBC connection like so:
jdbc:sqlserver://<host>:<port>;databaseName=dbName;loginTimeout=600;
As it indicates you can do here. But the connection fails in Glue when I do that, and succeeds when I remove the loginTimeout parameter.
I've also checked the remote timeout configuration on my SQL Server instance and it shows as 600 seconds which is longer than any of my failed jobs so it couldn't be that.
How can I get around this connection timeout error? It seems to be a limitation built into Glue.
In order to make a JDBC connection with Glue you need to follow the steps in this documentation: https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
We had done that, but it turned out that our self-referencing security group wasn't actually self-referencing. Once we changed that, the error was resolved.
I also had to create the connection as an Amazon RDS connection and not as a JDBC connection, even though it's doing the same thing under the hood.
Even after doing all that I still had issues. It turns out that you need to add the SQL connection to the job itself, outside of the script. If you hit "Edit Job" you'll see a list of SQL connections there. If the connection you're trying to hit isn't on the list of required connections, you will always time out.
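To illustrate what "self-referencing" means here (a sketch with a placeholder group ID, not taken from the question): the security group must allow inbound traffic from itself. With boto3 that looks roughly like:

import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # placeholder: the group shared by Glue and RDS

# Add an inbound rule allowing all traffic from the group to itself,
# which is the self-referencing rule the Glue VPC docs require.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }],
)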

MemSQL select query 1705 error

I am using the MemSQL community version and I am able to execute some queries, but I get an error for a few queries, such as:
select * from policy;
Error code:
ERROR 1705 (HY000): The query cannot be executed.
MemSQL code generation has failed: Failed to delete .fin file.
This likely indicates a permissions problem or some other issue with your installation ([install dir]/plancache may not be writable by memsqld for example). Did you install and start MemSQL via Ops?
There will probably be a more detailed error message inside of
[install dir]/tracelogs/memsql.log
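To test the permissions theory quickly, a small sketch (the plancache path is a placeholder; point it at your actual [install dir]/plancache and run it as the user memsqld runs as):

import os
import stat

# Placeholder path; substitute your actual install directory.
plancache = "/var/lib/memsql/plancache"

st = os.stat(plancache)
print("owner uid:", st.st_uid, "mode:", stat.filemode(st.st_mode))
# Is the directory writable by the current user?
print("writable:", os.access(plancache, os.W_OK))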
