Py2neo Connection Refused Only with Cypher Execute - python-3.x

I'm running py2neo 2.0.4 against a remote Neo4j 2.1.6 database. I can connect to the database with some commands, but not with all of them.
Using the same connection URI in both cases:
This works fine.
test = self.graph_db.find_one('Node')
This does not.
test = self.graph_db.cypher.execute('MATCH (n) RETURN n LIMIT 1')
Regardless of the actual contents of the query, I get the same connection refused results.

With the help of my service provider for Neo4j, we were able to determine the error and a fix.
This is a known flaw in pre-2.2 Neo4j. To resolve this error, use the py2neo rewrite function.
py2neo.rewrite(('http', '0.0.0.0', 7474), ('https', {host}, {port}))
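A minimal sketch of how that call fits into the connection setup, assuming py2neo 2.x and a placeholder hostname and port (mydb.example.com, 7473) standing in for the values your provider gives you:
import py2neo
from py2neo import Graph

# Map the internal address the server advertises (http://0.0.0.0:7474) to the
# public HTTPS endpoint. rewrite() must be called before the Graph is created.
py2neo.rewrite(('http', '0.0.0.0', 7474), ('https', 'mydb.example.com', 7473))

graph_db = Graph('https://mydb.example.com:7473/db/data/')
test = graph_db.cypher.execute('MATCH (n) RETURN n LIMIT 1')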

Related

How to connect to Cassandra Database using Python code

I followed the steps given in https://docs.datastax.com/en/developer/python-driver/3.25/getting_started/ to connect to the Cassandra database using Python code, but after running the code snippet I am still getting
NoHostAvailable: ('Unable to connect to any servers', {'host:port': OperationTimedOut('errors=None, last_host=None')})
Python versions 2.7 and 3 (classpath is set for both Python versions)
Java 1.8 (classpath has been set)
Apache Cassandra 3.11.6 (Apache home classpath has been set)
I tend to use a very simple app to test connectivity to a Cassandra cluster:
from cassandra.cluster import Cluster

cluster = Cluster(['10.1.2.3'], port=45678)
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
if row:
    print(row[0])
Then run it:
$ python HelloCassandra.py
4.0.6
In your comment you mentioned that you're getting OperationTimedOut which indicates that the driver never got a response back from the node within the client timeout period. This usually means (a) you're connecting to the wrong IP, (b) you're connecting to the wrong CQL port, or (c) there's a network connectivity issue between your app and the cluster.
Make sure that you're using the IP address that you've set in rpc_address in cassandra.yaml. Also make sure that the node is listening for CQL clients on the right port. You can easily verify this by checking the output of a Linux utility like netstat or lsof, for example:
$ sudo lsof -nPi -sTCP:LISTEN
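From the client side, a quick way to confirm the node is reachable on the CQL port (using the example IP above and, as an assumption, the default CQL port 9042) is:
$ nc -vz 10.1.2.3 9042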
Cheers!
So that error message suggests that the host/port combination either does not have Cassandra running on it or is under heavy load and unable to respond.
Can you edit your question to include the Cassandra connection portion of your code, as well as maybe how you're calling it? I have a test script which I use (and you're welcome to check it out), and here is the connection portion:
import sys

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

protocol = 4
hostname = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]

nodes = []
nodes.append(hostname)

auth_provider = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(nodes, auth_provider=auth_provider, protocol_version=protocol)
session = cluster.connect()
I call it like this:
$ python3 testCassandra.py 127.0.0.1 aaron notReallyMyPassword
local
One thing you might also try would be to run a nodetool status on the cluster just to make sure it's running OK.
Edit
local variable 'session' referenced before assignment
So this sounds to me like you're attempting a session.execute before session = cluster.connect(). Have a look at my Git repo (linked above) to see the correct order for instantiating session.
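In other words, connect first and only then execute; a minimal sketch of the order (reusing the hypothetical address and port from the test app above):
from cassandra.cluster import Cluster

cluster = Cluster(['10.1.2.3'], port=45678)
session = cluster.connect()   # session must exist before it is used

rows = session.execute("SELECT release_version FROM system.local")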
I am not using default port
In that case, make sure the port is being set in the cluster definition. Ex:
port = 19099
cluster = Cluster(nodes, auth_provider=auth_provider, port=port)

Connecting to Azure PostgreSQL server from python psycopg2 client

I have trouble connecting to the Azure Postgres database from Python. I am following the guide here - https://learn.microsoft.com/cs-cz/azure/postgresql/connect-python
I have basically the same code for setting up the connection.
But the psycopg2 and SQLalchemy throw me the same error:
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I am able to connect to the instance with other client tools like DBeaver, but from Python it does not work.
When I investigate the Postgres logs I can see that the server actually authorized the connection, but the next line says
could not receive data from client: An existing connection was forcibly closed by the remote host.
Python is 3.7
psycopg's version is 2.8.5
Azure Postgres region is in West Europe
Does someone have any suggestions on what I should try to make it work?
Thank you!
EDIT:
The issue resolved itself. I tried the same setup a few days later and it started working. Might have been something wrong with the Azure West Europe region.
I had this issue too. I think I read somewhere (I forget where) that Azure has an issue with the @ you have to use for the username (user@serverName).
I created variables and an f-string and then it worked OK.
import sqlalchemy

username = 'user@server_name'   # Azure single server expects user@servername
password = 'PassWord!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'
conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
After that:
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)
conn = engine.connect()
Test it with a simple SQL statement.
sql = 'SELECT * FROM public.some_table;'
results = conn.engine.execute(sql)
This was a connection in UK South. Before that it did complain about the format of the username having to use @, although the username was correct, as tested from the command line with psql and another SQL client.
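For reference, that kind of command-line check (using the same hypothetical server, user, and database names as in the snippet above) looks something like:
$ psql "host=server_name.postgres.database.azure.com port=5432 dbname=your_database user=user@server_name sslmode=require"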

Routing issue in neo4j 4.0 with multiple databases

I have created a Neo4j and GraphQL application with Neo4j 4.0. In my application I use two Neo4j databases. These instances run in a Docker container on my PC. But when I try to run a query using the GraphQL playground, the GraphQL server gives the following error:
"Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1592037819743, routers=[], readers=[], writers=[]]"
I created the neo4j driver and session instances as follows:
const driver = neo4j.driver(
  process.env.NEO4J_URI || "neo4j://localhost:7687",
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  )
);

const session = driver.session({
  database: 'mydb',
});
I couldn't find any way to fix this issue. Can someone help me fix this? Thank you.
If you use a single server, use bolt:// as the protocol (e.g. bolt://localhost:7687 instead of neo4j://localhost:7687). Then the driver will not ask the server for routing tables.
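Applied to the driver setup above, a minimal sketch (keeping the same environment variables) would be:
const driver = neo4j.driver(
  // bolt:// connects directly to this single server and skips routing discovery
  process.env.NEO4J_URI || "bolt://localhost:7687",
  neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASSWORD)
);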

One row test insertion to SQL Server RDS works but full load times out

I have a Glue job script that does this (not showing imports and setup here) and it inserts the row into SQL Server RDS just fine:
columns = ['test']
vals = [("test",)]  # note: one-element tuple, not a bare string
df = sqlContext.createDataFrame(vals, columns)
test = DynamicFrame.fromDF(df, glueContext, "test")
datasink = glueContext.write_dynamic_frame.from_catalog(frame = test,
    database = "database-name", table_name = "table-name")
job.commit()
When I run with this same connection but for a larger test load (ends up being about 100 rows) I get this error:
An error occurred while calling o596.pyWriteDynamicFrame. The TCP/IP connection to the host , port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall
The thing is that I know there's no firewall or security group issue since one row inserts just fine. I've tried adding a loginTimeout parameter to the JDBC connection like so:
jdbc:sqlserver://<host>:<port>;databaseName=dbName;loginTimeout=600;
As it indicates you can do so here. But the connection fails with Glue when I do that but succeeds when I remove the loginTimeout parameter.
I've also checked the remote timeout configuration on my SQL Server instance and it shows as 600 seconds which is longer than any of my failed jobs so it couldn't be that.
How can I get around this connection timeout error? It seems to be a limitation built into Glue.
In order to do a JDBC connection with Glue you need to follow the steps in this documentation: https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
We had done that, but it turns out that our self-referencing security group wasn't actually self-referencing. Once we changed that, it got resolved.
I also had to create the connection as an Amazon RDS connection and not as a JDBC connection even though it's doing the same thing under the hood.
Even after doing all that I still had issues. It turns out that you need to add the SQL connection specifically to the job, outside of the script. If you hit "Edit Job" you'll see a list of SQL connections there. If the connection you're trying to hit isn't on the list of required connections, you will always time out.

Issue in connecting to MongoDB

I am using mongoose version 4.13.6 and MongoDB from Compose, and below is my code for connecting to the Mongo database.
mongoose.createConnection('mongodb://[user]:[pass]@[host1]:[port1],[host2]:[port2]/dbname?ssl=true', {});
But when I run this am getting error,
MongoError: no primary found in replicaset
I don't know why that is. Can anyone help me with this?
So the short answer is this:
... all drivers are not equal and some make assumptions when multiple hosts are specified. For example, the Meteor/Node.js MongoDB driver sees two hosts and assumes it is talking to a replicaset. Upon connecting the driver asks which host is master and then errors out because neither of them are. The simple fix for this is to use one host in the URI ..
https://www.compose.com/articles/connecting-to-the-new-mongodb-at-compose/#drivingtoyourfirstdatabase
So when you create a connection, simply use one of the connection URIs for the database you want to connect to like:
var uri = "mongodb://<username>:<password>@[host]:[port]/<db_name>?ssl=true";
mongoose.createConnection(uri);
