Does the Google Spanner Emulator work with the Google Spanner JDBC Driver? - google-cloud-spanner

I have tried this in DBeaver and DataGrip.
I am running the Google Spanner Emulator (0.8.0) locally:
export SPANNER_EMULATOR_HOST=localhost:9010
Executing: docker run -p 127.0.0.1:9010:9010 -p 127.0.0.1:9020:9020 gcr.io/cloud-spanner-emulator/emulator:0.8.0
[cloud-spanner-emulator] 2020/07/17 22:23:21 gateway.go:135: Cloud Spanner emulator running.
[cloud-spanner-emulator] 2020/07/17 22:23:21 gateway.go:136: REST server listening at 0.0.0.0:9020
[cloud-spanner-emulator] 2020/07/17 22:23:21 gateway.go:137: gRPC server listening at 0.0.0.0:9010
Will this work with the Google Spanner JDBC Driver?
Based on my testing, my guess is: no, this is not currently supported.
I can connect to a GCP instance of Spanner, but not to the emulator. When I try port 9010 or 9020, the connection basically hangs.
My JDBC connection strings are as follows (the project, instance, and database have all been created):
gcloud spanner databases list --project=local-project --instance=local-instance --configuration=spanner-emulator --format json
[
  {
    "name": "projects/local-project/instances/local-instance/databases/myDatabase",
    "state": "READY"
  }
]
# 9010
jdbc:cloudspanner://localhost:9010/projects/local-project/instances/local-instance/databases/myDatabase
# 9020
jdbc:cloudspanner://localhost:9020/projects/local-project/instances/local-instance/databases/myDatabase
# just the host
jdbc:cloudspanner://localhost/projects/local-project/instances/local-instance/databases/myDatabase

The emulator does not use TLS, while the JDBC driver uses TLS by default. You can turn off TLS for the JDBC driver by setting the usePlainText connection property to true. The following connection URL should work:
jdbc:cloudspanner://localhost:9010/projects/local-project/instances/local-instance/databases/myDatabase?usePlainText=true
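For a quick end-to-end check, here is a minimal sketch of opening that connection from Java (assuming the com.google.cloud:google-cloud-spanner-jdbc artifact is on the classpath; the project, instance, and database names are the ones from the question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmulatorSmokeTest {
    public static void main(String[] args) throws Exception {
        // usePlainText=true disables TLS, which the emulator does not support.
        String url = "jdbc:cloudspanner://localhost:9010/projects/local-project"
                + "/instances/local-instance/databases/myDatabase?usePlainText=true";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("SELECT 1 -> " + rs.getLong(1));
            }
        }
    }
}

Note that the JDBC driver speaks gRPC, so it has to point at port 9010; port 9020 is the emulator's REST endpoint, which is likely why attempts against 9020 hang.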

Related

Trying to connect from DataStax Studio to my Astra cluster - Connection test failed

Just got my brand new 6.8 DataStax Astra (Cassandra) and downloaded Studio from https://www.datastax.com/dev/datastax-studio. My Node.js connection works great, but trying to connect from Studio, everything fails with SSL configuration errors:
All host(s) tried for query failed.. (com.datastax.driver.core.exceptions.TransportException:
It looks like the port should be 29080 as per secure-connect .. /config.json.
I used the API User Admin Token for the Client and Secret keys.
Not sure if it's related, but the Python connection fails with:
...cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers'
...Unauthorized('Error from server: code=2100 [Unauthorized] message="No SELECT permission on <table system_virtual_schema.keyspaces>"')
It can't connect to Astra because it's connecting to the wrong CQL port.
The correct port configuration is in the cqlshrc file in the [connection] section. For example:
[connection]
hostname = db-uuid-us-east1.db.astra.datastax.com
port = 39876
ssl = true
This is the correct CQL port to use to connect from Studio or other clients. Cheers!
Cassandra 6.8 Astra is currently using port 29042 by default.
Both the Python connector and Studio work great under Admin User credentials, but not under the Admin API User, so the authorization error message was legit. Node.js works great with Admin API User credentials.
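As an aside, when connecting programmatically you can sidestep the port question entirely: the secure connect bundle downloaded from the Astra console already carries the correct contact point and CQL port. A minimal sketch with the DataStax Java driver 4.x (bundle path, credentials, and keyspace are placeholders):

import java.nio.file.Paths;
import com.datastax.oss.driver.api.core.CqlSession;

public class AstraConnect {
    public static void main(String[] args) {
        // The bundle supplies host, port, and TLS settings, so no port numbers
        // are hard-coded here. The credentials are the Client ID/Secret of a
        // token with sufficient permissions (see the authorization error above).
        try (CqlSession session = CqlSession.builder()
                .withCloudSecureConnectBundle(Paths.get("/path/to/secure-connect-mydb.zip"))
                .withAuthCredentials("clientId", "clientSecret")
                .withKeyspace("my_keyspace")
                .build()) {
            String version = session.execute("SELECT release_version FROM system.local")
                    .one().getString("release_version");
            System.out.println("Connected, release_version = " + version);
        }
    }
}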

Connect Cadence to Azure Cosmos Cassandra API

I am running Cadence against an externally running Cassandra using docker run -e CASSANDRA_SEEDS=10.x.x.x ... ubercadence/server:..., and it is running successfully.
Azure Cosmos DB says any system running on Cassandra can use the Cosmos Cassandra API by modifying the client connection creation code. For example, here is the Go app sample code:
func GetSession(cosmosCassandraContactPoint, cosmosCassandraPort, cosmosCassandraUser, cosmosCassandraPassword string) *gocql.Session {
    clusterConfig := gocql.NewCluster(cosmosCassandraContactPoint)
    port, err := strconv.Atoi(cosmosCassandraPort)
    if err != nil {
        log.Fatalf("invalid port %q: %v", cosmosCassandraPort, err)
    }
    clusterConfig.Port = port
    clusterConfig.Authenticator = gocql.PasswordAuthenticator{Username: cosmosCassandraUser, Password: cosmosCassandraPassword}
    clusterConfig.SslOpts = &gocql.SslOptions{Config: &tls.Config{MinVersion: tls.VersionTLS12}} // Cosmos requires TLS
    clusterConfig.ProtoVersion = 4
    session, err := clusterConfig.CreateSession()
    if err != nil {
        log.Fatalf("failed to create session: %v", err)
    }
    return session
}
From my end, I can connect the external Cassandra's cqlsh (which Cadence uses for persistence) to Azure Cosmos and can create keyspaces and tables in Azure Cosmos DB.
However, when I run the Cadence server, all new tables are still created on the local Cassandra itself (instead of Azure Cosmos); presumably Cadence is connected only to Cassandra.
So there are basically two questions:
1. Since Cadence is written in Go, can we modify the source code to establish the connection to Azure Cosmos DB?
2. Or can we pass the Cosmos Cassandra host, port, username, and password while running Cassandra and Cadence separately (docker run -e CASSANDRA_SEEDS=10.x.x.x ... ubercadence/server:...)?
cosmosCassandraContactPoint: xyz.cassandra.cosmos.azure.com
cosmosCassandraPort: 10350
cosmosCassandraUser: xyz
cosmosCassandraPassword: xyz
I am actively working on supporting other NoSQL DBs: https://github.com/uber/cadence/issues/3514. It will be easier to use Azure Cosmos/AWS Keyspaces after that's done.
Basically, we will just need to customize a small part of the existing Cassandra model.

How to run an HTTP server on the EMR master node of a Spark application

I have a Spark streaming application (Spark 2.4.4) running on AWS EMR 5.28.0. In the driver application on the master node, besides setting up the Spark streaming job, I am also running an HTTP server (Akka HTTP 10.1.6) that can query the driver application for data. I bind to port 6161 like the following:
val bindingFuture: Future[ServerBinding] = Http().bindAndHandle(myapiroutes, "127.0.0.1", 6161)
try {
  bindingFuture.map { serverBinding =>
    log.info(s"AlertRestApi bound to ${serverBinding.localAddress}")
  }
} catch {
  case ex: Exception =>
    log.error(s"Failed to bind to 127.0.0.1:6161")
    system.terminate()
}
then I start spark streaming:
ssc.start()
When I test this with local Spark, I am able to access http://localhost:6161/myapp/v1/data and get data from Spark streaming; everything is good so far.
However, when I run this application on AWS EMR, I cannot access port 6161. I ssh into the driver node and try to curl my URL, and it gives me an error message:
[hadoop@ip-xxx-xx-xx-x ~]$ curl http://xxx.xx.xx.x:6161/myapp/v1/data
curl: (7) Failed to connect to xxx.xx.xx.x port 6161: Connection refused
When I look into the log on the driver node, I do see the port is bound (why the host shows 0:0:0:0:0:0:0:0 I don't know; that is also what it shows in my dev testing, where it works: I see the same log and can access the URL):
20/04/13 16:53:26 INFO MyApp: MyRestApi bound to /0:0:0:0:0:0:0:0:6161
So my question is: what should I do so that I can access the API at port 6161 on the driver node? I realize the YARN resource manager may be involved, but I know too little about it to know where to investigate.
Please help. Thanks.
Are you using 127.0.0.1 as the host name, or 0.0.0.0?
127.0.0.1 will work on your local system but not on AWS, because it is the loopback address and is only reachable from the machine itself. In that case you need to use 0.0.0.0 as the host name instead.
Also make sure the port is open and access is allowed from your IP. To do that, go to the inbound rules for your instance and add port 6161 under a custom TCP rule if that's not done already.
Let me know if this makes any difference.
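The snippet in the question uses the Scala DSL; to make the fix concrete, here is a minimal sketch of the same binding in Akka HTTP's Java DSL, with a hypothetical /health route. The only change that matters is binding to 0.0.0.0 instead of 127.0.0.1:

import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.stream.ActorMaterializer;

public class BindExample extends AllDirectives {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("example");
        ActorMaterializer materializer = ActorMaterializer.create(system);
        BindExample app = new BindExample();
        Http.get(system).bindAndHandle(
                app.createRoute().flow(system, materializer),
                ConnectHttp.toHost("0.0.0.0", 6161), // not 127.0.0.1
                materializer);
    }

    private Route createRoute() {
        return path("health", () -> complete("ok"));
    }
}

Binding to 127.0.0.1 is also why the curl against the node's own non-loopback IP is refused: the socket only listens on the loopback interface.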

Using mLab on Google App Engine through Wiki.js (Connection timeout)

I'm trying to set up a Node.js application using Wiki.js: https://github.com/Requarks/wiki
I just started with Google App Engine and am using their Cloud Shell. I installed Wiki.js using their bash command and ran a command that started a server on port 3000, which I then viewed through Google's shell.
It brings you to Wiki.js' configuration UI, where you insert info including the MongoDB connection string.
I have a DB in mLab set up with a user. I tested it locally to make sure I can connect, but when I try to do this process through the Google App Engine "Web Preview" on their command line, on the step where I add the MongoDB connection string, it times out with the error:
Error: failed to connect to server [secret.mlab.com:secret] on first connect [MongoError: connection 8 to secret.mlab.com:secret timed out]
(I hid the actual address.)
So I'm wondering if I'm missing something with Google App Engine, since this connects successfully on my localhost.

Read/Read-Write URIs for Amazon Web Services RDS

I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written in Flask.
The haproxy.cfg file has the following configuration for the DB:
listen mysql
  bind 127.0.0.1:3306
  mode tcp
  balance roundrobin
  option mysql-check user haproxy_check
  option log-health-checks
  server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
  server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy, and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
But when I run the API in my test environment, the APIs that just read from the DB execute fine, while the APIs that write to the DB mostly give errors like:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use two URLs in this scenario, one for read-only operations and one for writes.
How does this work with Flask and SQLAlchemy with HAProxy?
How do I tell my app to use one URL for write operations and the other for read-only operations?
I didn't find any help in the SQLAlchemy documentation.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to binds, you can then target a specific database:
db.create_all(bind='read')   # uses the 'read' bind (the replica)
db.create_all(bind='master') # uses the 'master' bind
For routing queries rather than DDL, a model can also be pinned to a bind with __bind_key__, or you can fetch a specific engine with db.get_engine(app, bind='read').
