Error: 14 UNAVAILABLE: No connection established when connecting to gRPC Service - node.js

I deployed a gRPC service to Cloud Run and was given a service url of the form:
https://customer-service-random-string-ue.a.run.app
I now attempted to connect to that service from localhost using the following code:
let client = new customer_proto.Customer("dns:customer-service-random-string-ue.a.run.app:50051", grpc.credentials.createInsecure());
but all I keep getting back is the following error:
Error: 14 UNAVAILABLE: No connection established
What can I do differently to make this work?
Thank you

We don't know the implementation of customer_proto.Customer so it's not possible to provide a definitive answer.
It's unlikely that the method expects the endpoint to be prefixed with dns:. I would expect it to want the Cloud Run service endpoint and, because gRPC has no default ports (and, as mentioned, Cloud Run maps services to :443), an explicit :443 too, i.e.
customer-service-random-string-ue.a.run.app:443
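For illustration, a minimal sketch of that client construction under these assumptions (no dns: prefix, an explicit :443, and TLS credentials, since Cloud Run run.app endpoints only accept TLS, so createInsecure() would not connect there):
let client = new customer_proto.Customer(
    "customer-service-random-string-ue.a.run.app:443",
    grpc.credentials.createSsl() // Cloud Run terminates TLS; plaintext connections are not accepted
);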
A good tool that I use to test gRPC services is gRPCurl. Postman, which is common for REST, now supports gRPC too.
If your service supports reflection, you can run:
grpcurl customer-service-random-string-ue.a.run.app:443 list
Otherwise, you'll need to reference the service's proto file(s), the service and a method (possibly with data) to test:
grpcurl \
-proto {your-proto} \
customer-service-random-string-ue.a.run.app:443 \
{package}.{Service}/{MethodName}
You can confirm that the host is defined and responds with:
nslookup customer-service-random-string-ue.a.run.app
telnet customer-service-random-string-ue.a.run.app 443

Related

Does js-ipfs have a readonly gateway server?

When I start my local ipfs node with ipfs daemon, in the cmd I get this:
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
With this, I can say 127.0.0.1:8080/ipfs/CID and read files from IPFS.
In my Node.js app, when I run ipfs.create(), I get logs in the console about swarms, but nothing about a readonly gateway server. I have found out that the ipfs.create() function has a Gateway option that is set by default to /ip4/127.0.0.1/tcp/9090. But when I run my node, keep my app running, and try to retrieve something with 127.0.0.1:9090/ipfs/CID, I get an ERR_CONNECTION_REFUSED. Why is that? While the app was running, I scanned my ports and nothing was attached to 9090.
I have found the answer. Yes, js-ipfs has a readonly gateway server, but it doesn't start implicitly together with the node; you have to use the ipfs-http-gateway package. The package doesn't really have good instructions, but here is how you do it: you import the HttpGateway class from the package, pass your ipfs instance to its constructor, and then call .start() on the HttpGateway instance. The .start() call takes the config options from your ipfs instance, looks up the Addresses -> Gateway option (which defaults to /ip4/127.0.0.1/tcp/9090), and starts the gateway on that port. You can read the code from the package where the HttpGateway class is written, and you'll figure it all out.
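As a rough sketch of those steps (the exact export name and import style are assumptions and may differ between ipfs-http-gateway versions):
const { create } = require('ipfs')
const { HttpGateway } = require('ipfs-http-gateway')

async function startNodeWithGateway () {
  const ipfs = await create()           // starts the node, but no readonly gateway by itself
  const gateway = new HttpGateway(ipfs) // pass the ipfs instance to the constructor
  await gateway.start()                 // reads Addresses.Gateway (defaults to /ip4/127.0.0.1/tcp/9090)
  return { ipfs, gateway }
}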

Wrong connection port despite Kubernetes deployments/services ports specified

It might take a while to explain what I'm trying to do but bear with me please.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing but this was the only way to access the deployment without using ingress on minikube)
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215 which is the questo-server-service external IP address (as you can see in the first picture)
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app - this part is fine, confirming the node app is working as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
so it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I'm assigning to the config in the ConfigMap and which is then used in the questo-server-deployment (job).
Also, when I log:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting the following results:
This indicates that the app (node.js) tried to connect to the db (dynamodb), but on the wrong port: 443 instead of 8000?
The DB_DOCKER_URL should contain the full address (with port) to the questo-dynamodb-service
What am I doing wrong here?
Edit ----
I've explicitly assigned the port 8000 to the DB_DOCKER_URL as suggested in the answer but now I'm getting the following error:
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using https?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
Otherwise it may default to 443.
Answering my own question in case anyone else has the equally brilliant idea of running a local dynamodb in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
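For context, a hypothetical sketch of how the Node.js app might consume these values with the AWS SDK (the actual client setup in questo-server isn't shown in the question, so the names here are assumptions). Without an explicit protocol and port in the endpoint, the SDK would fall back to https on port 443, which matches the behaviour seen in the logs above:
const { DynamoDB } = require('aws-sdk');

const db = new DynamoDB({
  endpoint: process.env.DB_DOCKER_URL,   // e.g. http://questo-dynamodb-service:8000
  region: process.env.DB_REGION,         // "local" here; must match the region the table was created with
  accessKeyId: process.env.DB_ACCESS_KEY,
  secretAccessKey: process.env.DB_SECRET_ACCESS_KEY,
});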
As a side note:
Also, when you are running scripts to create a dynamodb table in your amazon/dynamodb-local container, make sure you use the same region both for creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
--cli-input-json file://questo_db_definition.json \
--endpoint-url http://questo-dynamodb-service:8000 \
--region local
And the same region when querying the data.
Even though this is just a local copy, where you can type anything you want as the value of AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and even AWS_REGION, the region still has to match.
If you query the db with a different region than the one it was created with, you get the Cannot do operations on a non-existent table error.

Could not get connection while getPartitionedTopicMetadata - io.netty.channel.ConnectTimeoutException: connection timed out

I have a basic Pulsar app, and when I try to connect to Pulsar, I get this exception:
2021-03-10 14:38:26.107 WARN 7 --- [r-client-io-1-1] o.a.pulsar.client.impl.ConnectionPool : Failed to open connection to my-pulsar-server-ms-tls.domain.com:6651 : io.netty.channel.ConnectTimeoutException: connection timed out: my-pulsar-server-ms-tls.domain.com/10.80.13.38:6651
2021-03-10 14:38:26.212 WARN 7 --- [al-listener-3-1] o.a.pulsar.client.impl.PulsarClientImpl : [topic: persistent://myTenant/myNamespace/myTopic] Could not get connection while getPartitionedTopicMetadata -- Will try again in 100 ms
My Pulsar client is pretty basic:
PulsarClient.builder()
.serviceUrl(serviceUrl)
.authentication(AuthenticationFactory.token(authToken))
.tlsTrustCertsFilePath(serverCertificateFilePath.toString())
.enableTlsHostnameVerification(false)
.allowTlsInsecureConnection(false)
.build();
The producer is also pretty basic and looks like this:
pulsarClient.newProducer(Schema.STRING)
.topic(topic)
.create();
I've verified that the token and TLS cert are correct. I've also tried connecting a consumer from this same environment and got a similar exception, and I know that others with the same code are able to connect to the same Pulsar cluster from other environments. What is the issue?
Your connection is getting blocked by a firewall or network issue.
Verify that you can establish a connection to your endpoint my-pulsar-server-ms-tls.domain.com:6651 from your environment.
If you're able to run a network packet dump (like tcpdump), that should make it obvious if you're not able to establish a connection.
You can also try running curl my-pulsar-server-ms-tls.domain.com:6651, and if you get back some html, that means you were able to reach the server. However, if you get Could not resolve host, then you were blocked by the network configuration (such as a missing route) or firewall.

Read/Read-Write URIs for Amazon Web Services RDS

I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written using Flask.
The HAProxy.cfg file has the following configuration for the DB:
listen mysql
bind 127.0.0.1:3306
mode tcp
balance roundrobin
option mysql-check user haproxy_check
option log-health-checks
server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLALCHEMY and it's URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
but when I run the API in my test environment, the endpoints that just read from the DB execute just fine, while the endpoints that write to the DB give me errors, mostly:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use 2 URLs in this scenario, one for read-only operations and one for writes.
How does this work with Flask and SQLALCHEMY with HAProxy?
How do I tell my APP to use one URL for write operations and other HAProxy URL to read-only operations?
I didn't find any help from the documentation of SQLAlchemy.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds:
db.create_all(bind='read') # from read only
db.create_all(bind='master') # from master

Round Robin for gRPC (nodejs) on kubernetes with headless service

I have 3 nodejs grpc server pods and a headless kubernetes service for the grpc service (it returns all 3 pod ips via dns, tested with getent hosts from within the pod). However, all grpc client requests always end up at a single server.
According to https://stackoverflow.com/a/39756233/2952128 (last paragraph), round robin per call should be possible as of Q1 2017. I am using grpc 1.1.2.
I tried to pass {"loadBalancingPolicy": "round-robin"} as options for new Client(address, credentials, options) and to use dns:///service:port as the address. If I understand the documentation/code correctly, this should be handed down to the C core and use the newly implemented round-robin channel creation. (https://github.com/grpc/grpc/blob/master/doc/service_config.md)
Is this how round-robin load balancer is supposed to work now? Is it already released with grpc 1.1.2?
After diving deep into the gRPC C core code and the nodejs adapter, I found that it works by using the option key "grpc.lb_policy_name". Therefore, constructing the gRPC client with
new Client(address, credentials, {"grpc.lb_policy_name": "round_robin"})
works.
Note that in my original question I also used round-robin instead of the correct round_robin.
I am still not completely sure how to set the serviceConfig from the service side with nodejs instead of using a client (channel) option override.
I'm not sure if this helps, but this discussion shows how to implement load balancing strategies via grpc.service_config.
const options = {
  'grpc.ssl_target_name_override': ...,
  'grpc.lb_policy_name': 'round_robin', // <--- has no effect in grpc-js
  'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }), // <--- but this still works
};
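For completeness, a hedged sketch of how those options might be passed when constructing a grpc-js client against the headless service (the target address and the generated Client class are placeholders, not taken from the original setup):
const grpc = require('@grpc/grpc-js');

// Client is a service client generated from your proto definitions.
const client = new Client(
  'dns:///my-grpc-service:50051', // headless service; DNS resolves to all pod IPs
  grpc.credentials.createInsecure(),
  { 'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }) }
);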
