I'm trying to integrate my service with AWS Cassandra (Keyspaces) with the following config:
cassandra:
  default:
    advanced:
      ssl: true
      ssl-engine-factory: DefaultSslEngineFactory
      metadata:
        schema:
          enabled: false
      auth-provider:
        class: PlainTextAuthProvider
        username: "XXXXXX"
        password: "XXXXXX"
    basic:
      contact-points:
        - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
      load-balancing-policy:
        local-datacenter: "${CASSANDRA_DATA_CENTER}:datacenter1"
      session-keyspace: "keyspace"
Whenever I run the service, it fails to load with the following error:
Message: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142, hostId=null, hashCode=7296b27b): [com.datastax.oss.driver.api.core.DriverTimeoutException: [s0|control|id: 0x1f1c50a1, L:/172.17.0.3:54802 - R:cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142] Protocol initialization request, step 1 (OPTIONS): timed out after 5000 ms]
There's very little documentation about the cassandra-micronaut library, so I'm not sure what I'm doing wrong here.
UPDATE:
For clarity: the values of our environment variables are as follows:
export CASSANDRA_HOST=cassandra.eu-west-1.amazonaws.com
export CASSANDRA_PORT=9142
export CASSANDRA_DATA_CENTER=eu-west-1
Note that even when I hard-coded the values into my application.yml, the problem persisted.
I think you need to adjust your variables in this example. The common syntax for Apache Cassandra or Amazon Keyspaces is host:port. For Amazon Keyspaces the port is always 9142.
Try the following:
contact-points:
  - ${CASSANDRA_HOST}:${CASSANDRA_PORT}
or simply hard-code them at first:
contact-points:
  - cassandra.eu-west-1.amazonaws.com:9142
So this:
contact-points:
  - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
Doesn't match up with this:
Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142,
Double-check which IP(s) and port Cassandra is broadcasting on (usually seen with nodetool status) and adjust the service so it does not look for it on 127.0.0.1.
Using DigitalOcean's managed Postgres database cluster and App Platform, I want to connect my NodeJS app to my Postgres database.
At the moment, I'm getting the time out error below. As part of debugging the misconfiguration, I want to verify that I'm using the bindable app environment variables correctly.
Here are some basic details. Using this information, can you help me to understand how to construct the bindable variable?
Database Cluster Name: demo
Database Pool Name: Demo
Example database Name: defaultdb
App Platform > App Name: demo1
I've tried an assortment of combinations, as shown below. When I run echo $MY_VAR, none of these values are interpolated; I just see the raw bindable-variable syntax.
# Test using database cluster name only.
apps#client:~$ echo $TEST1
${demo.HOSTNAME}
# Test using _self, or the current context. I believe this is the app context.
apps#client:~$ echo $TEST2
${_self.HOSTNAME}
# Test using syntax of <db_cluster_name>.<db_pool_name>.<env_var>
apps#client:~$ echo $TEST3
${demo.Demo.HOSTNAME}
# Test using syntax of <app_name>.<db_pool_name>.<env_var>
apps#client:~$ echo $TEST4
${demo1.Demo.HOSTNAME}
Most of these are a bit ridiculous, and at this point, I'm just experimenting with various combinations. Can you please help me to understand the syntax that would output the database hostname when I run echo $DB_HOSTNAME from the Digital Ocean app console? Thank you.
Why am I trying to confirm the bindable variable syntax?
I'd like to use the app environment variables for databases. Because of the connection timeout, I believe the issue might be related to the CA certificate not being available as part of the Postgres connection details. Since my syntax isn't resolving, the CA cert isn't available when my code connects to Postgres.
The bindable variables should be populated in Node's environment variables and available through process.env.
# Sample postgres configuration
const postgresConfig = {
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  port: process.env.PG_PORT,
  ssl: {
    require: true,
    rejectUnauthorized: true,
    ca: process.env.PG_CA_CERT,
  },
};
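For reference, once those variables resolve, this config can be passed straight to node-postgres; a minimal sketch (the Pool and the test query below are mine, not part of the original setup):

const { Pool } = require('pg'); // node-postgres

const pool = new Pool(postgresConfig);

// Quick smoke test: if the bindable variables resolved, this returns the server time.
pool.query('SELECT NOW()')
  .then((res) => console.log(res.rows[0]))
  .catch((err) => console.error('Postgres connection failed', err));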
Error message
When attempting to run a Postgres query, I get this error:
Error: connect ETIMEDOUT 10.x.y.z:25061
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
errno: -110,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '10.x.y.x',
port: 25061
}
I see how it works now. When you are managing your App Platform application, under the Create menu button is the option to "Create/Attach Database".
I didn't look here because I didn't think I would need to use the "Create" button to attach an existing database. It would be more intuitive to find this feature under the "Actions" menu. I fell victim to some peculiar UI design decisions.
Once the database is attached, all is well in the world.
It might take a while to explain what I'm trying to do, but please bear with me.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, it's confusing, but this was the only way to access the deployment without using an ingress on minikube).
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215, which is the questo-server-service external IP address (as you can see in the first picture).
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app; this part is fine, confirming the Node app works as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
So it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I assign to the config here (in the ConfigMap) and which is then used here in the questo-server-deployment (job).
Also, when I log:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting the following results:
Which indicates that the app (node.js) tried to connect to the db (dynamodb), but on the wrong port: 443 instead of 8000?
The DB_DOCKER_URL should contain the full address (with port) of the questo-dynamodb-service.
What am I doing wrong here?
Edit ----
I've explicitly assigned port 8000 to the DB_DOCKER_URL as suggested in the answer, but now I'm getting the following error:
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using HTTPS?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
Otherwise it may default to 443.
Answering my own question in case anyone has an equally brilliant idea of running a local dynamodb in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
As a side note:
Also, when you run scripts to create a DynamoDB table in your amazon/dynamodb-local container, make sure you use the same region both for creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
  --cli-input-json file://questo_db_definition.json \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local
and for querying the data.
Even though this is just a local copy, where you can put anything you want as the value of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and actually of AWS_REGION as well, the region has to match.
If you query the db with a region different from the one it was created with, you get the Cannot do operations on a non-existent table error.
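To illustrate the region point, here is roughly how the querying side can be configured in Node against the local container (a sketch using the AWS SDK v2; the key name id is hypothetical, and the env var names mirror the ones shown earlier):

const AWS = require('aws-sdk');

// The endpoint must include the protocol (e.g. http://questo-dynamodb-service:8000),
// and the region must match the one used when the table was created ("local").
const documentClient = new AWS.DynamoDB.DocumentClient({
  endpoint: process.env.DB_DOCKER_URL,
  region: process.env.DB_REGION,
  accessKeyId: process.env.DB_ACCESS_KEY,
  secretAccessKey: process.env.DB_SECRET_ACCESS_KEY,
});

documentClient
  .get({ TableName: process.env.DB_TABLE_NAME, Key: { id: 'example' } })
  .promise()
  .then((data) => console.log(data.Item))
  .catch((err) => console.error(err));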
I've installed Prometheus operator 0.34 (which works as expected) on cluster A (main prom)
Now I want to use the federation option, i.e. collect metrics from another Prometheus which is located in another K8S cluster, B.
Scenario:
In cluster A I have the MAIN Prometheus operator v0.34 config.
In cluster B I have the SLAVE Prometheus 2.13.1 config.
Both were installed successfully via Helm; I can access localhost via port-forwarding and see the scraping results on each cluster.
I did the following steps:
Use additionalScrapeConfigs on the operator (main cluster A).
I added the following to the values.yaml file and updated it via Helm.
additionalScrapeConfigs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - 101.62.201.122:9090 # The External-IP and port from the target prometheus on Cluster B
I got the target as follows:
On the Prometheus inside cluster B (from which I want to collect the data) I ran:
kubectl get svc -n monitoring
And got the following entries:
I took the EXTERNAL-IP and put it inside the additionalScrapeConfigs targets entry.
Now I switch to cluster A and run kubectl port-forward svc/mon-prometheus-operator-prometheus 9090:9090 -n monitoring
Open the browser at localhost:9090, see the graphs, click on Status and then on Targets,
and see the new target with the job federate.
Now to my main questions/gaps (security & verification):
To get that target into a green state (see the pic), I configured the Prometheus server in cluster B to use type: LoadBalancer instead of type: NodePort, which exposes the metrics externally. This can be good for testing, but I need to secure it. How can that be done?
How can the end-to-end setup be made to work in a secure way?
tls
https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config
Inside cluster A (the main cluster) we use certificates for our services with Istio, like the following, which works:
tls:
  mode: SIMPLE
  privateKey: /etc/istio/oss-tls/tls.key
  serverCertificate: /etc/istio/oss-tls/tls.crt
I see that in the docs there is an option to configure:
additionalScrapeConfigs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - 101.62.201.122:9090 # The External-IP and port from the target
    # tls_config:
    #   ca_file: /opt/certificate-authority-data.pem
    #   cert_file: /opt/client-certificate-data.pem
    #   key_file: /sfp4/client-key-data.pem
    #   insecure_skip_verify: true
But I'm not sure which certificate I need to use in the Prometheus operator config: the certificate of the main Prometheus (A) or of the slave (B)?
You should consider using an Additional Scrape Configuration:
AdditionalScrapeConfigs allows specifying a key of a Secret
containing additional Prometheus scrape configurations. Scrape
configurations specified are appended to the configurations generated
by the Prometheus Operator.
I am afraid this is not officially supported. However, you can update your prometheus.yml section within the Helm chart. If you want to learn more about it, check out this blog.
I see two options here:
Connections to Prometheus and its exporters are not encrypted and authenticated by default; this is one way of fixing that, with TLS certificates and stunnel.
Or specify Secrets which you can add to your scrape configuration.
Please let me know if that helped.
A couple of options spring to mind:
Put the two clusters in the same network space and put a firewall in front of them.
VPN tunnel between the clusters.
Use istio multicluster routing (but this could get complicated): https://istio.io/docs/setup/install/multicluster
I am developing a JHipster project with Elasticsearch. As the Using Elasticsearch page describes, I used the spring.data.jest.uri property for production use as follows:
spring:
  ......... # omitted to keep short
  data:
    mongodb:
      uri: mongodb://localhost:27017
      database: PROJ1
    jest:
      uri: http://172.20.100.2:9200
What I want is to give more than one URI for Elasticsearch, because I have set up a 3-node cluster. If one node goes down, another live node should be used. Is such a configuration possible, and if so, how should I do it?
I am running Cassandra on Kubernetes (3 instances) and want to expose it to the outside; my application is not yet in Kubernetes. So I created a load-balanced service like so:
apiVersion: v1
kind: Service
metadata:
  namespace: getquanty
  labels:
    app: cassandra
  name: cassandra
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  clusterIP:
  ports:
  - port: 9042
    name: cql
    nodePort: 30001
  - port: 7000
    name: intra-node
    nodePort: 30002
  - port: 7001
    name: tls-intra-node
    nodePort: 30003
  - port: 7199
    name: jmx
    nodePort: 30004
  selector:
    app: cassandra
  type: LoadBalancer
This is the result:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra 10.55.249.88 GIVEN_IP_GCE_LB 9042:30001/TCP,7000:30002/TCP,7001:30003/TCP,7199:30004/TCP 26m
I am able to connect using cqlsh (cqlsh GIVEN_IP_GCE_LB), but when I try to add data to Cassandra using the DataStax driver for Node, I get this:
message: 'Cannot achieve consistency level SERIAL',
info: 'Represents an error message from the server',
code: 4096,
consistencies: 8,
required: 1,
alive: 0,
coordinator: '35.187.166.68:9042' },
'10.52.4.32:9042': 'Host considered as DOWN',
'10.52.2.15:9042': 'Host considered as DOWN' },
info: 'Represents an error when a query cannot be performed because no host is available or could be reached by the driver.',
message: 'All host(s) tried for query failed. First host tried, 35.187.166.68:9042: ResponseError: Cannot achieve consistency level SERIAL. See innerErrors.' }
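For context, the Node driver is initialised roughly like this (a minimal sketch rather than my exact code; the keyspace and data-center names are placeholders):

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['GIVEN_IP_GCE_LB'], // the LoadBalancer IP from the service above, port 9042 by default
  localDataCenter: 'datacenter1',     // placeholder: must match the cluster's data-center name
  keyspace: 'my_keyspace',            // placeholder keyspace
});

client.connect()
  .then(() => console.log('connected'))
  .catch((err) => console.error(err));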
My first thought was that I needed to expose the other ports too, so I did (intra-node, tls-intra-node, jmx), but I got the same error.
Kubernetes gives you access to a proxy; I tried to proxy from my machine using the constructed URL for the pod, to test whether I have access, but I cannot connect using cqlsh:
http://127.0.0.1:8001/api/v1/namespaces/qq/pods/cassandra-0:cql/proxy
I am out of ideas. The one thing left to try is to expose every instance (make a service for every instance), which is very ugly, but it would let me connect to the nodes from the outside until I migrate the application to Kubernetes.
Does anyone have ideas on how to expose Cassandra nodes to the internet and make the DataStax driver aware of all the nodes? Thank you for your time.
After more reading I found out that the replication strategy was causing the problem. NetworkTopologyStrategy is suitable for multi-datacenter setups, and I have only one, so I changed the replication to SimpleStrategy with the number of nodes I had; now everything works as expected.
EDIT 1:
Putting databases on Kube is not a good solution. I ended up making a standalone cluster, added it to the same network as Kube, and was able to access it from Kube pods.
Kube is made to manage applications and make them 'elastic'; I don't think people really need to scale databases as quickly as applications, and furthermore, scaling a database is not the same operation as scaling a stateless application.
You need to use a headless service for the replication controller you created.
Your service should be something like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
Also, you can refer to the link below to bring up a Cassandra cluster.
https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra
I would recommend running the Cassandra pods via a ReplicationController, StatefulSet, or DaemonSet, because then Kubernetes manages restarting/rescheduling the pods whenever required.