Has anyone tried to configure Grafana Loki using Amazon Keyspaces for the index and AWS S3 for chunk storage?
I keep getting a protocol version mismatch error.
After researching for a few hours I got it running. The issue was host SSL verification, which is set to true by default.
The config below works with Amazon Keyspaces and the Loki Docker image grafana/loki:master-1664a98, using this Cassandra config:
client_configs+: {
  cassandra+: {
    consistency: 'LOCAL_QUORUM',
    port: 9142,
    host_verification: false,
    disable_initial_host_lookup: false,
    SSL: true,
    timeout: "60s",
    connect_timeout: "3m0s"
  }
},
You can use LOCAL_ONE consistency instead, depending on your requirements.
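If you run Loki with a plain YAML config rather than the jsonnet libraries, the same settings live under storage_config. Here is a rough sketch, not taken from the original post: the region, bucket, and keyspace names are illustrative placeholders, and the field names assume Loki's Cassandra/S3 storage options.

```yaml
# Sketch only: equivalent Loki storage_config for Keyspaces + S3.
# Region, bucket, and keyspace names are placeholders.
storage_config:
  aws:
    s3: s3://eu-west-1/my-loki-chunks        # placeholder bucket
  cassandra:
    addresses: cassandra.eu-west-1.amazonaws.com
    port: 9142                               # Keyspaces always uses 9142
    keyspace: loki                           # placeholder keyspace
    consistency: LOCAL_QUORUM
    SSL: true
    host_verification: false                 # the fix described above
    disable_initial_host_lookup: false
    timeout: 60s
    connect_timeout: 3m
```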
Related
I'm trying to integrate my service with AWS Cassandra (Keyspaces) with the following config:
cassandra:
  default:
    advanced:
      ssl: true
      ssl-engine-factory: DefaultSslEngineFactory
      metadata:
        schema:
          enabled: false
      auth-provider:
        class: PlainTextAuthProvider
        username: "XXXXXX"
        password: "XXXXXX"
    basic:
      contact-points:
        - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
      load-balancing-policy:
        local-datacenter: "${CASSANDRA_DATA_CENTER}:datacenter1"
      session-keyspace: "keyspace"
Whenever I'm running the service it fails to load with the following error:
Message: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142, hostId=null, hashCode=7296b27b): [com.datastax.oss.driver.api.core.DriverTimeoutException: [s0|control|id: 0x1f1c50a1, L:/172.17.0.3:54802 - R:cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142] Protocol initialization request, step 1 (OPTIONS): timed out after 5000 ms]
There's very little documentation about the cassandra-micronaut library, so I'm not sure what I'm doing wrong here.
UPDATE:
For clarity, the values of our environment variables are as follows:
export CASSANDRA_HOST=cassandra.eu-west-1.amazonaws.com
export CASSANDRA_PORT=9142
export CASSANDRA_DATA_CENTER=eu-west-1
Note that even when I hard-coded the values into my application.yml, the problem continued.
I think you need to adjust your variables in this example. The common syntax for Apache Cassandra or Amazon Keyspaces is host:port. For Amazon Keyspaces the port is always 9142.
Try the following:
contact-points:
  - ${CASSANDRA_HOST}:${CASSANDRA_PORT}
or simply hard code them at first.
contact-points:
  - cassandra.eu-west-1.amazonaws.com:9142
So this:
contact-points:
- ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
Doesn't match up with this:
Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142,
Double-check which IP(s) and port Cassandra is broadcasting on (usually seen with nodetool status) and adjust the service to not look for it on 127.0.0.1.
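Two other details in the original config look suspect, hedged here as assumptions rather than confirmed fixes: the placeholder defaults are quoted (${CASSANDRA_HOST:"127.0.0.1"}), which may resolve with the quotes included, and the local-datacenter value "${CASSANDRA_DATA_CENTER}:datacenter1" appends a literal :datacenter1 to the resolved variable instead of using it as a fallback. A corrected basic section might look like the following sketch (the default values shown are illustrative):

```yaml
# Sketch: unquoted ${VAR:default} placeholders; defaults are illustrative.
basic:
  contact-points:
    - ${CASSANDRA_HOST:cassandra.eu-west-1.amazonaws.com}:${CASSANDRA_PORT:9142}
  load-balancing-policy:
    local-datacenter: ${CASSANDRA_DATA_CENTER:eu-west-1}
  session-keyspace: "keyspace"
```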
I have a multi-stack application where I want to deploy an RDS in one stack and then in a later stack deploy a Fargate cluster that connects to the RDS.
Here is how the rds gets defined:
this.rdsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});
this.rdsSG.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(5432), 'Ingress 5432');
this.aurora = new rds.ServerlessCluster(this, `rds`, {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  vpc: props.vpc,
  securityGroups: [this.rdsSG],
  // more properties below
});
With that ingress rule everything works: since the RDS and the Fargate service are in the same VPC, they can communicate fine. But it worries me leaving the port open to the world, even though it's inside its own VPC.
const ecsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});
const service = new ecs.FargateService(this, `service`, {
  cluster,
  desiredCount: 1,
  taskDefinition,
  securityGroups: [ecsSG],
  assignPublicIp: true,
});
How can I remove the ingress rule and allow inbound connections to the RDS from that ecsSG since it gets deployed later? If I try to call the following command from the deploy stack, I get a cyclic dependency error:
props.rdsSG.connections.allowFrom(ecsSG, ec2.Port.allTcp(), 'Aurora RDS');
Thanks for your help!
This turned out to be easier than I thought: rather than modifying the RDS security group to accept the ECS security group, you can flip the connection and use allowTo on the ECS side to open a connection to the RDS instance.
ecsSG.connections.allowTo(props.rds, ec2.Port.tcp(5432), 'RDS Instance');
Also, going the other way round, the RDS security group might be better described by the aws_rds module rather than the aws_ec2 module: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_rds/CfnDBSecurityGroup.html (couldn't post a comment due to low rep).
As an additional possibility: what works for me is not defining any security group at all. Just create the service and the database, and connect the two:
const service = new ecsPatterns.ApplicationLoadBalancedEc2Service(this, 'app-service', {
  cluster,
  ...
});
const dbCluster = new ServerlessCluster(this, 'DbCluster', {
  engine: dbEngine,
  ...
});
dbCluster.connections.allowDefaultPortFrom(service.service);
I'm currently trying to connect a Node.js app to a single database that I created on Azure SQL Database, connecting through Sequelize. I set up the firewall to accept my IP address as explained here, and configured a config.json file like so:
"username": "SERVER_ADMIN_NAME#MY_IP_ADDRESS",
"password": "ADMIN_PASSWORD",
"database": "DATABASE_NAME",
"host": "SERVER_NAME",
"port": 1433,
"dialect": "mssql",
"dialectOptions": {
"options": {
"encrypt": true
}
}
However, after running the application it fails to connect to the database and returns the following message
"Cannot open server '211' requested by the login. Client with IP address 'MY_IP_ADDRESS' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."
I've already waited more than five minutes, but the result is still the same. The first thing that came to mind was that I might have provided the wrong values in config.json. However, after checking sys.database_firewall_rules with the following query:
SELECT * FROM sys.database_firewall_rules;
The table was EMPTY. From here on I'm not really sure what I'm supposed to do. I was wondering if anybody could point out what I was missing? Thanks in advance!
You should not connect to Azure SQL Database using the IP address, because it can change at any time.
Could you try a connection like the one below, using the tedious driver?
var Sql = require('sequelize');
var sql = new Sql('dbname', 'UserName@server', 'password', {
  host: 'server.database.windows.net',
  dialect: 'mssql',
  driver: 'tedious',
  options: {
    encrypt: true,
    database: 'dbname'
  },
  port: 1433,
  pool: {
    max: 5,
    min: 0,
    idle: 10000
  }
});
Make sure you are adding your public IP address, not your local IP address, to the firewall rules. To verify the rules you have already added, run the following query:
SELECT * FROM sys.firewall_rules;
The query above shows rules at the server level; that is where you created your rules, which is why sys.database_firewall_rules came back empty.
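The error message itself points at the fix: run sp_set_firewall_rule against the master database. A hedged sketch follows; the rule name and IP range are placeholders, and the start/end addresses are the same because it allows a single public IP.

```sql
-- Run against the master database; rule name and IPs are placeholders.
EXECUTE sp_set_firewall_rule
    @name = N'AllowMyPublicIp',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';
```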
When using the elasticsearch client (from the elasticsearch npm package, version 15.4.1), the AWS Elasticsearch service complains about an Invalid Host Header. This happens for every request, even though the requests succeed.
I double-checked the configuration for initializing the elasticsearch client, and the host parameter is correctly formed.
let test = require('elasticsearch').Client({
  host: 'search-xxx.us-west-1.es.amazonaws.com',
  connectionClass: require('http-aws-es')
});
I expected to get a clean ElasticsearchRequest without a corresponding InvalidHostHeaderRequests (I can see these logs on the Cluster health dashboard of the Amazon Elasticsearch Service).
Found the problem.
When using the elasticsearch library to connect to an AWS ES cluster, the previous syntax can lead to problems, so it is best to initialize the client by specifying the entire host object, as follows:
host: {
  protocol: 'https',
  host: 'search-xxx.us-west-1.es.amazonaws.com',
  port: '443',
  path: '/'
}
The problem is probably that the AWS ES cluster expects the host field inside a host object, and passing a bare hostname string leads to the "Invalid Host Header" issue. Hope this helps the community write better code.
Refer to https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/16.x/host-reference.html for reference.
Jhipster Console
I have tried copying the MetricsConfiguration, LoggingConfiguration and JhipsterProperties among other files along with their dependencies.
I am at a complete loss; any ideas or insight would be appreciated.
jhipster:
  security:
    rememberMe:
      # security key (this key should be unique for your application, and kept secret)
      key: #placeholder
  mail: # specific JHipster mail property, for standard properties see MailProperties
    from: jhipster@localhost
    baseUrl: http://127.0.0.1:8080
  metrics: # DropWizard Metrics configuration, used by MetricsConfiguration
    jmx.enabled: true
    graphite:
      enabled: false
      host: localhost
      port: 2003
      prefix: jhipster
    prometheus:
      enabled: false
      endpoint: /prometheusMetrics
    logs: # Reports Dropwizard metrics in the logs
      enabled: true
      reportFrequency: 60 # in seconds
logging:
  logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
    enabled: true
    host: localhost
    port: 5000
    queueSize: 512
You simply need to copy the configuration I implemented in JHipster.
First, set up the logstash-logback-encoder to report to logstash. Have a look at the GitHub project. You can set this up either in logback.xml or in Java code, similar to what I did in LoggingConfiguration.java.
Then set up Dropwizard Metrics to report metrics in the logs; have a look at their documentation.
Finally, you might have to edit the grok rules in the logstash.conf file so that they match your logs, which might not have exactly the same format as the ones JHipster produces.
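As a rough sketch of what the input side of that logstash.conf might look like when logs arrive from logstash-logback-encoder over TCP: the port matches the logging.logstash.port setting shown earlier, and the json_lines codec is an assumption based on the encoder's JSON output, not taken from the actual JHipster console config.

```conf
# Sketch only: minimal logstash input for logs shipped over TCP.
# Port 5000 matches logging.logstash.port above; the codec is assumed.
input {
  tcp {
    port  => 5000
    codec => json_lines
  }
}
```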
LoggingConfiguration uses three application properties: spring.application.name, server.port and eureka.instance.instanceId. You must make sure they are defined in your application-*.yml (or bootstrap-*.yml if your app is a Spring Cloud app).
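For reference, a minimal sketch of those three properties in an application-*.yml; the name, port, and instance-id scheme are illustrative placeholders, not values from the original setup.

```yaml
# Sketch: the three properties LoggingConfiguration reads (values are placeholders)
spring:
  application:
    name: myapp
server:
  port: 8081
eureka:
  instance:
    instanceId: myapp:${random.value}
```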