JHipster elasticsearch configuration for multiple cluster nodes

I am developing a JHipster project with Elasticsearch. As the Using Elasticsearch page describes, I used the spring.data.jest.uri property for production use as follows:
spring:
    # ......... omitted to keep short
    data:
        mongodb:
            uri: mongodb://localhost:27017
            database: PROJ1
        jest:
            uri: http://172.20.100.2:9200
What I want is to give more than one URI for Elasticsearch, because I have set up a 3-node cluster. If one node goes down, another live node should be used. Is such a configuration possible, and if so, how should I do it?
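I am not sure the spring.data.jest.uri property itself accepts more than one address, but the underlying Jest client does take a list of server URIs, so one possible workaround (a sketch, not an official JHipster feature) is to define your own JestClient bean listing all three nodes and let Jest spread requests over them. The two extra node addresses below are placeholders, and this assumes the spring-data-jest auto-configuration backs off when a JestClient bean is already defined:

import java.util.Arrays;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;

@Configuration
public class JestClusterConfiguration {

    @Bean
    public JestClient jestClient() {
        // List every node of the cluster; the last two addresses are placeholders.
        HttpClientConfig clientConfig = new HttpClientConfig.Builder(Arrays.asList(
                "http://172.20.100.2:9200",
                "http://172.20.100.3:9200",
                "http://172.20.100.4:9200"))
                .multiThreaded(true)
                .build();
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(clientConfig);
        return factory.getObject();
    }
}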

Related

Micronaut fails to connect to Keyspaces

I'm trying to integrate my service with AWS Cassandra (Keyspaces) with the following config:
cassandra:
  default:
    advanced:
      ssl: true
      ssl-engine-factory: DefaultSslEngineFactory
      metadata:
        schema:
          enabled: false
      auth-provider:
        class: PlainTextAuthProvider
        username: "XXXXXX"
        password: "XXXXXX"
    basic:
      contact-points:
        - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
      load-balancing-policy:
        local-datacenter: "${CASSANDRA_DATA_CENTER}:datacenter1"
      session-keyspace: "keyspace"
Whenever I run the service, it fails to load with the following error:
Message: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142, hostId=null, hashCode=7296b27b): [com.datastax.oss.driver.api.core.DriverTimeoutException: [s0|control|id: 0x1f1c50a1, L:/172.17.0.3:54802 - R:cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142] Protocol initialization request, step 1 (OPTIONS): timed out after 5000 ms]
There's very little documentation about the cassandra-micronaut library, so I'm not sure what I'm doing wrong here.
UPDATE:
For clarity: the values of our environment variables are as follows:
export CASSANDRA_HOST=cassandra.eu-west-1.amazonaws.com
export CASSANDRA_PORT=9142
export CASSANDRA_DATA_CENTER=eu-west-1
Note that even when I hard-coded the values into my application.yml, the problem continued.
I think you need to adjust your variables in this example. The common syntax for Apache Cassandra or Amazon Keyspaces is host:port. For Amazon Keyspaces the port is always 9142.
Try the following:
contact-points:
- ${CASSANDRA_HOST}:${CASSANDRA_PORT}
Or simply hard-code them at first:
contact-points:
- cassandra.eu-west-1.amazonaws.com:9142
So this:
contact-points:
- ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
Doesn't match up with this:
Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142,
Double-check which IP(s) and port Cassandra is broadcasting on (usually seen with nodetool status) and adjust the service to not look for it on 127.0.0.1.
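If the corrected contact point still times out, one way to take Micronaut out of the equation is to try the underlying DataStax Java driver (4.x) directly with the same endpoint, port and datacenter. A rough sketch, not a definitive fix: the credentials are placeholders, and it assumes the Amazon root certificate that Keyspaces requires is already in your JVM truststore.

import java.net.InetSocketAddress;

import javax.net.ssl.SSLContext;

import com.datastax.oss.driver.api.core.CqlSession;

public class KeyspacesSmokeTest {

    public static void main(String[] args) throws Exception {
        // Endpoint, port and datacenter taken from the question; credentials are placeholders.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("cassandra.eu-west-1.amazonaws.com", 9142))
                .withLocalDatacenter("eu-west-1")
                .withAuthCredentials("XXXXXX", "XXXXXX")
                .withSslContext(SSLContext.getDefault()) // Keyspaces only accepts TLS connections
                .build()) {
            // If this prints a row, basic connectivity (DNS, port 9142, TLS, auth) works
            // and the problem is in the Micronaut configuration itself.
            System.out.println(session.execute("SELECT release_version FROM system.local").one());
        }
    }
}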

Routing issue in neo4j 4.0 with multiple databases

I have created a Neo4j and GraphQL application with Neo4j 4.0. In my application, I used two Neo4j databases. These instances run in a Docker container on my PC. But when I tried to run a query using the GraphQL playground, the GraphQL server gave the following error:
"Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1592037819743, routers=[], readers=[], writers=[]]"
I created the Neo4j driver instance and session instance as follows:
const neo4j = require('neo4j-driver');

const driver = neo4j.driver(
  process.env.NEO4J_URI || "neo4j://localhost:7687",
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  )
);
const session = driver.session({
  database: 'mydb',
});
I couldn't find any way to fix this issue. Can someone help me fix this? Thank you.
If you use a single server, use bolt:// as the protocol (e.g. bolt://localhost:7687 instead of neo4j://localhost:7687). Then the driver will not ask the server for routing tables.

AWS ECS error in prisma container - environment variable PRISMA_CONFIG

I'm new to AWS, and I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster: it throws an error about the PRISMA_CONFIG environment variable in the prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma and MongoDB, and it works fine.
Now on ECS, I created a task definition, and for the prisma container I tried to copy the YAML config from my local docker-compose.yml file to make it work.
There is a field called "ENVIRONMENT"; I've entered the value in the environment variables, but it's just not working: it throws the error while the cluster is running, and then the task stops.
The YAML spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG, and the following are the values I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma#mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
And the errors:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected that all containers would run without errors, but instead the container stopped after running for a minute.
Please help with this, or suggest another way to deploy to AWS.
Thank you very much.
I've been looking for a similar solution to load the prisma config without the multiline string.
There are repositories that load the prisma environment variables separately without a prisma config:
Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig uses the following env variables via an env_file. So I'm assuming you can just pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database. You would need to tailor this to suit your values, but in theory you should just be able to pass these variables one by one as individual variables in AWS's GUI.
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14

Invalid Host Header when using elasticsearch client

When using the elasticsearch client (from the elasticsearch npm package, version 15.4.1), the AWS Elasticsearch service complains about an Invalid Host Header. This happens for every request, even though the requests themselves succeed.
I double-checked the configuration for initializing the elasticsearch client and the parameter "host" is correctly formed.
let test = require('elasticsearch').Client({
  host: 'search-xxx.us-west-1.es.amazonaws.com',
  connectionClass: require('http-aws-es')
});
I expected to get a clean ElasticsearchRequest without a corresponding InvalidHostHeaderRequests (I can see these logs on the Cluster health dashboard of the Amazon Elasticsearch Service).
Found the problem.
When using the elasticsearch library to connect to an AWS ES cluster, the previous syntax can lead to problems, so the best way to initialize the client is to specify the entire 'host' object as follows:
let test = require('elasticsearch').Client({
  host: {
    protocol: 'https',
    host: 'search-xxx.us-west-1.es.amazonaws.com',
    port: '443',
    path: '/'
  },
  connectionClass: require('http-aws-es')
});
The problem here is probably that the AWS ES cluster expects the host field inside the host object, and this leads to the "Invalid Host Header" issue. Hope this helps the community write better code.
Refer to https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/16.x/host-reference.html for reference.

Hazelcast Eureka Cloud Discovery Plugin not working

We have implemented Hazelcast as an embedded cache in our Spring Boot app and need a way for Hazelcast members within a "cluster group" to discover each other dynamically, so that we don't have to provide every possible IP address/port where Hazelcast might be running.
We came across this hazelcast plugin on github:
https://github.com/hazelcast/hazelcast-eureka which seems to provide this feature, using Eureka as the discovery/registration tool.
As mentioned in the GitHub documentation, the hazelcast-eureka-one library is included on our Boot app's classpath; we also disabled TCP-IP & multicast discovery and added the discovery strategy below in hazelcast.xml:
<discovery-strategies>
  <discovery-strategy class="com.hazelcast.eureka.one.EurekaOneDiscoveryStrategy" enabled="true">
    <properties>
      <property name="self-registration">true</property>
      <property name="namespace">hazelcast</property>
    </properties>
  </discovery-strategy>
</discovery-strategies>
Our application also provides a configured EurekaClient, which is what we autowire and inject into this plugin implementation:
Config hazelcastConfig = new FileSystemXmlConfig(hazelcastConfigFilePath);
EurekaOneDiscoveryStrategyFactory.setEurekaClient(eurekaClient);
hazelcastInstance = Hazelcast.newHazelcastInstance(hazelcastConfig);
Problem:
We are able to start 2 instances of our Spring Boot app on the same machine, and we notice that each app starts an embedded Hazelcast instance on a separate port (5701, 5702). But they don't seem to recognize each other as running within a cluster; this is what we see in the app logs when the 2nd instance is starting:
Members [1] {
Member [10.41.70.143]:5702 - 7c42eb24-3fa0-45cb-9394-17175cc92b9c this
}
17-12-13 12:22:44.480 WARN [main] c.h.i.Node.log(LoggingServiceImpl.java:168) - [10.41.70.143]:5702 [domain-services] [3.8.2] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
which seems to indicate that both Hazelcast instances are running independently and don't recognize the other instance running in a cluster/group.
Also, immediately after restart we see this exception thrown frequently on both nodes:
java.lang.ClassCastException: com.hazelcast.nio.tcp.MemberWriteHandler cannot be cast to com.hazelcast.nio.ascii.TextWriteHandler
    at com.hazelcast.nio.ascii.TextReadHandler.<init>(TextReadHandler.java:109) ~[hazelcast-3.8.2.jar:3.8.2]
    at com.hazelcast.nio.tcp.SocketReaderInitializerImpl.init(SocketReaderInitializerImpl.java:89) ~[hazelcast-3.8.2.jar:3.8.2]
which seems to indicate there is an incompatibility between Hazelcast libraries on the classpath?
It seems like your Eureka service returns the wrong ports. Hazelcast tries to connect to 8080 and other ports in the same range, whereas Hazelcast itself uses 5701. I'm not exactly sure why this happens, but it feels like you are requesting the wrong service name from Eureka, which ends up returning the HTTP (Tomcat?!) ports instead of the separate Hazelcast service that should be registered.
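One way to confirm this is to query Eureka with the same EurekaClient you hand to EurekaOneDiscoveryStrategyFactory and print what it returns for the registered service. A small sketch ("MY-APP" is a placeholder for whatever name the service actually registers under):

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;
import com.netflix.discovery.shared.Application;

public class EurekaRegistrationDump {

    // Pass in the same EurekaClient bean that is given to EurekaOneDiscoveryStrategyFactory.
    static void dumpRegistration(EurekaClient eurekaClient) {
        Application app = eurekaClient.getApplication("MY-APP"); // placeholder service name
        if (app == null) {
            System.out.println("No application registered under that name");
            return;
        }
        for (InstanceInfo instance : app.getInstances()) {
            // If this prints the Tomcat port (e.g. 8080) instead of 5701, Hazelcast members
            // will try to join on the wrong port.
            System.out.println(instance.getIPAddr() + ":" + instance.getPort()
                    + " metadata=" + instance.getMetadata());
        }
    }
}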
