I am trying to load/write a Greenplum table via the Spark connector, with Spark deployed in Kubernetes:
val gscWriteOptionMap = Map(
  "url" -> "jdbc:postgresql://192.168.95.153:5432/xxx",
  "user" -> "xxx",
  "password" -> "xxxx",
  "dbschema" -> "public",
  "dbtable" -> "test_write",
  "server.port" -> "12900",
  "server.useHostname" -> "false",
  "server.hostEnv" -> "env.NODE_IP"
)
df.write.format("greenplum").options(gscWriteOptionMap).save()
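For completeness, loading uses the same format; a minimal sketch, assuming the connector accepts the same kind of option map for reads (gscReadOptionMap is a hypothetical name):

val gscReadOptionMap = Map(
  "url" -> "jdbc:postgresql://192.168.95.153:5432/xxx",
  "user" -> "xxx",
  "password" -> "xxxx",
  "dbschema" -> "public",
  "dbtable" -> "test_write"
)
// Reads go through the same gpfdist transfer mechanism as writes
val readDf = spark.read.format("greenplum").options(gscReadOptionMap).load()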
The connector starts a gpfdist server in the Spark executor in the k8s cluster; the executor pod has its own IP, which cannot be reached from outside the cluster.
So I made the executor container use a hostPort:
ports:
  - containerPort: 12900
    hostPort: 12900
and configured the Spark connector to use that port, 12900, which it did:
INFO GpfdistService: Successfully started Gpfdist service on 0.0.0.0:12900
I also configured it to use the node IP via server.hostEnv (I don't know whether this works), but it still uses the pod IP for gpfdist:
ERROR: connection with gpfdist failed for "gpfdist://10.244.1.45:12900/spark_081bfb94f3667fe1_3d9d854163f8f07a_2_32"
10.244.1.45 is the pod IP.
I followed the Greenplum Spark connector documentation. Any help would be appreciated.
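For reference, one way to expose the node IP to the executor pod is the Kubernetes downward API — a minimal pod-spec sketch, assuming the connector resolves the advertised gpfdist hostname from an environment variable named NODE_IP:

env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        # Downward API: injects the IP of the node this pod is scheduled on
        fieldPath: status.hostIP

Combined with the hostPort above, gpfdist would then be advertised at an address Greenplum can actually reach.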
Related
My Logstash is running on Kubernetes. Logstash keeps attempting to connect to Elasticsearch instead of Loki as output, even though I set XPACK_MONITORING_ENABLED: false in the env.
logstash.yml: |
  http.host: "0.0.0.0"
  log.level: debug
  xpack.monitoring.enabled: false
  path.config: /usr/share/logstash/pipeline
logstash.conf: |
  input {
    file {
      path => "/var/log/containers/*.log"
    }
  }
  filter {
    kubernetes {
      source => "path"
      target => "loki"
    }
  }
  output {
    stdout { codec => rubydebug }
    loki {
      url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
    }
  }
Console output:
[2022-12-13T06:31:40,411][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:31:50,775][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:10,414][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:20,910][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:40,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:50,998][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:33:10,410][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:33:21,328][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
Kubernetes config:
image: "grafana/logstash-output-loki:1.0.1"
imagePullPolicy: "IfNotPresent"
command:
- '/bin/sh'
- '-c'
- 'logstash-plugin install --no-verify logstash-filter-kubernetes && logstash -f /usr/share/logstash/pipeline/logstash.conf'
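One thing worth checking: the ConfigMap above only takes effect if it is mounted over the image's defaults, since the stock logstash.yml in the official Logstash image typically points monitoring at http://elasticsearch:9200. A minimal sketch of the relevant spec fragments, assuming the two files live in a ConfigMap named logstash-config (a hypothetical name):

volumeMounts:
  # Override the image's built-in logstash.yml so xpack.monitoring.enabled: false is honored
  - name: logstash-config
    mountPath: /usr/share/logstash/config/logstash.yml
    subPath: logstash.yml
  - name: logstash-config
    mountPath: /usr/share/logstash/pipeline/logstash.conf
    subPath: logstash.conf
volumes:
  - name: logstash-config
    configMap:
      name: logstash-config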
I am not able to connect to the MongoDB Atlas cluster that I have made. I entered the given line of code after I created the cluster and received this error:
I am not able to find any solution to this problem. Please help me.
MongoDB shell version v4.2.0
Enter password: Cannot get console mode 6
connecting to: mongodb://cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true
2019-09-03T17:07:19.299-0400 I NETWORK [js] Starting new replica set monitor for Cluster0-shard-0/cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-01-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-02-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-00-jigfx.mongodb.net:27017
2019-09-03T17:07:20.099-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor]
Confirmed replica set for Cluster0-shard-0 is Cluster0-shard-0/cluster0-shard-00-00-jigfx.mongodb.net:27017,cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017
2019-09-03T17:07:20.719-0400 I NETWORK [js] Marking host cluster0-shard-00-00-jigfx.mongodb.net:27017 as failed :: caused by :: Location40659:can't connect to new replica set master [cluster0-shard-00-00-jigfx.mongodb.net:27017], err: AuthenticationFailed: Missing expected field "pwd"
*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.
2019-09-03T17:07:21.522-0400 E QUERY [js] Error: Missing expected field "pwd" :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2019-09-03T17:07:21.524-0400 F - [main] exception: connect failed
2019-09-03T17:07:21.524-0400 E - [main] exiting with code 1
The expected result is a prompt that asks me for the password to connect to the cluster, but instead it instantly responds with "Cannot get console mode 6".
Try adding --password **** to the end of the command.
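A sketch of the full invocation, with <user> and <password> as placeholders for your Atlas credentials; supplying a value to --password skips the interactive prompt that fails with the console-mode error:

mongo "mongodb://cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true" --username <user> --password <password>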
I recently upgraded from Elasticsearch 1.5 to Elasticsearch 5.5, and I can connect fine remotely, but not from localhost.
I updated elasticsearch.yml to be able to connect remotely:
network.host: 0.0.0.0
And the elasticsearch logs look fine to me:
[2017-11-05T22:44:23,441][INFO ][o.e.n.Node ] [node1] starting ...
[2017-11-05T22:44:23,655][INFO ][o.e.t.TransportService ] [node1] publish_address {[ip address]:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}, {[ip address]:9300}
[2017-11-05T22:44:23,666][INFO ][o.e.b.BootstrapChecks ] [node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-05T22:44:26,712][INFO ][o.e.c.s.ClusterService ] [node1] new_master {node1}{s-J7aStjQFuwor-WY6bSCQ}{nv4GVIQ6SwScPiebRHBQBQ}{localhost}{[ip address]:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-05T22:44:26,733][INFO ][o.e.h.n.Netty4HttpServerTransport] [node1] publish_address {[ip address]:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}, {[ip address]:9200}
[2017-11-05T22:44:26,734][INFO ][o.e.n.Node ] [node1] started
I am wondering if it is related to the proxy, since when I try to run curl localhost:9200, I get the following:
...
<p>The following error was encountered while trying to retrieve the URL: http://localhost:9200/</p>
<blockquote id="error">
<p><b>Connection to 127.0.0.1 failed.</b></p>
</blockquote>
<p id="sysmsg">The system returned: <i>(111) Connection refused</i></p>
<p>The remote host or network may be down. Please try the request again.</p>
...
Any ideas or tips on how to narrow down the issue would be helpful.
It turned out an http_proxy bash variable was set, which was proxying all outbound traffic. Once this variable was unset, the curl command worked.
Before this fix was made, I had also noticed that our Java applications were still able to connect to Elasticsearch locally; it was just the curl command that wasn't working.
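To check for this situation in your own shell — a quick sketch of the diagnosis and fix:

# List any proxy variables that curl will honor
env | grep -i proxy

# Unset them for this shell session and retry
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
curl localhost:9200

# Or bypass the proxy for a single request
curl --noproxy localhost localhost:9200

This also explains why the Java applications kept working: the JVM does not read http_proxy from the environment by default.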
I am trying out a Hadoop/Spark cluster in Google Compute Engine through the "Launch click-to-deploy software" feature.
I have created 1 master and 2 slave nodes, and I can launch spark-shell on the cluster, but when I try to launch spark-shell from my computer, it fails.
I launch:
./bin/spark-shell --master spark://IP or Hostname:7077
And I get this stack trace:
15/04/09 10:58:06 INFO AppClient$ClientActor: Connecting to master
akka.tcp://sparkMaster#IP or Hostname:7077/user/Master...
15/04/09 10:58:06 WARN AppClient$ClientActor: Could not connect to
akka.tcp://sparkMaster#IP or Hostname:7077: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster#IP or Hostname:7077
15/04/09 10:58:06 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster#IP or Hostname:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: IP or Hostname: unknown error
Please let me know how to overcome this problem.
See the comment from Daniel Darabos. By default, all incoming connections are blocked except for SSH, RDP and ICMP. To be able to connect from the Internet to the Hadoop master instance, you must first open port 7077 for the 'hadoop-master' tag in your project:
gcloud compute --project PROJECT firewall-rules create allow-spark \
--allow TCP:7077 \
--target-tags hadoop-master
See Firewalls, Adding a firewall, and gcloud compute firewall-rules create in the GCE public documentation for further details and all the possibilities.
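To confirm the rule took effect, you can list the project's firewall rules (PROJECT is the same placeholder as above):

gcloud compute --project PROJECT firewall-rules list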
I am trying to configure two Windows servers in my network as a Cassandra cluster.
I did some reading on various sites and changed the below in cassandra.yaml.
After changing the default value of 127.0.0.1 to the actual IP, the Cassandra service is not starting.
I also added a mapping from the actual IP to localhost in the (Windows) hosts file.
After making the above change, the service comes up when I start it, but it stops immediately.
The reason I am changing this IP is to make this a cluster with a two-node setup.
Please let me know if I am missing something.
Version: DataStax Community version of Cassandra
Server: Windows
Thx
Muthu
Message from Cassandra.txt in the logs dir:
ERROR [main] 2014-09-18 11:43:12,155 DatabaseDescriptor.java (line 116) Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml Caused by: Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config#34e5190a; No suitable constructor with 2 arguments found for class org.apache.cassandra.config.SeedProviderDef in 'reader', line 8, column 1: cluster_name: 'Test Cluster'
If you want to create a Cassandra cluster you must have at least two nodes and configure /etc/cassandra/cassandra.yaml.
cassandra.yaml
cluster_name: 'Some Cluster Name'
listen_address: [Current IP]
rpc_address: [Current IP]
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "[Current IP], [Remote IP]"

(Note the spelling seed_provider and the nested class_name/parameters structure; a malformed seed_provider block is exactly what produces the "No suitable constructor ... SeedProviderDef" error in your log.)
Note: seeds must list at least two IPs, which must be reachable from each other.
Clean the data directories and start the Cassandra instance:
sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*
Note: the Cassandra instance must be stopped before cleaning those folders.
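Once both nodes are up, you can verify they joined the same ring with the nodetool utility that ships with Cassandra (on Windows, run it from the installation's bin directory):

# Each node should be listed with state UN (Up/Normal)
nodetool status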