connecting socketcluster servers - node.js

I'm trying to implement this solution (on Win10 x64), but for some reason all the SocketCluster nodes refuse to communicate with each other.
This is my current configuration:
1 StateServer [7777]
1 BrokerServer [8888]
2 SocketCluster servers running on ports [ 8000, 8001]
1 LoadBalancer [2000] to divide the traffic between the 2 nodes.
I ensured that both the State and Broker servers are listening:
TCP [::]:7777 [::]:0 LISTENING
TCP [::]:8888 [::]:0 LISTENING
From what I've understood so far, the BrokerServer along with the SocketCluster nodes should all connect to the StateServer(?)
I could successfully connect the BrokerServer to the StateServer, but whenever I try to connect any of the SocketCluster instances, it reports 'Socket hung up' errors.
StateServer:
SC Cluster State Server is listening on port 7777
Server d08298c6-523f-4c1b-9fcc-efd4e92fab22 at address undefined on port 8888 joined the cluster
Client 10612bde-514f-40d3-9340-7179a1901376 at address undefined joined the cluster
Cluster state converged to active:["ws://[undefined]:8888"]
SocketCluster instance:
{ SocketProtocolError: Socket hung up
at Emitter.SCSocket._onSCClose (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\scsocket.js:596:15)
at Emitter.<anonymous> (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\scsocket.js:285:12)
at Emitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\component-emitter\index.js:131:20)
at Emitter.SCEmitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\sc-emitter\index.js:28:26)
at Emitter.SCTransport._onClose (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\sctransport.js:175:30)
at WebSocket.wsSocket.onerror (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\sctransport.js:104:12)
at WebSocket.onError (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\ws\lib\WebSocket.js:452:14)
at emitOne (events.js:96:13)
at WebSocket.emit (events.js:188:7)
at WebSocket.EventEmitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\socketcluster\node_modules\sc-domain\index.js:12:31)
name: 'SocketProtocolError',
message: 'Socket hung up',
code: 1006 }

Are you running those instances in Docker containers by any chance?
Based on the log output that you're getting from the state server (address undefined), it looks like the scc-state instance cannot figure out your instances' IP addresses. This can happen for several reasons. For example, running an instance inside a Docker container can obscure that instance's real IP address. It's also possible that running SCC on Windows could cause similar problems.
The solution to this problem is to set an SCC_INSTANCE_IP environment variable when launching each instance. This environment variable should hold the IP address of the instance, which other instances can use to connect to it (if using Docker, you can use the docker inspect command to find the private network IP address of a specific container).
SCC_INSTANCE_IP can be either a private IP address, public IP address or a hostname.
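As a rough sketch (the IP addresses and script names below are placeholders for your own machines and launch scripts), on Windows you could set the variable in the console before starting each instance:
:: console running the scc-state instance
set SCC_INSTANCE_IP=192.168.0.10
node state-server.js
:: console running a SocketCluster instance
set SCC_INSTANCE_IP=192.168.0.10
node server.js
The same applies to the broker instance; each process just needs SCC_INSTANCE_IP set to an address the other instances can reach.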

It turned out that scaling the cluster horizontally isn't working properly on Windows yet (as of the current version, v1.2.1).
Neither SocketCluster node communicates with the BrokerServer, for some reason.

Related

Pyads connection refused with Beckhoff running Twincat 3

I am trying to make a connection from a server running Ubuntu to a Beckhoff PLC with TwinCAT 3. With Windows everything works fine, but with the same server on Linux I can't get a connection.
The Linux server has a static IP, and in the route manager on the PLC I can find the route and see the server. I have tried adding the route via the route manager on the PLC and with "add_route_to_plc", but both ways my connection is refused. I have already turned off all firewalls. Does anyone have an idea of what goes wrong here? In the attachment I have added some pictures of my settings and the code that I try to run.
Python error: "connection closed by remote"
Python code:
import pyads
SENDER_AMS = '192.168.1.180.1.1'
PLC_IP = '192.168.1.100'
PLC_USERNAME = 'Administrator'
PLC_PASSWORD = '1'
ROUTE_NAME = 'GID_TEST_ROUTE'
HOSTNAME = 'Grid-stabilizer'
pyads.open_port()
pyads.set_local_address(SENDER_AMS)
pyads.add_route_to_plc(SENDER_AMS, HOSTNAME, PLC_IP, PLC_USERNAME, PLC_PASSWORD, route_name=ROUTE_NAME)
pyads.close_port()
plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1)
plc.open()
plc.read_state()
If you are running Python on Linux and the PLC on Windows, try
plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1, PLC_IP)
This will create a route on the Linux system; in your code the IP is missing, so a proper route is never created.
Also check the ADS port of your PLC. It should be 851, which is the default TwinCAT 3 PLC port that pyads.PORT_TC3PLC1 refers to.
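Putting it together, a minimal sketch of the connection part (the IP, AMS net ID and port constant are taken from the question; pyads 3.x API assumed):
import pyads

PLC_IP = '192.168.1.100'
PLC_AMS_ID = '192.168.1.100.1.1'

# Passing the PLC's IP as the third argument lets pyads create the route
# on the Linux side; PORT_TC3PLC1 is the default TC3 PLC runtime port (851).
plc = pyads.Connection(PLC_AMS_ID, pyads.PORT_TC3PLC1, PLC_IP)
plc.open()
print(plc.read_state())
plc.close()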

How to run a http server on EMR master node of a Spark application

I have a Spark streaming application (Spark 2.4.4) running on AWS EMR 5.28.0. In the driver application on the master node, besides setting up the Spark streaming job, I am also running an HTTP server (Akka HTTP 10.1.6) which can be queried for data from the driver application. I bind to port 6161 like the following:
val bindingFuture: Future[ServerBinding] = Http().bindAndHandle(myapiroutes, "127.0.0.1", 6161)
try {
  bindingFuture.map { serverBinding =>
    log.info(s"AlertRestApi bound to ${serverBinding.localAddress}")
  }
} catch {
  case ex: Exception => {
    log.error(s"Failed to bind to 127.0.0:6161")
    system.terminate()
  }
}
then I start spark streaming:
ssc.start()
When I test this on local Spark, I am able to access http://localhost:6161/myapp/v1/data and get data from Spark streaming; everything is good so far.
However, when I run this application on AWS EMR, I cannot access port 6161. I SSH into the driver node and try to curl my URL, and it gives me this error message:
[hadoop@ip-xxx-xx-xx-x ~]$ curl http://xxx.xx.xx.x:6161/myapp/v1/data
curl: (7) Failed to connect to xxx.xx.xx.x port 6161: Connection refused
When I look into the log on the driver node, I do see the port is bound (why the host shows 0:0:0:0:0:0:0:0, I don't know; that is also what I see in my dev testing, where it works and I am able to access the URL):
20/04/13 16:53:26 INFO MyApp: MyRestApi bound to /0:0:0:0:0:0:0:0:6161
So my question is: what should I do so that I can access the API at port 6161 on the driver node? I realize the YARN resource manager may be involved, but I know nothing about the YARN resource manager that would point me to where to investigate.
Please help. Thanks.
Are you specifying 127.0.0.1 as the host name, or 0.0.0.0?
127.0.0.1 will work on your local system but not in AWS, as it is a loopback address. In that case you need to use 0.0.0.0 as the host name.
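In other words, a minimal change to the bind call from your question (everything else stays the same):
val bindingFuture: Future[ServerBinding] = Http().bindAndHandle(myapiroutes, "0.0.0.0", 6161)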
Also make sure that the port is open and access is allowed from your IP. To do that, go to the inbound rules for your instance's security group and add 6161 under a custom TCP rule if not done already.
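If you prefer the CLI over the console, something along these lines should work (the security group ID and source CIDR are placeholders for your EMR master's security group and your client IP):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6161 --cidr 203.0.113.10/32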
Let me know if this makes any difference

Host resolution error while using node-rdkafka

I'm running node-rdkafka in a Node.js application. The consumer hangs indefinitely without pulling any messages from Kafka (it works on localhost).
It emits the error below:
{ Error: Local: Host resolution failure
origin: 'local',
message: 'host resolution failure',
code: -1,
errno: -1,
stack: 'Error: Local: Host resolution failure' }
The application works up until the point where it has to receive data from Kafka. The Kafka instance itself is fine; I validated it by producing and consuming messages using the console tools.
Any help with debugging why this is occurring is much appreciated.
Sample consumer code here - https://github.com/Blizzard/node-rdkafka/blob/master/examples/consumer-flow.md
This issue happens when your client and the broker are on different networks.
The simple hack is to add a hosts entry for the hostname used in advertised.listeners.
For example,
advertised.listeners=PLAINTEXT://kafka:9092
Then add an entry in /etc/hosts pointing that hostname at your Kafka broker's IP. For example, if the broker IP is 192.168.1.1:
192.168.1.1 kafka
You can use the kafkacat utility to check your broker's IP:
kafkacat -b kafka:9092 -L
It will return metadata about the brokers.
You then need to check whether the returned broker address is reachable from your machine.
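For example, if kafkacat reports the broker as kafka:9092 and your hosts entry maps that name to 192.168.1.1, a quick reachability check is:
nc -vz 192.168.1.1 9092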
For a better understanding of this issue, you can refer to https://www.confluent.io/blog/kafka-listeners-explained/
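For reference, here is a minimal node-rdkafka consumer sketch (the group and topic names are placeholders); the value of 'metadata.broker.list' is the address that must resolve and be reachable from the client machine, which is why the advertised.listeners and hosts-file changes above matter:
const Kafka = require('node-rdkafka');

const consumer = new Kafka.KafkaConsumer({
  'group.id': 'example-group',            // placeholder group id
  'metadata.broker.list': 'kafka:9092'    // must match advertised.listeners and resolve from this host
}, {});

consumer.connect();

consumer
  .on('ready', () => {
    consumer.subscribe(['example-topic']); // placeholder topic
    consumer.consume();                    // flowing mode: messages arrive via the 'data' event
  })
  .on('data', (message) => {
    console.log(message.value.toString());
  });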
I had this exact problem when running Kafka locally using the quick start instructions from https://kafka.apache.org/quickstart
For me, adding the following two lines to config/server.properties before starting the Kafka server solved the issue:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
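After editing config/server.properties, restart the broker for the change to take effect (paths as in the quickstart layout):
bin/kafka-server-start.sh config/server.properties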

'The requested address is not valid in its context' while trying to connect to ArangoDB server on LAN

I have two machines on a LAN, and I'd like to connect to the ArangoDB server on one of them from the other.
The first one has the address 192.168.0.105 and this arangod.conf:
[server]
endpoint = tcp://0.0.0.0:8529
storage-engine = auto
The other one has the address 192.168.0.100 and this arangod.conf:
[server]
endpoint = tcp://192.168.0.105:8529
storage-engine = auto
ArangoDB on the first machine is working. When I try to start ArangoDB on the second machine, I see the following error:
2018-08-21T09:46:15Z [2724] INFO {authentication} Jwt secret not specified, generating...
2018-08-21T09:46:15Z [2724] INFO ArangoDB 3.3.12 [win64] 64bit, using build tags/v3.3.12-0-g225095d762, VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.0.2a 19 Mar 2015
2018-08-21T09:46:15Z [2724] INFO using storage engine mmfiles
2018-08-21T09:46:15Z [2724] INFO {cluster} Starting up with role SINGLE
2018-08-21T09:46:15Z [2724] INFO {authentication} Authentication is turned on (system only)
2018-08-21T09:46:18Z [2724] INFO using endpoint 'http+tcp://192.168.0.105:8529' for non-encrypted requests
2018-08-21T09:46:18Z [2724] ERROR {communication} unable to bind to endpoint 'http+tcp://192.168.0.105:8529': The requested address is not valid in its context
2018-08-21T09:46:18Z [2724] WARNING {communication} failed to open endpoint 'http+tcp://192.168.0.105:8529' with error: The requested address is not valid in its context
2018-08-21T09:46:18Z [2724] FATAL failed to bind to endpoint 'http+tcp://192.168.0.105:8529'. Please check whether another instance is already running using this endpoint and review your endpoints configuration.
I've already created rules in the Windows firewall and in the router.
Test-NetConnection results are:
PS C:\Users\> Test-NetConnection -ComputerName 192.168.0.105 -Port 8529
ComputerName : 192.168.0.105
RemoteAddress : 192.168.0.105
RemotePort : 8529
SourceAddress : 192.168.0.100
TcpTestSucceeded : True
What else should I do?
Not sure what you're trying to do here... connect one server to another server? That is bound to fail. Don't you want to run a server on one machine and connect to it from the other computer on the local network using arangosh, or simply use the web interface?
The endpoint must be an address used by a network interface of your local computer. It can't be the address of another machine.
Setups like clusters require a lot more configuration (if done bare-metal).
For an overview of deployment modes including multi-machine setups you may want to check the work-in-progress documentation: https://docs.arangodb.com/devel/Manual/Deployment/
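So on the second machine (192.168.0.100) you would not point arangod at the first machine at all; instead you connect a client to it, for example with arangosh (the username here is just the default):
arangosh --server.endpoint tcp://192.168.0.105:8529 --server.username root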

DC/OS 1.9 VIP load balancing not working for advertised ports

When I publish a service with a VIP, the advertised address does not route properly to the advertised port. For example, for a MariaDB Galera 3-node cluster service with a VIP specified as:
"labels": {
"VIP_0": "/mariadb-galera:3306"
}
On the configuration tab of the service page (and according to the docs), the load balanced address is:
mariadb-galera.marathon.l4lb.thisdcos.directory:3306
I can ping the DNS name just fine, but...
When I try to connect a front-end service (Drupal7, wordpress) to consume this load balanced address:port combination, there will be numerous connection failures and timeouts. It isn't that it never works but that it works quite sporadically, if at all. Drupal7 dies almost immediately and starts kicking up Bad Gateway errors.
What I have found through experimentation is that if I specify a hostPort for the service in question, the load balanced address will work as long as I use the hostPort value, and not the advertised load balanced service port as above. In this specific case I specified a hostPort of 3310.
"network":"USER",
"portMappings": [
{
"containerPort": 3306,
"hostPort": 3310,
"servicePort": 10000,
"name": "mariadb-galera",
"labels": {
"VIP_0": "/mariadb-galera:3306"
}
}
Then if I use the load balanced address (mariadb-galera.marathon.l4lb.thisdcos.directory) with the host port value (3310) in my Drupal7 settings.php, the front end connects and works fine.
I've noticed similar behaviour with custom applications connecting to MongoDB backends, also in a DC/OS environment... it seems the load balanced address/port combination specified never works reliably, but if you substitute the hostPort value, it does.
The docs clearly state that:
address and port is load balanced as a pair rather than individually.
(from https://docs.mesosphere.com/1.9/networking/dns-overview/)
Yet I am unable to connect reliably when I specify the VIP-designated port, while IT DOES WORK when I use the hostPort (and will not work at all unless I designate a specific hostPort in the service definition JSON). Whether or not this approach is actually load balanced remains a question to me, based on the wording in the documentation.
I must be doing something wrong, but I am at a loss... any help is appreciated.
My cluster nodes are VMWare virtual machines.
The VIP label shouldn't start with a slash:
"container": {
"portMappings": [
{
"containerPort": 3306,
"name": "mariadb-galera",
"labels": {
"VIP_0": "mariadb-galera:3306"
}
}
}
The service should then be available as <VIP label>.marathon.l4lb.thisdcos.directory:<VIP port>, in this case:
mariadb-galera.marathon.l4lb.thisdcos.directory:3306
you can test it using nc:
nc -z -w5 mariadb-galera.marathon.l4lb.thisdcos.directory 3306; echo $?
The command should return 0.
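As an additional application-level check from a node inside the cluster, you could also try a MySQL client against the VIP address (the credentials are placeholders):
mysql -h mariadb-galera.marathon.l4lb.thisdcos.directory -P 3306 -u root -p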
When you're not sure about exported DNS names you can list all of them from any DC/OS node:
curl -s http://198.51.100.1:63053/v1/records | grep mariadb-galera
