Infinispan Hotrod remote server exception

I am trying to use an Infinispan remote cache on a remote server. I am using a
Java application to connect to the server and store objects in memory.
When I run this application against a local Hot Rod server using the loopback address
(127.0.0.1), it works. However, when I try to use it with the remote server, it fails.
Here is the code snippet:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodRemoteClient {
    public void start() {
        RemoteCacheManager manager = new RemoteCacheManager("10.100.9.28");
        RemoteCache<Integer, Ticket> cache = manager.getCache();
    }
}
Here is the exception:
ISPN004017: Could not fetch transport
org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /10.100.9.28:11222
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:90)
at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:254)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:145)
at org.infinispan.client.hotrod.impl.operations.FaultTolerantPingOperation.getTransport(FaultTolerantPingOperation.java:44)
at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:67)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.ping(RemoteCacheImpl.java:432)
at org.infinispan.client.hotrod.RemoteCacheManager.ping(RemoteCacheManager.java:538)
at org.infinispan.client.hotrod.RemoteCacheManager.createRemoteCache(RemoteCacheManager.java:520)
at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:452)
at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:447)
at com.packtpub.infinispan.chapter2.HotRodRemoteClient.start(HotRodRemoteClient.java:17)
at com.packtpub.infinispan.chapter2.HotRodRemoteClient.main(HotRodRemoteClient.java:65)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:81)
... 13 more
I can ping the server 10.100.9.28:
shell> ping 10.100.9.28
PING 10.100.9.28 (10.100.9.28): 56 data bytes
64 bytes from 10.100.9.28: icmp_seq=0 ttl=64 time=0.261 ms
64 bytes from 10.100.9.28: icmp_seq=1 ttl=64 time=0.184 ms
64 bytes from 10.100.9.28: icmp_seq=2 ttl=64 time=0.290 ms
64 bytes from 10.100.9.28: icmp_seq=3 ttl=64 time=0.285 ms
I use Infinispan 5.1.6, Maven 3.0.4, and JDK 6u33.
My Hot Rod server runs on CentOS 5.5, and the Java application runs on Mac OS X 10.7.
Firewalls on both machines are disabled.
I suspect this is a configuration problem. What should I change to make it work?
Thank you,
Jacob Nikom

Check in the configuration file standalone.xml that the socket binding for Hot Rod either has no interface attribute or uses one that is reachable from other machines.
Default standalone.xml:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
    <socket-binding name="ajp" port="8009"/>
    <socket-binding name="hotrod" interface="management" port="11222"/>
    <socket-binding name="http" port="8080"/>
    <socket-binding name="https" port="8443"/>
    <socket-binding name="memcached" interface="management" port="11211"/>
    <socket-binding name="remoting" port="4447"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
</socket-binding-group>
Try changing
<socket-binding name="hotrod" interface="management" port="11222"/>
to
<socket-binding name="hotrod" port="11222"/>
so that the Hot Rod endpoint binds to the public interface instead of the management interface (which defaults to 127.0.0.1).

You should also pass the port to the constructor:
RemoteCacheManager manager = new RemoteCacheManager("10.100.9.28", port);
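For example, using the default Hot Rod port that appears in the stack trace (a sketch of the code from the question with the port added; Ticket is the class from the question, and error handling is omitted):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodRemoteClient {
    public void start() {
        // 11222 is the default Hot Rod port and the one shown in the exception.
        RemoteCacheManager manager = new RemoteCacheManager("10.100.9.28", 11222);
        try {
            // getCache() pings the server, so reaching the next line means the connection works.
            RemoteCache<Integer, Ticket> cache = manager.getCache();
            System.out.println("Connected to cache: " + cache.getName());
        } finally {
            manager.stop();  // release the underlying transport pool
        }
    }
}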

Properties props = new Properties();
props.put("infinispan.client.hotrod.server_list", "127.0.0.1:11222");
RemoteCacheManager manager = new RemoteCacheManager(props);
RemoteCache defaultCache = manager.getCache();
cited from: https://access.redhat.com/site/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.1/html/Getting_Started_Guide/Create_a_New_RemoteCacheManager.html
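Adapting that to the question means pointing server_list at the remote host rather than the loopback address. A rough sketch (the class name HotRodPropertiesClient is made up, and String keys/values are used only to verify the round trip):

import java.util.Properties;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodPropertiesClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Remote server from the question instead of 127.0.0.1
        props.put("infinispan.client.hotrod.server_list", "10.100.9.28:11222");
        RemoteCacheManager manager = new RemoteCacheManager(props);
        try {
            RemoteCache<String, String> cache = manager.getCache();
            cache.put("ping", "pong");
            System.out.println(cache.get("ping"));  // prints "pong" if the connection works
        } finally {
            manager.stop();
        }
    }
}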
Hope this helps!

Related

".local" suffix on local server (Manjaro)

I recently changed my Linux distribution (Manjaro) and my local domains defined in /etc/hosts and ending in .local no longer work.
# /etc/hosts
# Host addresses
127.0.0.1 localhost
127.0.0.1 hello.local
127.0.0.1 hello.domain.local
127.0.0.1 hello.other
$ ping hello.local
ping: hello.local: Name or service not known
$ ping hello.other
PING hello.other (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.012 ms
...
$ ping hello.domain.local
PING hello.domain.local (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.012 ms
...
It only fails for a direct sub-domain of the .local TLD: hello.local.
A sub-sub-domain such as hello.domain.local works,
and a TLD other than .local, such as hello.other, works too.
Where did I go wrong?
First, back up your nsswitch.conf:
sudo cp /etc/nsswitch.conf /etc/nsswitch.conf.bk
Then open /etc/nsswitch.conf and replace this line
hosts: mymachines mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] files myhostname dns
with
hosts: files mdns4_minimal [NOTFOUND=return] dns
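After the change, anything that resolves names through the system libraries should pick up hello.local from /etc/hosts again; besides ping, you could check from Java, for example (a hypothetical snippet, assuming the /etc/hosts entries above):

import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Resolves through the OS resolver, which consults /etc/nsswitch.conf,
        // so this should now print 127.0.0.1 for the /etc/hosts entry.
        InetAddress addr = InetAddress.getByName("hello.local");
        System.out.println(addr.getHostAddress());
    }
}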

On Ubuntu 18.04, i cannot access Tomcat from a browser using IP address

I've installed Tomcat 9 on Ubuntu 18.04 (a VM). I cannot access Tomcat using the IP address from a browser (or curl).
On the VM itself, Tomcat is running and curl http://1.2.3.4:8080 works.
But the same request from outside the VM does not:
l-OSX: hal$ curl https://10.51.253.163:8080 -v
* Rebuilt URL to: https://10.51.253.163:8080/
* Trying 10.51.253.163...
* connect to 10.51.253.163 port 8080 failed: Operation timed out
* Failed to connect to 10.51.253.163 port 8080: Operation timed out
Tomcat's server.xml
<Engine name="Catalina" defaultHost="10.51.253.163">
...
<Host name="10.51.253.163" appBase="webapps"
unpackWARs="true" autoDeploy="true">
UFW is inactive:
sudo ufw status verbose
Status: inactive
Ping to the VM works:
l-OSX: hal$ ping 10.51.253.163
PING 10.51.253.163 (10.51.253.163): 56 data bytes
64 bytes from 10.51.253.163: icmp_seq=0 ttl=58 time=111.914 ms
64 bytes from 10.51.253.163: icmp_seq=1 ttl=58 time=93.793 ms
Appreciate any help on this!
After some research and help from the IT support team, I was able to resolve this as follows:
VM > Manage Security
Add Security Rule
Allow Port: 8080 on Protocol: TCP
After that I was able to access Tomcat from the browser.
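Once the security rule is in place, you can also verify it from code. A small sketch (the class name is made up; note that the curl above used https:// against 8080, while Tomcat's default connector on that port speaks plain HTTP):

import java.net.HttpURLConnection;
import java.net.URL;

public class TomcatCheck {
    public static void main(String[] args) throws Exception {
        // Plain HTTP request against the default Tomcat connector port.
        URL url = new URL("http://10.51.253.163:8080/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}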

How to enable outbound connections for a Docker container?

I have an ASP.NET Core application that is hosted in the Docker cloud (the cloud provider is Azure). The application uses Hangfire to run recurring jobs in the background, and one of the jobs needs to request data from an external REST API. I noticed that any attempt at outbound communication fails, and I would like to know how I can enable it.
The deployment consists of some other containers, and linked containers (services) can communicate with no problem. There is no special network configuration; the default "bridge" mode is used. Do I need to configure something in the container's image, or do I need to make changes to the network settings? I have no clue.
There is no special network configuration; the default "bridge" mode
is used.
According to your description, it seems you are using a VM and running Docker on it.
If you want to access this Docker container from the Internet, you should map the container port to a host port, for example:
docker run -d -p 80:80 my_image service nginx start
After mapping port 80 on the VM, you should add an inbound rule to the Azure network security group (NSG); you can follow this article to add it.
You should also add port 80 to the OS firewall inbound rules.
Update:
Sorry for the misunderstanding.
Here is my test: I installed Docker on an Azure VM (Ubuntu 16), then created a CentOS container, like this:
root@jasonvms:~# docker run -i -t centos bash
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
d9aaf4d82f24: Pull complete
Digest: sha256:4565fe2dd7f4770e825d4bd9c761a81b26e49cc9e3c9631c58cfc3188be9505a
Status: Downloaded newer image for centos:latest
[root@75f92bf5b499 /]# ping www.google.com
PING www.google.com (172.217.3.100) 56(84) bytes of data.
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=1 ttl=47 time=7.93 ms
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=2 ttl=47 time=8.13 ms
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=3 ttl=47 time=8.15 ms
^C
--- www.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 7.939/8.076/8.153/0.121 ms
[root@75f92bf5b499 /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=1.88 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=1.89 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=51 time=1.86 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=51 time=1.87 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=51 time=1.78 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=51 time=1.87 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 1.783/1.861/1.894/0.061 ms
[root@75f92bf5b499 /]#
I find it can communicate with the Internet. Could you please share more information about your issue?
If you are using a standalone instance, then make changes in the instance's network security group and allow outbound rules.
If you are using ACS, follow the link below:
https://learn.microsoft.com/en-us/azure/container-service/dcos-swarm/container-service-enable-public-access

Pinging local domain returns unknown IPv6

I have a weird problem I can't solve due to my lack of knowledge.
I have a local server running Dnsmasq. On my computer (Windows 10) I have Acrylic DNS Proxy, which directs all requests ending with .local to the local server. It works great; however, one domain responds with an unknown IPv6 address.
> ping testdomain.local
Pinging TESTDOMAIN [IPv6 address] with 32 bytes of data:
Reply from IPv6 address: time<1ms
I can't figure out why testdomain.local is resolved as TESTDOMAIN. All my other local domains respond as expected:
> ping testdomain2.local
Pinging testdomain2.local [local server address] with 32 bytes of data:
Reply from local server address: bytes=32 time<1ms TTL=64

Why does Node.js/Express not accept connections from localhost?

I encountered this strange behavior today that I could not find a cause for. I am using macOS Sierra.
I have this code (Express):
app.server.listen(config.port, config.address, function () {
    logger.info('app is listening on', config.address + ':' + config.port);
});
And it prints
app is listening on 127.0.0.1:5000
However, if I try to curl it, it fails.
$ curl http://localhost:5000/api/ping
curl: (56) Recv failure: Connection reset by peer
I checked my hosts file:
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
So I ping localhost to make sure it resolves to 127.0.0.1:
$ ping localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.061 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.135 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.061/0.107/0.135/0.033 ms
I try again, but it fails
$ curl http://localhost:5000/api/ping
curl: (56) Recv failure: Connection reset by peer
Now I try to use 127.0.0.1 instead and, voilà, it works:
$ curl http://127.0.0.1:5000/api/ping
pong
What's wrong?
cURL is trying to connect via IPv6, but your Express server is listening on 127.0.0.1, which is IPv4 only.
You can force cURL to connect via IPv4 with the -4 option:
curl -4 http://localhost:5000/api/ping
