When do we get PORTMAP procedure GETADDR and when GETPORT - rpc

Can someone comment on how, and at which level, the portmap program version is decided?
In two different environments I am getting a different procedure and program version:
In case 1, I am getting program version 2 and procedure GETPORT.
In case 2, I am getting program version 4 and procedure GETADDR.
How can I ensure that GETPORT is called in every case, not GETADDR?

The confusion comes from the fact that you are probably using the same client to talk to two 'different' services: the portmap service, exposed as program #100000, version 2, and the rpcbind service, exposed as program #100000, version 4. The combination of program number and version is used to match the correct service. Thus, both can share the same TCP/UDP port but provide different services.
portmap
Returns the TCP/UDP port number of a registered program, like:
GETPORT {'program': '100003', 'version': 4} => 2049
rpcbind
Returns the universal address of a registered program, like:
GETADDR {'program': '100003', 'version': 4, 'netid': 'tcp'} => 0.0.0.0.8.1
A typical rpcbind service supports the portmap protocol as well.

Related

Spring Data GemFire Server java.net.BindException in Linux

I have a Spring Boot app that I am using to start a Pivotal GemFire CacheServer.
When I jar up the file and run it locally:
java -jar gemfire-server-0.0.1-SNAPSHOT.jar
It runs fine without issue. The server is using the default properties
spring.data.gemfire.cache.log-level=info
spring.data.gemfire.locators=localhost[10334]
spring.data.gemfire.cache.server.port=40404
spring.data.gemfire.name=CacheServer
spring.data.gemfire.cache.server.bind-address=localhost
spring.data.gemfire.cache.server.host-name-for-clients=localhost
If I deploy this to a CentOS distribution and run it with the same script, but passing the "test" profile:
java -jar gemfire-server-0.0.1-SNAPSHOT.jar -Dspring.profiles.active=test
with my test profile application-test.properties looking like this:
spring.data.gemfire.cache.server.host-name-for-clients=server.centralus.cloudapp.azure.com
I can see during startup that the server finds the Locator already running on the host (I start it through a separate process with Gfsh).
The server even joins the cluster for about a minute. But then it shuts down because of a bind exception.
I have checked to see if there is anything running on that port (40404), and nothing shows up.
EDIT
Apparently I DO get this exception locally - it just takes a lot longer.
It is almost instant when I start it up on the CentOS distribution. On my Mac it takes around 2 minutes before the process throws the exception:
Adding a few more images of this:
Two bash windows - left is monitoring GF locally and right I use to check the port and start the Java process:
The server is added to the cluster. Note the timestamp of 16:45:05.
Here is the server added and it appears to be running:
Finally, the exception after about two minutes - again look at the timestamp on the exception - 16:47:09. The server is stopped and dropped from the cluster.
Did you start other servers using Gfsh? That is, with a Gfsh command similar to...
gfsh>start server --name=ExampleGfshServer --log-level=config
Gfsh will start CacheServers listening on the default CacheServer port of 40404.
You have a few options.
1) First, you can disable the default CacheServer when starting a server with Gfsh like so...
gfsh>start server --name=ExampleGfshServer --log-level=config --disable-default-server
2) Alternatively, you can change the CacheServer port when starting other servers using Gfsh...
gfsh>start server --name=ExampleGfshServer --log-level=config --server-port=50505
3) If you are starting multiple instances of your Spring Boot, Pivotal GemFire CacheServer class, then you can vary the spring.data.gemfire.cache.server.port property by declaring it as a System property at startup.
For instance, you can, in the Spring Boot application.properties, do...
#application.properties
...
spring.data.gemfire.cache.server.port=${gemfire.cache.server.port:40404}
And then when starting the application from the command-line...
java -Dgemfire.cache.server.port=48484 -jar ...
Of course, you could just set the SDG property from the command line too...
java -Dspring.data.gemfire.cache.server.port=48484 -jar ...
Anyway, I guarantee that you have another process (e.g. a Pivotal GemFire CacheServer) running with a ServerSocket listening on port 40404. netstat -a | grep 40404 should give you better results.
Hope this helps.
Regards,
John

Round Robin for gRPC (nodejs) on kubernetes with headless service

I have 3 Node.js gRPC server pods and a headless Kubernetes service for the gRPC service (DNS returns all 3 pod IPs; tested with getent hosts from within a pod). However, all gRPC client requests always end up at a single server.
According to https://stackoverflow.com/a/39756233/2952128 (last paragraph), round robin per call should be possible as of Q1 2017. I am using grpc 1.1.2.
I tried to give {"loadBalancingPolicy": "round-robin"} as options for new Client(address, credentials, options) and to use dns:///service:port as the address. If I understand the documentation/code correctly, this should be handed down to the C core and use the newly implemented round-robin channel creation. (https://github.com/grpc/grpc/blob/master/doc/service_config.md)
Is this how the round-robin load balancer is supposed to work now? Is it already released in grpc 1.1.2?
After diving deep into the gRPC C-core code and the Node.js adapter, I found that it works by using the option key "grpc.lb_policy_name". Therefore, constructing the gRPC client with
new Client(address, credentials, {"grpc.lb_policy_name": "round_robin"})
works.
Note that in my original question I used round-robin instead of the correct round_robin.
I am still not completely sure how to set the serviceConfig from the service side with Node.js instead of using the client (channel) option override.
I'm not sure if this helps, but this discussion shows how to implement load balancing strategies via grpc.service_config.
const options = {
'grpc.ssl_target_name_override': ...,
'grpc.lb_policy_name': 'round_robin', // <--- has no effect in grpc-js
'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }), // <--- but this still works
};
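For a complete picture, here is a minimal client-construction sketch against @grpc/grpc-js and @grpc/proto-loader; the greeter.proto file, the example.Greeter service, and the headless-service DNS name are placeholders, not details from the question:
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Hypothetical proto: package example; service Greeter { ... }
const packageDef = protoLoader.loadSync('greeter.proto');
const proto = grpc.loadPackageDefinition(packageDef);

const options = {
  // grpc-js reads the policy from the service config (grpc.lb_policy_name has no effect here).
  'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }),
};

// The dns:/// scheme forces the DNS resolver, so round_robin gets one subchannel
// per pod IP returned by the headless service.
const client = new proto.example.Greeter(
  'dns:///my-grpc-service.default.svc.cluster.local:50051',
  grpc.credentials.createInsecure(),
  options
);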

Using memcached failover servers in nodejs app

I'm trying to set up a robust memcached configuration for a nodejs app with the node-memcached driver, but it does not seem to use the specified failover servers when one server dies.
My local experiment goes as follows:
In a shell:
memcached -p 11212

In node:
MC = require('memcached')
c = new MC('localhost:11211',                      // this process does not exist
           {failOverServers: ['localhost:11212']})
c.get('foo', console.log)  // this will eventually time out
c.get('foo', console.log)  // repeat 5 or 6 times to exceed the retries number
// wait until all the connection errors appear in the console
// at this point, the failover server should be in use
c.get('foo', console.log)  // this still times out :(
Any ideas of what we might be doing wrong?
It seems that the failover feature is somewhat buggy in node-memcached.
To enable failover you must set the remove option:
c = new MC('localhost:11211',                      // this process does not exist
           {failOverServers: ['localhost:11212'],
            remove: true})
Unfortunately, this is not going to work because of the following error:
[depricated] HashRing#replaceServer is removed.
[depricated] the API has no replacement
That is, when trying to replace a dead server with a replacement from the failover list, node-memcached outputs a deprecation error from the HashRing library (which, in turn, is maintained by the same author as node-memcached). IMHO, feel free to open a bug :-)
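For completeness, a sketch of a fuller client configuration; the option names (timeout, retries, failures, remove, failOverServers) are from the node-memcached README, and the values below are only examples:
var Memcached = require('memcached');

var c = new Memcached('localhost:11211', {
  timeout: 1000,                          // ms before a request is considered failed
  retries: 1,                             // socket allocation retries per request
  failures: 1,                            // failed attempts before a server is flagged dead
  remove: true,                           // remove dead servers from the hash ring
  failOverServers: ['localhost:11212']    // used once the primary is removed
});

c.get('foo', console.log);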
This happens when your Node.js server is not getting a session id from memcached.
Check that memcached is configured properly as the session handler in your php.ini file:
session.save_handler = memcache
session.save_path = "tcp://localhost:11212"

connect EADDRNOTAVAIL in nodejs under high load - how to faster free or reuse TCP ports?

I have a small wiki-like web application based on the Express framework which uses Elasticsearch as its back-end. For each request it basically only goes to the Elasticsearch DB, retrieves the object and returns it rendered by the Handlebars template engine. The communication with Elasticsearch is over HTTP.
This works great as long as I have only one Node.js instance running. After I updated my code to use the cluster module (as described in the Node.js documentation), I started to encounter the following error: connect EADDRNOTAVAIL
This error shows up when I have 3 or more Python scripts running which constantly retrieve some URL from my server. With 3 scripts I can retrieve ~45,000 pages; with 4 or more scripts running it is between 30,000 and 37,000 pages. Running only 2 or 1 scripts, I stopped them after half an hour, when they had retrieved 310,000 and 160,000 pages respectively.
I've found this similar question and tried changing http.globalAgent.maxSockets but that didn't have any effect.
This is the part of the code which listens for the URLs and retrieves the data from elastic search.
app.get('/wiki/:contentId', (req, res) ->
  http.get(elasticSearchUrl(req.params.contentId), (innerRes) ->
    if (innerRes.statusCode != 200)
      res.send(innerRes.statusCode)
      innerRes.resume()
    else
      body = ''
      innerRes.on('data', (bodyChunk) ->
        body += bodyChunk
      )
      innerRes.on('end', () ->
        res.render('page', {'title': req.params.contentId, 'content': JSON.parse(body)._source.html})
      )
  ).on('error', (e) ->
    console.log('Got error: ' + e.message) # the error is reported here
  )
)
UPDATE:
After looking more into it, I now understand the root of the problem. I ran the command netstat -an | grep -e tcp -e udp | wc -l several times during my test runs to see how many ports were used, as described in the post Linux: EADDRNOTAVAIL (Address not available) error. I could observe that at the time I received the EADDRNOTAVAIL error, 56677 ports were in use (instead of ~180 normally).
Also, when using only 2 simultaneous scripts, the number of used ports saturates at around 40,000 (+/- 2,000), which means ~20,000 ports are used per script (that is when Node.js cleans up old ports before new ones are created), while with 3 scripts running it breaches the 56677 ports (~60,000). This explains why it fails with 3 scripts requesting data, but not with 2.
So now my question changes to: how can I force Node.js to free up the ports quicker, or to reuse the same ports all the time (which would be the preferable solution)?
Thanks
For now, my solution is setting the agent of my request options to false. According to the documentation, this
opts out of connection pooling with an Agent, defaults request to Connection: close.
As a result, my number of used ports doesn't exceed 26,000. This is still not a great solution, especially since I don't understand why reusing ports doesn't work, but it solves the problem for now.
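For reference, a rough sketch of both directions, assuming Elasticsearch on its default localhost:9200 and a made-up document path; note that the keep-alive Agent option only exists in Node versions newer than the one this question was written against:
var http = require('http');

// (a) What is described above: opt out of pooling, every request closes its socket.
http.get({ host: 'localhost', port: 9200, path: '/wiki/some-id', agent: false }, function (res) {
  res.resume();
});

// (b) Port reuse instead: a keep-alive agent holds a handful of sockets open and reuses them,
// which avoids burning one ephemeral port per request.
var keepAliveAgent = new http.Agent({ keepAlive: true, maxSockets: 10 });
http.get({ host: 'localhost', port: 9200, path: '/wiki/some-id', agent: keepAliveAgent }, function (res) {
  res.resume();
});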

Start/Stop/Restart OS *inux services using Node.js?

Sadly, we don't use Node extensively at my work (yet anyway). However, I wanted to write a small "watcher" app in Node that would perform some very basic health checks on the local server, and if a problem is detected I want this watcher app to restart the associated service.
For example, I want to check Apache every 5 minutes to ensure HTTP is available. If not, I'd like the app to restart httpd as well as the application server process and send an alert. I just can't find anything on the internet that discusses restarting OS-level services using Node... but with all the posts on "how to daemonize Node" I could have missed something in the clutter.
Any help/direction would be appreciated.
My particular setup:
OS: CentOS 6.3
Node: v0.10.19
As Brad pointed out, http://nodejs.org/api/child_process.html was indeed the answer.
In short:
var exec = require( 'child_process' ).exec
exec( 'service httpd stop', function( error, stdout, stderr ){ } );
But please read the docs. :)
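Tying it back to the watcher idea from the question, a minimal sketch could look like the following; the health-check URL, the 5-minute interval, the timeout, and the restart command are assumptions to adapt to your own setup:
var http = require('http');
var exec = require('child_process').exec;

function restartHttpd() {
  exec('service httpd restart', function (error, stdout, stderr) {
    if (error) {
      console.error('restart failed: ' + error.message);   // hook your alerting in here
    } else {
      console.log('httpd restarted');
    }
  });
}

function checkApache() {
  var req = http.get('http://localhost/', function (res) {
    res.resume();                                           // drain the response body
    if (res.statusCode >= 500) {                            // treat server errors as "down"
      restartHttpd();
    }
  });
  req.on('error', restartHttpd);                            // connection refused, reset, etc.
  req.setTimeout(10000, function () { req.abort(); });      // no answer in 10s counts as down
}

setInterval(checkApache, 5 * 60 * 1000);                    // check every 5 minutes
checkApache();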
