Hazelcast and Spring Session: REST API returns empty values

I am trying to integrate Spring Session and Hazelcast, using a very simple configuration:
com.hazelcast.config.Config cfg = new com.hazelcast.config.Config();
NetworkConfig netConfig = new NetworkConfig();
netConfig.setPort(SocketUtils.findAvailableTcpPort());
System.out.println("Hazelcast port #: " + netConfig.getPort());
cfg.setNetworkConfig(netConfig);
SerializerConfig serializer = new SerializerConfig()
        .setTypeClass(Object.class)
        .setImplementation(new ObjectStreamSerializer());
cfg.getSerializationConfig().addSerializerConfig(serializer);
return Hazelcast.newHazelcastInstance(cfg);
It is from the Spring docs example. Everything is OK, but when I try to get a session from Hazelcast with its REST API, I get an empty reply:
$ curl -X GET http://localhost:port/hazelcast/rest/maps/spring:session:sessions/session-id
curl: (52) Empty reply from server
Here port is the port selected with SocketUtils.findAvailableTcpPort() and session-id is the session ID from the browser.
How can I access my saved sessions with the Hazelcast REST API?
Update:
After adding cfg.setProperty("hazelcast.rest.enabled", "true"); all problems disappeared.

You have to activate the REST API service, which is disabled by default (for security reasons). Please see http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#system-properties and search for hazelcast.rest.enabled.
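For reference, here is a minimal sketch of the same configuration with the REST service switched on (assuming Hazelcast 3.x, where hazelcast.rest.enabled is the relevant system property; the port selection and map name are taken from the question above):
com.hazelcast.config.Config cfg = new com.hazelcast.config.Config();
// The embedded REST service is disabled by default for security reasons.
cfg.setProperty("hazelcast.rest.enabled", "true");
NetworkConfig netConfig = new NetworkConfig();
netConfig.setPort(SocketUtils.findAvailableTcpPort());
cfg.setNetworkConfig(netConfig);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
// The session map can then be queried over HTTP, e.g.:
// curl http://localhost:<port>/hazelcast/rest/maps/spring:session:sessions/<session-id>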

Related

Connection configuration loops - Prosys OPC UA Client

I'm using the sample code from the documentation and I'm trying to connect to the server using the Prosys OPC UA Client. I have tried opcua-commander and Integration Objects' OPC UA client, and it looks like the server works just fine.
Here's what is happening:
1. After entering the endpoint URL, the client appends urn:NodeOPCUA-Server-default to the URL.
2. The client asks to specify security settings.
3. The client asks to choose a server - only one option, urn:NodeOPCUA-Server-default.
4. It then goes back to steps 2 and 3 over and over.
If I just minimize the Prosys client without closing the configuration, after some time I get this in the terminal:
Server: closing SESSION new ProsysOpcUaClient Session15 because of timeout = 300000 has expired without a keep alive
channel = ::ffff:10.10.13.2 port = 51824
I have tried this project and it works -> node-opcua-htmlpanel. What's missing in the sample code then?
After opening the debugger, I noticed that each time I select security settings and hit OK, server_publish_engine reports:
server_publish_engine:179 Cancelling pending PublishRequest with statusCode BadSecureChannelClosed (0x80860000) length = 0
This is due to a specific interoperability issue that was introduced in node-opcua#0.2.2. This will be fixed in the next version of node-opcua. The resolution can be tracked here: https://github.com/node-opcua/node-opcua/issues/464
The issue has been handled at the Prosys OPC Forum:
The error happens because the server sends different EndpointDescriptions in GetEndpointsResponse and CreateSessionResponse.
In GetEndpoints, the returned EndpointDescriptions contain TransportProfileUri=http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary. In CreateSessionResponse, the corresponding TransportProfileUri is empty.
In principle, the server application is not working according to the specification. Part 4 of the OPC UA specification states that “The Server shall return a set of EndpointDescriptions available for the serverUri specified in the request. … The Client shall verify this list with the list from a DiscoveryEndpoint if it used a DiscoveryEndpoint to fetch the EndpointDescriptions. It is recommended that Servers only include the server.applicationUri, endpointUrl, securityMode, securityPolicyUri, userIdentityTokens, transportProfileUri and securityLevel with all other parameters set to null. Only the recommended parameters shall be verified by the client.”

How to set a proxy user for a Livy job submitted through its Java API

I am using Livy's Java API to submit a Spark job on YARN on my cluster. Currently the jobs are being submitted as the 'livy' user, but I want to submit the job as a proxy user from Livy.
It is possible to do this by sending a POST request to the Livy server and passing a field in the POST data. I was wondering whether this could also be done through Livy's Java API.
I am using the standard way to submit a job:
LivyClient client = new LivyClientBuilder()
        .setURI(new URI(livyUrl))
        .build();
try {
    System.err.printf("Uploading %s to the Spark context...\n", piJar);
    client.uploadJar(new File(piJar)).get();
    System.err.printf("Running PiJob with %d samples...\n", samples);
    double pi = client.submit(new PiJob(samples)).get();
    System.out.println("Pi is roughly: " + pi);
} finally {
    client.stop(true);
}
Posting an answer to my own question.
Currently there is no way to set the proxy user through the LivyClientBuilder.
A workaround for this is:
1. Create the session through the REST API (a POST request to < livy-server >/sessions/) and read the session ID from the response. The proxy user can be set via the REST API by passing it in the POST data: {"kind": "spark", "proxyUser": "lok"}
2. Once the session is created, connect to it using the ID via LivyClientBuilder (the livyUrl would be < livy-server >/sessions/< id >/).
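A minimal sketch of that workaround, assuming an Apache Livy release (package org.apache.livy; older Cloudera builds use com.cloudera.livy), the default Livy port 8998, and the proxyUser value from the POST data above. The regex used to pull out the session id is only to keep the sketch short; a real JSON parser should be used:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URL;
import org.apache.livy.LivyClient;
import org.apache.livy.LivyClientBuilder;

String livyUrl = "http://livy-server:8998";

// Step 1: create the session via the REST API, passing proxyUser in the POST data.
HttpURLConnection conn = (HttpURLConnection) new URL(livyUrl + "/sessions/").openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/json");
conn.setDoOutput(true);
try (OutputStream os = conn.getOutputStream()) {
    os.write("{\"kind\": \"spark\", \"proxyUser\": \"lok\"}".getBytes("UTF-8"));
}

// Step 2: read the response body and extract the session id.
StringBuilder body = new StringBuilder();
try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
    for (String line; (line = in.readLine()) != null; ) {
        body.append(line);
    }
}
String sessionId = body.toString().replaceAll(".*\"id\"\\s*:\\s*(\\d+).*", "$1");

// Step 3: attach to the existing session through the Java API and submit jobs as usual;
// they now run as the proxy user.
LivyClient client = new LivyClientBuilder()
        .setURI(new URI(livyUrl + "/sessions/" + sessionId + "/"))
        .build();
try {
    // client.uploadJar(...), client.submit(...), etc.
} finally {
    client.stop(false); // leave the session running; stop(true) would shut it down
}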

Round Robin for gRPC (nodejs) on kubernetes with headless service

I have 3 Node.js gRPC server pods and a headless Kubernetes service for the gRPC service (it returns all 3 pod IPs via DNS, tested with getent hosts from within a pod). However, all gRPC client requests always end up at a single server.
According to https://stackoverflow.com/a/39756233/2952128 (last paragraph), round robin per call should be possible as of Q1 2017. I am using grpc 1.1.2.
I tried passing {"loadBalancingPolicy": "round-robin"} as options to new Client(address, credentials, options) and using dns:///service:port as the address. If I understand the documentation/code correctly, this should be handed down to the C core and use the newly implemented round-robin channel creation. (https://github.com/grpc/grpc/blob/master/doc/service_config.md)
Is this how the round-robin load balancer is supposed to work now? Is it already released with grpc 1.1.2?
After diving deep into the gRPC C core code and the Node.js adapter, I found that it works by using the option key "grpc.lb_policy_name". Therefore, constructing the gRPC client with
new Client(address, credentials, {"grpc.lb_policy_name": "round_robin"})
works.
Note that in my original question I used round-robin instead of the correct round_robin.
I am still not completely sure how to set the serviceConfig from the service side with Node.js instead of using a client (channel) option override.
I'm not sure if this helps, but this discussion shows how to implement load balancing strategies via grpc.service_config.
const options = {
  'grpc.ssl_target_name_override': ...,
  'grpc.lb_policy_name': 'round_robin', // <--- has no effect in grpc-js
  'grpc.service_config': JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }), // <--- but this still works
};

Jetty SPNEGO/SSO gives NPE. Expected cause krb5.ini?

We're facing an issue where Jetty SPNEGO gives an NPE inside SpnegoLoginService.login().
The gssContext.getSrcName() call returns null.
The SPN is: HTTP/machine.dd.aa.net@EE.AA.NET
Must there be a special setup in the krb5.ini file when dd.aa.net != EE.AA.NET?
The only clue I found with Google is this warning message from some online source code:
if (gssContext.isEstablished()) {
    if (gssContext.getSrcName() == null) {
        log.warn("GSS Context accepted, but no context initiator recognized. Check your kerberos configuration and reverse DNS lookup configuration");
        return false;
    }
Our client setup is:
Internet Explorer browser, set up for Negotiate/SPNEGO
login using a Windows SmartCard
Our server setup is:
Java 8u45
Jetty 9
using org.eclipse.jetty.security.SpnegoLoginService
We used Java kinit on the server to validate against the keytab and also against the DC, which went OK. The reverse DNS zones are also working.
Is there a possibility that the 'service request token' generated by the client browser (logged in with a smartcard) doesn't supply the context initiator / client principal name?
Thanks
The NullPointerException was gone when we went from Java 1.8u45 to Java 1.8u60.
It turns out the server side didn't check all tickets provided by the client, so it didn't find the correct one.
Below is the bug entry:
[JDK-8078439] SPNEGO auth fails if client proposes MS krb5 OID

ServiceStack Facebook Authentication NullReference Exception on Vagrant Box (Ubuntu/MySql/Mono/nginx)

Long shot I guess, given the lack of real information that I am offering at this stage. I'll gladly offer up some more details on how to reproduce the issue - but I wanted some fast feedback to see if there was a gotcha somewhere I was missing.
I have a simple ServiceStack hello world application, in which I'm playing with the Facebook Auth Provider:
Vanilla ServiceStack
Vanilla Facebook Auth Provider
Vanilla User Session
Vanilla OrmLite User Repository
Vanilla OrmLite MySql Db Factory
When debugging on my local machine - on Windows 7 (and 8) - everything works a treat. The service launches, the database tables are created, and I can log in via Facebook; records are inserted into the relevant tables.
When running the service on Ubuntu inside a Vagrant box (with VirtualBox as the virtualization provider, hosted on nginx with mono-fastcgi), the service launches correctly and I can see that the tables are created in the MySQL database. When I hit /auth/facebook I am correctly forwarded to Facebook - but I hit an error when the callback to the service occurs.
This is the current output:
[Auth: 07/30/2013 13:02:47]: [REQUEST: {provider:facebook}] System.NullReferenceException: Object reference not set to an instance of an object at
ServiceStack.ServiceInterface.Auth.FacebookAuthProvider.Authenticate (ServiceStack.ServiceInterface.IServiceBase,ServiceStack.ServiceInterface.Auth.IAuthSession,ServiceStack.ServiceInterface.Auth.Auth) <0x0061e> at
ServiceStack.ServiceInterface.Auth.AuthService.Authenticate (ServiceStack.ServiceInterface.Auth.Auth,string,ServiceStack.ServiceInterface.Auth.IAuthSession,ServiceStack.ServiceInterface.Auth.IAuthProvider) <0x000a7> at
ServiceStack.ServiceInterface.Auth.AuthService.Post (ServiceStack.ServiceInterface.Auth.Auth) <0x00303> at
ServiceStack.ServiceInterface.Auth.AuthService.Get (ServiceStack.ServiceInterface.Auth.Auth) <0x00013> at (wrapper dynamic-method) object.lambda_method (System.Runtime.CompilerServices.Closure,object,object) <0x0004f> at
ServiceStack.ServiceHost.ServiceRunner`1<ServiceStack.ServiceInterface.Auth.Auth>.Execute (ServiceStack.ServiceHost.IRequestContext,object,ServiceStack.ServiceInterface.Auth.Auth) <0x00416>
It is clearly reaching the service (which I'm accessing via localhost:8080, which maps through to the guest machine on port 80), as the error is wrapped nicely in ServiceStack output.
I don't suppose anyone has any clues?
Okay, after an evening of investigation, I've found the root cause.
Line 51 of FacebookAuthProvider.cs calls off to line 28 of WebRequestExtensions.cs, which in turn calls line 227 of WebRequestExtensions.cs.
This method call fails at around line 255, essentially because Mono doesn't trust any SSL certificates by default, as explained here.
Instead of figuring out the correct configuration for Mono, I've taken the nasty route (for the time being at least) of using the following line in my AppHostBase.Configure implementation:
F#
System.Net.ServicePointManager.ServerCertificateValidationCallback <- new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)
C#
System.Net.ServicePointManager.ServerCertificateValidationCallback += (a, b, c, d) => { return true; };
I am now up and running (like a fully-operational Death Star).
