Invoking Worklight adapter from non-Worklight applications - security

When deploying adapters (be it HTTP, SQL, JMS or CastIron) to the Worklight Server on WebSphere Application Server, I believe we can invoke the adapters externally from any non-Worklight application as below.
http://localhost:8080/invoke?adapter=ADAPTER_NAME&procedure=PROCEDURE_NAME&parameters=[PARAMETER1,PAREMETER2,...]
As noticed from this thread:
https://www.ibm.com/developerworks/forums/thread.jspa?threadID=453422
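For illustration, any HTTP client that can reach the server can call such a procedure; here is a minimal TypeScript sketch of the pattern above (the host, port, adapter, procedure and parameter values are placeholders):
```typescript
// Minimal sketch: invoking a Worklight adapter procedure from a non-Worklight app.
// The /invoke query-string format follows the URL pattern shown above; all names
// and the host/port are placeholders.
async function invokeAdapterProcedure(): Promise<unknown> {
  const query = new URLSearchParams({
    adapter: "ADAPTER_NAME",
    procedure: "PROCEDURE_NAME",
    parameters: JSON.stringify(["PARAMETER1", "PARAMETER2"]),
  });
  const response = await fetch(`http://localhost:8080/invoke?${query.toString()}`);
  if (!response.ok) {
    throw new Error(`Adapter invocation failed with HTTP ${response.status}`);
  }
  return response.json(); // the adapter's result comes back as JSON
}
```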
What are the pros and cons of using this approach? Is it really recommended?

Advantages:
It's easy to access from multiple applications: you just call the adapter URL and pass the parameters.
Disadvantages:
The enabled authentication frameworks are easy to bypass.
Workaround:
I faced the same situation and I overcame it by injecting my custom listeners on the server. They listen to every request and, based on my criteria, forward it either to the adapter or to the Worklight app. In this way I can prevent outside access.
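As a rough illustration of that filtering idea (the workaround above uses custom listeners inside the application server; this sketch instead expresses the same concept as a Node/Express gateway placed in front of the Worklight server, with a purely hypothetical allow-list):
```typescript
import express from "express";

// Sketch only: a front gateway that blocks direct external calls to /invoke.
// The allow-list and the forwarding step are hypothetical; the real workaround
// described above is implemented as server-side listeners.
const gateway = express();
const ALLOWED_CLIENTS = new Set(["10.0.0.5", "10.0.0.6"]); // internal app servers (example values)

gateway.use("/invoke", (req, res, next) => {
  if (!ALLOWED_CLIENTS.has(req.ip ?? "")) {
    res.status(403).send("Direct adapter invocation is not allowed from this origin");
    return;
  }
  next(); // request is allowed; forward it on to the adapter / Worklight app
});

gateway.listen(9080); // the Worklight server would be reachable only behind this gateway
```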
There is another way: use a custom authentication model.
http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v506/08_04_Custom_Authenticator_and_Login_Module.pdf
http://www.ibm.com/developerworks/mobile/worklight/getting-started.html

Ease of use is the biggest pro and security is the biggest con.
To be able to invoke a procedure in that fashion, your adapter must be free of any security tests (wl_unprotected). If your Worklight host and port are open to the internet (which is very likely), anyone having a whiff of the adapter name, procedure name etc. can invoke your adapter.

Related

Integrating real-time components into REST backend

I am implementing a product that will be accessible via web and mobile clients, and I am doing thorough research to make sure that I have chosen a good set of tools before I begin. For the front end, I am using AngularJS (AngularJS + angular-ui on the web, Ionic + Cordova on mobile), and because I want a single backend serving all types of clients, I plan on implementing a RESTful service (likely one that accepts and returns JSON data). I am leaning towards using Mongo, Node, and Express to create this RESTful API, but am open to suggestions on that front.
But the sticking point for me right now is this: certain parts of the application (including, for example, a live chat/messaging section) need to be real-time. I am aware of the various technologies and protocols for implementing real-time web services (webhooks, websockets, long polling, etc.) and the libraries and frameworks that implement them and expose that functionality (SockJS, Socket.io, etc.) and I want to be clear that I am not asking one of those "what is the best framework" types of questions.
My question is rather about the correct way to implement these two kinds of services side-by-side. Should I be serving the chat separately from the rest of the application? Or is there a clean way to integrate these two different protocols into the same application?
The Express framework is quite modular, so it can sit side by side with a WebSocket module if you wish. The most common reason for doing this is to share authentication routines across HTTP and WebSockets by using the same session store in both modules.
For example, you would authenticate a user over HTTP with Express when they log in, which grants access to your chat application. From then on you take advantage of the fast, real-time WebSocket protocol: in your server code you check the cookie that the client sends with the socket handshake and verify that it corresponds to a session that was authenticated earlier.
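A minimal sketch of that pattern, assuming Express with express-session and Socket.IO (the exact hook for re-using the session middleware varies between Socket.IO versions; the route, secret and session fields are placeholders):
```typescript
import express from "express";
import session from "express-session";
import { createServer } from "http";
import { Server } from "socket.io";

// One session middleware instance, shared by the HTTP login route and the
// WebSocket handshake, so both sides read the same session store.
const sessionMiddleware = session({ secret: "change-me", resave: false, saveUninitialized: false });

const app = express();
app.use(sessionMiddleware);

app.post("/login", (req, res) => {
  // ...verify credentials here, then mark the session as authenticated...
  (req.session as any).userId = "user-42"; // placeholder user id
  res.sendStatus(204);
});

const httpServer = createServer(app);
const io = new Server(httpServer);

// Run the same session middleware during the Socket.IO handshake so the cookie
// the client sends with the socket maps back to the HTTP session created at login.
io.use((socket, next) => {
  sessionMiddleware(socket.request as any, {} as any, next as any);
});

io.on("connection", (socket) => {
  const request = socket.request as any;
  if (!request.session?.userId) {
    socket.disconnect(true); // no authenticated HTTP session behind this socket
    return;
  }
  socket.on("chat", (msg) => io.emit("chat", msg)); // relay chat messages to everyone
});

httpServer.listen(3000);
```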
Many websites use WebSockets for chat or other push updates, and a separate RESTful API over AJAX, delivered to the same page. There are great reasons to leave the RESTful parts as they are, particularly if caching is an issue: WebSockets won't benefit from web caches outside your servers. WebSockets are better suited for chat on any modern browser, trading a small keep-alive for a reconnecting long-poll. So two separate interfaces add a little complexity that you may benefit from when scaling and cost-per-user are considered.
If your app grows enough to require this scaling, you'll find this actually simplifies things greatly--clients in the same chat groups can map to the same server, and a load balancer can distribute RESTful calls appropriately.
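To make the split concrete, a single page can consume both interfaces side by side (browser-side sketch; the URLs and message shapes are hypothetical):
```typescript
// REST over fetch for regular, cache-friendly reads; a WebSocket for chat push.
const historyPromise: Promise<unknown> = fetch("/api/messages?room=general")
  .then((res) => res.json()); // RESTful call, can be cached and load-balanced

const chatSocket = new WebSocket("wss://example.com/chat");
chatSocket.onmessage = (event) => {
  const message = JSON.parse(event.data);
  console.log("new chat message", message); // render into the chat UI
};
chatSocket.onopen = () => {
  chatSocket.send(JSON.stringify({ room: "general", text: "hello" }));
};
```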
If you are looking for one communication protocol to serve both needs (calling the server from the client, as well as pushing data from the server), you might have a look at WAMP.
WAMP is an open WebSocket subprotocol that provides two application messaging patterns in one unified protocol: Remote Procedure Calls + Publish & Subscribe.
If you want to dig a little deeper, this describes the why, the motivation and the design. WAMP has multiple implementations in different languages.
Now, if you want to stick to REST, then you cannot integrate push at the protocol level (since REST simply does not have that), but only at the "framework level". You need a second protocol. The options are:
WebSocket
Server Sent Events (SSE)
HTTP Long-Poll
SSE in a way could be a good complement to REST. However, it's unsupported on IE (not even IE11), and it's unclear if it ever will be.
WebSocket obviously works, but then why not have it all running over WebSocket? (This line of thinking leads to WAMP).
So IMO the natural complement to REST would be some HTTP long-poll based mechanism for simulating push. You can make HTTP long-poll work robustly, but you will then have to live with the inefficiencies and limitations of HTTP for use cases like this.
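A browser-side sketch of such a long-poll loop (the /poll endpoint, the cursor parameter and the response shape are assumptions; the server is assumed to hold each request open until it has new events or a timeout expires):
```typescript
// Repeatedly ask the server for new events, reconnecting after each response
// or failure; the cursor lets the server resume from where the client left off.
async function longPoll(onEvents: (events: unknown[]) => void): Promise<void> {
  let cursor = "0";
  for (;;) {
    try {
      const res = await fetch(`/poll?cursor=${encodeURIComponent(cursor)}`);
      if (res.status === 204) continue;        // server timed out with no events; re-poll
      if (!res.ok) throw new Error(`poll failed: ${res.status}`);
      const body = await res.json();
      cursor = body.cursor;                    // advance so no events are missed between polls
      onEvents(body.events);
    } catch {
      await new Promise((r) => setTimeout(r, 2000)); // back off briefly, then reconnect
    }
  }
}
```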
You could use a hosted real-time messaging (and even storage) service and integrate it into your frontend apps (web and mobile). These services leverage the websocket protocol and normally include HTTP Comet fallbacks.
The cool thing is that you don't need to manage the underlying infrastructure in terms of high availability and scalability, and can focus only on developing a great app.
I work for Realtime, so I'm a bit biased, but I think the Realtime Framework could help you. More at http://framework.realtime.co

Can I use JMeter for testing the performance of an application on an IIS server?

I'm using an IIS server for my web application. Can I use JMeter to test the performance of the application on the IIS server?
JMeter supports any web application and doesn't care about the underlying back-end technology, as it operates at the HTTP protocol level, basically sending requests and waiting for responses.
For ASP.NET applications (unless you are testing another application type via ISAPI), there can be some challenges connected with mandatory dynamic parameters such as VIEWSTATE and EVENTVALIDATION. See the ASP.NET Login Testing with JMeter guide for workarounds for common situations.

Using ServiceClient in an optimal way

I have a service that exposes a JSON-over-HTTP API (using ServiceStack), and now I am writing a .NET client (DLL) that abstracts away this API to provide a domain-specific object abstraction on top of it. This client will be used by apps that need to hit the service a lot, so striving for high throughput and low latency is important. I've done quite a bit of optimization work on the service to get it to acceptable levels of throughput and latency (measured using JMeter). Naturally, I would like to make the client as fast as possible.
I would like to use ServiceStack's ServiceClient library to handle the communication with the service and have a couple of questions:
Is the JsonServiceClient thread safe?
Does the JsonServiceClient (or any of the *ServiceClientBase children) do some sort of connection pooling when they talk to HTTP v1.1 servers? Or do they open a separate connection for each request they are asked to perform?
I would also appreciate any suggestions on how to use ServiceClient or perhaps maybe another library in order to make the communication with the service optimal (client-side caching is part of my plan so that's already in the works).

Client-server communication on same machine via file or socket

I have a client-server relationship between two apps: a web application and an OCX. What I want to do is have the client part of the web application, running on the local PC, communicate with the OCX, which is also installed on the same PC. The server app (the OCX) is not mine (I can't change its source code) and offers two ways of communicating with client apps: through an intermediate file or through a socket.
There are a lot of restrictions on the PCs where the apps have to run (the users, for example, are not administrators of their own PCs), so it's even more difficult than it seems. My question is which technology would be better for handling this communication from the client app (JavaScript, Java applets, another OCX, etc.), and which option (file or sockets) could be handled more easily by these technologies. I would also like to know which security and permission settings should be taken into account to make it all work properly.
Note that, if using an intermediate file, I must be able to write to specific positions in that file from the web app (I'm not sure whether JavaScript's FileSystemObject can do this, for example). Thanks in advance.
Working with sockets is really easy. I just don't know the security options for sockets. Maybe you can take a look here: Oracle Sockets

WCF and client communication on a self hosted WCF service

I am new to WCF services. I have been working with WCF for over two months now and love its capabilities. I am using a self-hosted WCF service in a Windows Service. The binding is netTCP because the client and service are on the same machine. My communication is duplex and I am using a WCF session. With these features, one of the design needs of my application is that the UI should always be connected to the service, so I am using a separate thread in my UI to continuously poll the connection status and re-create and open the channel in case it goes into a faulted state. Since I have async callbacks from the service, the client should always be connected. Here are a couple of questions:
Is it OK to use the self-hosting technique given that the client and service are on the same machine? I used WCF for ease of inter-process communication.
Does it make sense to keep this keep-alive thread on the client, or should I be using some other technique?
I want to get better at using and configuring WCF. Is there a good book or online reading material on self-hosted WCF services?
Please advise.
Thanks,
Subbu
I think it's absolutely fine to use self-hosting with WCF. I've implemented many services that are hosted in a Windows Service for example.
I'm assuming that you're talking about client and server being hosted in different processes on the same machine? If so, then ideally you should use binary over named pipes in your bindings.
If client and server and physically in the same process, then you might consider using something like Roman Kiss's Null Transport to reduce the serialization overhead. His CodeProject article can be found here: http://www.codeproject.com/KB/WCF/NullTransportForWCF.aspx
To answer point 2, I've suggested an alternative approach in my answer to another Stack Overflow question: WCF net.tcp server disconnects - how to handle properly on client side?
Hope this helps.
