OpenLDAP: wait for latest search request of client - search

We have a customer with an OpenLDAP directory connected to a PostgreSQL backend. The third-party phone client they use searches the LDAP directory for company contacts.
While the user types a name into the client's search field, the client sends a new search request to the OpenLDAP server with nearly every additional letter, but only after it has received the results of the previous search.
This slows down the search.
If you search for "someone", the client sends this:
cn=s* -- waits for the result...
cn=so* -- waits for the result...
cn=some* -- waits for the result...
cn=someone* -- waits for the result...
It can take up to 30 seconds until the customer sees the result of the search. The maximum number of results is set to 50 in the client (and the customer doesn't want to change this), and there is no option to delay the search in the client.
So my question is whether there is an option (or middleware, or something similar) which can force the OpenLDAP server to wait for the latest search request of a client...
Thanks a lot.

No, you cannot force the LDAP server to wait, and it wouldn't make sense. The UI is the one sending the requests, too soon or too frequently.
The server receives a request and doesn't know whether another one will follow. It just starts processing and tries to return the result as soon as possible.

Related

How to use socket.io properly with express app

I wonder how to use socket.io properly with my express app.
I have a REST API written in express/node.js and I want to use socket.io to add real-time features to my app. Consider that I want to do something I can already do just by sending a request to my REST API. What should I do with socket.io? Should I send a request to the REST API and send the socket.io client the result of the process, or handle the whole process within a socket.io emitter and then send the result to the socket.io client?
Thanks in advance.
The question is not entirely clear, but from what I gather, you want to know what you would use socket.io for that you can't already do with your current API.
The short answer is: well, nothing really. WebSockets are just the natural progression of APIs and the need for a more 'real-time' interface between systems.
Older methods (still used and relevant for the right use case) include long polling, where you keep checking back with the server for updated items and, if there are any, grab them. This works, but it can be expensive in terms of establishing a connection, performing a lookup, and then closing the connection.
WebSockets keep that connection open, allowing both the client and server to communicate in real time. For example, let's say you make an update to your backend data and want users to get that update. Using long polling, you would rely on each client to ping back to the server, check if there is an update, and grab it if so. This can cause lags between updates: some users have updated data while others do not, and so on.
Now take the same scenario with WebSockets: you make an update to the backend data and hit submit; this emits to your socket server. The socket server takes the call, performs the task (grabs the updated data) and emits it to the users, and each connected user instantly gets that update.
Socket servers are typically used for things like real-time chat or polling, where packets are smaller, but they are also used for web games and more. The size of your payloads will determine how best to send data back and forth, because the larger the payload, the more resources and bandwidth it takes on the socket server, so that's something to consider.
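To make that concrete, here is a minimal sketch, assuming Express with Socket.IO attached to the same HTTP server; the /items route, the updateItems() helper and the "itemsUpdated" event name are illustrative assumptions, not anything from the question:

```js
// Minimal sketch: Express REST endpoint that also pushes the update to all connected clients.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.use(express.json());

// Stand-in for your existing persistence logic (hypothetical).
async function updateItems(payload) { return payload; }

// Regular REST endpoint: update the data, then push the result to every connected client.
app.post('/items', async (req, res) => {
  const updated = await updateItems(req.body);
  io.emit('itemsUpdated', updated); // all connected users get the update instantly
  res.json(updated);
});

server.listen(3000);
```

Because the emit happens inside the normal REST handler, existing API consumers keep working unchanged while connected socket clients get the update pushed to them.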

Sending a response after jobs have finished processing in Express

So, I have an Express server that accepts a request. The request triggers web scraping that takes 3-4 minutes to finish. I'm using Bull to queue the jobs and process them as and when they are ready. The challenge is to send the results from the processed jobs back as the response. Is there any way I can achieve this? I'm running the app on Heroku, but Heroku has a request timeout of 30 seconds.
You don't have to wait until the back end has finished. When the request arrives, identify who is requesting: authenticate the user. Then do a res.status(202).send({ message: "text" });
Even though the response has been sent to the client, you can keep processing and doing other work.
NOTE: Do not put a return keyword before res.status...
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way for the HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
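A minimal sketch of that "respond with 202, keep working" advice, using Bull as in the question; the /scrape route, the authenticate() middleware and the scrapeSite() worker are hypothetical stand-ins, not the poster's real code:

```js
const express = require('express');
const Queue = require('bull');

const app = express();
app.use(express.json());

const scrapeQueue = new Queue('scrape', 'redis://127.0.0.1:6379');

// Stand-in auth middleware: identify who is requesting before accepting the job.
function authenticate(req, res, next) { next(); }

app.post('/scrape', authenticate, async (req, res) => {
  const job = await scrapeQueue.add({ url: req.body.url });
  // Respond immediately (note: no `return` before this); processing continues below.
  res.status(202).send({ message: 'Accepted', jobId: job.id });
});

// The worker runs outside the request/response cycle, so the 30s limit doesn't apply to it.
scrapeQueue.process(async (job) => {
  return scrapeSite(job.data.url); // the 3-4 minute scraping work
});

// Stand-in for the real scraper (hypothetical).
async function scrapeSite(url) { return { url }; }

app.listen(process.env.PORT || 3000);
```

The response goes out as soon as the job is queued, so Heroku's 30-second limit only applies to the enqueue step, not to the scraping itself.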
You always need to send a response immediately because of the timeout. Since your process takes about 3-4 minutes, it is better to send a response immediately saying that the request was received successfully and will be processed.
Then, when the task is completed, you can use socket.io or WebSockets to notify the client from the server side. You can also pass the result along with that notification.
The client side can also check continuously whether the job has completed on the server side; this is called polling and is required with older browsers which don't support WebSockets. socket.io falls back to polling when browsers don't support WebSockets.
Visit socket.io for more information and documentation.
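On the client side, a sketch of what that notification listener could look like; the server URL and the "jobCompleted" event name are assumptions:

```js
// Browser-side sketch: listen for a completion event pushed by the server once the job is done.
// Socket.IO transparently falls back to polling on browsers without WebSocket support.
import { io } from 'socket.io-client';

const socket = io('https://your-app.example.com');

socket.on('jobCompleted', (result) => {
  // Update the UI with the processed result.
  console.log('Job finished:', result);
});
```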
The best approach to this problem is the socket.io library. It can send data to the client whenever you want, and it triggers a function on the client side which receives the data. Socket.io supports different languages and is really easy to use.
Create a jobs table in a database or persistent storage like Redis.
Save each job in the table upon request with a unique id.
Update the status to "running" when starting the job.
Send HTTP 202 Accepted.
On the client, implement a polling script; on the server, implement a job-status route/API. The API accepts a job id, queries the jobs table and responds with the status.
When the job is finished, update the jobs table with status "completed"; when the job errors, update the jobs table with status "failed" and perhaps add a description column to store the cause of the error.
This solution makes your system horizontally scalable and distributed. It also prevents the consequences of unexpected connection drops. The polling interval depends on the average job completion duration; I would recommend an average interval of 5 seconds.
This can even be improved by storing job completion progress in the jobs table so that the client can display a progress bar.
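A sketch of that pattern, with an in-memory Map standing in for the jobs table (in practice use a database or Redis as recommended above); the /jobs routes and the scrape() stub are illustrative assumptions:

```js
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> { status, result?, error? }

app.post('/jobs', (req, res) => {
  const id = crypto.randomUUID();
  jobs.set(id, { status: 'running' });

  scrape(req.body.url)
    .then((result) => jobs.set(id, { status: 'completed', result }))
    .catch((err) => jobs.set(id, { status: 'failed', error: err.message }));

  res.status(202).send({ jobId: id }); // accepted; client polls the status route below
});

app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).send({ message: 'Unknown job' });
  res.send(job);
});

// Stand-in for the 3-4 minute scraping work (hypothetical).
async function scrape(url) {
  await new Promise((resolve) => setTimeout(resolve, 200 * 1000));
  return { url, items: [] };
}

app.listen(3000);
```

The client can then poll GET /jobs/:id every ~5 seconds until the status is no longer "running".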
A request timeout occurs when your connection is idle; different servers implement it in different ways, so the timeout duration differs.
The solution to this timeout problem is to keep the connection open (persistent), i.e. the connection between client and server should remain open.
For such scenarios use WebSockets, which ensure that after the initial request/response handshake between client and server, the connection stays open.
There are many libraries for implementing a real-time connection, e.g. PubNub or socket.io. This is the same technology used for live streaming.
Node.js can handle many concurrent connections, it is lightweight, and it won't use many resources.

One API call vs multiple

I have a process in the back-end which takes, on average, 30 to 90 seconds to complete.
Is it better to have a front-end React app make ONE API call and wait for the back-end to complete the process and return the data? Or is it better to have the front-end make multiple calls, let's say every 2 seconds, to check whether the process is complete and get back the result?
Both are valid approaches. You could also report status changes over a websocket so there's no need for polling.
If you do want to go the polling route, the general recommendation is to:
Return 202 accepted from your long-running process endpoint.
Also return a Link header with a url to where the status of the process can be read.
The client can then follow that link and poll it every x seconds.
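For illustration, a minimal sketch of that 202 + Link pattern; the route paths, the in-memory jobs map and the runLongTask() stub are assumptions:

```js
const express = require('express');
const app = express();
app.use(express.json());

const jobs = new Map(); // id -> "running" | "done"

app.post('/process', (req, res) => {
  const id = Date.now().toString(36);
  jobs.set(id, 'running');
  runLongTask().then(() => jobs.set(id, 'done')); // the 30-90s of work (stub below)
  res
    .status(202)
    .set('Link', `</process/status/${id}>; rel="status"`) // tells the client where to poll
    .send({ message: 'Accepted' });
});

app.get('/process/status/:id', (req, res) => {
  res.send({ status: jobs.get(req.params.id) || 'unknown' });
});

// Stand-in for the real back-end process (hypothetical).
function runLongTask() {
  return new Promise((resolve) => setTimeout(resolve, 60 * 1000));
}

app.listen(3000);
```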
I don't think it's good to make a single API call and wait 30-90 seconds for a response. Instead, send a response immediately saying that the request was received successfully and will be processed.
Now you can use WebSockets or a library like socket.io so that the server can communicate directly with the client once the requested processing is complete.
Making multiple API calls to check whether the server is done or has any new message is called polling. It is not very efficient, but it is still required for old browsers which don't support WebSockets. Socket.io supports polling automatically in old browsers.
But yes, if you want, you can make multiple calls to check whether the server is done processing; I would still prefer the server to communicate back to the client, as that is better.

Notifying a browser about events on server

I have a Java-based web application (Struts 1.2). I have a requirement to display a status on the frontend (JSP). Now the status might change, which my server gets notified about by another server. I want this status change to be pushed to the browser as well.
I don't want to refresh at intervals. Rather, I want to implement something like Gmail chat does, i.e. the browser gets notified by changing events on the server.
Any ideas on how to go about this?
I was thinking along the lines of opening a request to the server for the status, and at the server end I would hold the request and not respond until there is a status change. Any pointers or examples on this?
The best possible solution would be to make use of the XMPP protocol. It's standardized, and a lot of open-source solutions will get you started within minutes. You can use a combination of Smack, Strophe.js and Openfire to make your Java-based app work as desired.
There's a technique called long polling (Comet). The client sends a request to the server, and the request thread created on the server simply waits for new data for that user, with a time limit of maybe a minute or more. When new data is available, it is returned.
The main problem is tackling the server-side issue: you don't want to have one thread for every user just waiting for new data. Of course, you could use asynchronous methods depending on your back end.
Ref: http://en.wikipedia.org/wiki/Push_technology
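For the browser side, a rough modern sketch of such a long-polling loop; the /status?wait=60 endpoint and the response shape are hypothetical, and the server holding the request open can be written in Java, Node or anything else:

```js
// Browser-side long-polling sketch: the server holds the request open until the
// status changes or the wait time elapses, then the client immediately re-issues it.
async function longPollStatus(onChange) {
  while (true) {
    try {
      const res = await fetch('/status?wait=60');
      if (res.ok) onChange(await res.json()); // status changed (or timed out server-side)
    } catch (err) {
      await new Promise((resolve) => setTimeout(resolve, 5000)); // back off on network errors
    }
  }
}

longPollStatus((status) => {
  document.getElementById('status').textContent = status.value; // hypothetical element and shape
});
```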
An alternative would be to use WebSockets. The problem is that they are not supported by all browsers today.

Socket connection on iPhone (IOS 4.x)

I am working on a chat application (which needs to connect to a server) on iPhone. Sending packets from the iPhone shouldn't be a problem.
But I would like to know whether it is possible for the iPhone to keep an incoming socket connection to the server open continuously, or forever, in a mobile environment.
Or what do I need to do to keep the connection alive? Do I need to send something over it to keep it alive?
Thanks.
I'm not sure why you want a chat app to have a persistent connection... I'd rather use an SMS-like model. Anyway, Cocoa's NSStream is based on NSSocket and allows a lot of functionality. Take a look at it.
In response to the question, here, in a nutshell, is what I would do:
Get an authentication token from the server.
This will also take care of user presence if necessary, but now we are talking about state; once presence is known, the server may send out notifications to clients that are active and have that user on their contact list.
Get the user's contact list and each contact's presence state.
When a message is sent, handle it according to the addressee's state, i.e. if online, relay it to the other user; if offline, queue it for later delivery or reject it.
Once the token expires, reject communication with an appropriate error and make the client request a new token.
Communication from server to client can be based on a pull or push model. In the first case, the client periodically makes a request and fetches all messages. This may not sound great, but in reality, how often do users compose and send messages? Several times a minute? That's not too much. So fetching may happen every 5-10 seconds.
For the push model, the client must be able to listen for and accept connections.
Finally, check out SIP, the Session Initiation Protocol. There is no need to use the full version of it, though; just the basics.
This is very rough and perhaps simplified. I don't know the target complexity of your chat system. For example, the simplest approach could also be that the server just enables client-to-client communication by distributing the clients' endpoints and letting them take care of everything themselves.
Good luck!
Super out-of-date response, but maybe it will help the next person.
I would use XMPPFramework and a Jabber server.
