The API endpoint I need to access only provides a live streaming option, but what I need is a regular, non-streaming API. Can I achieve this using the request Node module?
You can hook up to the stream on your server, store the data that arrives locally in a database, and then, when a REST request comes in for some data, satisfy it from that database (the traditional, non-streaming way).
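For illustration, here's a minimal sketch of that approach, assuming the upstream stream is newline-delimited JSON; streamUrl, db.insert, and db.query are placeholders for your actual stream endpoint and storage layer:

const request = require('request');
const express = require('express');

const app = express();

// consume the stream and buffer complete records into the local store
let buffer = '';
request(streamUrl).on('data', (chunk) => {
  buffer += chunk.toString();
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep any trailing partial record for the next chunk
  for (const line of lines) {
    if (line.trim()) db.insert(JSON.parse(line));
  }
});

// satisfy ordinary REST requests from the local store, not from the stream
app.get('/items', async (req, res) => {
  res.json(await db.query(req.query));
});

app.listen(3000);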
Other than that, I can't figure out what else you might be trying to do. You cannot "turn a streaming API into a non-streaming one"; they just aren't close to the same thing. A streaming API is like subscribing to a feed of information: you don't make a request; new data is just sent to you when it's available. With a typical non-streaming API, a client makes a specific request and the server responds with data for that specific request.
Here's a discussion of the Twitter streaming API that might be helpful: https://dev.twitter.com/streaming/overview
Objective
I need to show a big table of data in my React web app frontend.
My backend is an Express server with a GraphQL layer and a few "normal" endpoints.
My server gets data from various sources, including an external API, which is the data source for my current task.
My server has a database that I can use freely. I cannot directly access the external API from my front end.
The data all comes from the external API I mentioned. In fact, it comes from multiple similar calls to the same endpoint with many different IDs. Each of those individual calls takes a while to return but doesn't risk timing out.
Current Solution
My naive implementation: I do one GraphQL query whose resolver makes all the API calls to the external service in parallel, waits for them all to complete using Promise.all(), and then returns one big array containing all the data I need. My server then returns that array to the frontend.
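Roughly, the resolver looks like this (fetchOne stands in for my real call to the external API):

const resolvers = {
  Query: {
    myBigTableData: async (parent, { ids }) => {
      // one slow external call per ID, all fired in parallel
      const results = await Promise.all(ids.map((id) => fetchOne(id)));
      return results; // one big array, only available once every call is done
    },
  },
};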
Problem With Current Solution
Unfortunately, this sometimes leaves my frontend hanging so long that the request times out (it takes longer than 2 minutes).
Proposed Solution
Is there a better way than manually implementing long polling in GraphQL?
This is my main plan for a solution at the moment (a rough sketch of the server side follows the list):
Frontend sends a request to my server
Server returns a 200 and starts hitting the external API, and sets a flag in the database
Server stores the result of each API call in the database as it completes
Meanwhile, the frontend shows a loading screen and keeps making the same GraphQL query for an entity like MyBigTableData, which tells me how many of the external API calls have returned
When they've all returned, the next time I ask for MyBigTableData, the server will send back all the data.
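Sketched out, I imagine the server side looking something like this (jobs is a placeholder for my database table, and startJob fires the external API calls in the background, storing each result as it completes):

const resolvers = {
  Mutation: {
    startMyBigTableData: async (parent, { ids }) => {
      const jobId = await jobs.create(ids.length); // the flag/progress row
      startJob(jobId, ids); // fire and forget
      return { jobId };
    },
  },
  Query: {
    myBigTableData: async (parent, { jobId }) => {
      const job = await jobs.get(jobId);
      if (job.completed < job.total) {
        // still fetching: report progress so the frontend keeps polling
        return { done: false, completed: job.completed, total: job.total, rows: [] };
      }
      // everything has returned: send back all the data
      return { done: true, completed: job.total, total: job.total, rows: job.rows };
    },
  },
};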
Question
Is there a better alternative to GraphQL long polling on an Express server for this large request that I have to do?
An alternative that comes to mind is to not use GraphQL and instead use a standard HTTP endpoint, but I'm not sure that really makes much difference.
I also see that HTTP/2 has multiplexing which could be relevant. My server currently runs HTTP/1.1 and upgrading is something of an unknown to me.
I see here that Keep-Alive, which sounds like it could be relevant, is unusable in Safari which is bad as many of my users use Safari to access the frontend.
I can't use WebSockets because of technical constraints. I don't want to set a ridiculously long timeout on my client either (and I'm not sure it's even possible).
I discovered that Apollo Client has polling built in: https://www.apollographql.com/docs/react/data/queries/#polling
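For reference, that polling is just an option on the query hook; roughly like this, where MY_BIG_TABLE_DATA is my query document and the render helpers are placeholders:

import { useQuery } from '@apollo/client';

function BigTable({ jobId }) {
  // re-run the query every 2 seconds until the data is ready
  const { data, stopPolling } = useQuery(MY_BIG_TABLE_DATA, {
    variables: { jobId },
    pollInterval: 2000,
  });
  // stop polling once the server reports the job is finished
  if (data && data.myBigTableData.done) stopPolling();
  return data && data.myBigTableData.done
    ? renderTable(data.myBigTableData.rows)
    : renderSpinner();
}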
In the end, I made a REST polling system.
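That boils down to a client-side loop along these lines (the endpoint paths are illustrative):

// kick off the job, then poll its status until it's done
async function loadBigTable() {
  const start = await fetch('/api/big-table/start', { method: 'POST' });
  const { jobId } = await start.json();
  for (;;) {
    const status = await (await fetch('/api/big-table/' + jobId)).json();
    if (status.done) return status.rows;
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait before the next poll
  }
}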
I am new to socket.io. I have a backend server implemented in Node.js which makes multiple async calls to a REST API and sends the response back to a client built on Angular 4.0. I follow the simple HTTP request methodology. However, the issue is that it takes the server a while to collect and parse the data from each of the multiple APIs before it can send anything back to the client. I therefore want to implement streaming, so that data is returned to the client as soon as it arrives from any of the sources. I would like to know whether this can be done efficiently using WebSockets and socket.io, or whether there is another way to do it.
I know we can make the async calls from the client, but I want to implement the logic on the server rather than the client.
Note: I do not have real-time data.
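Something like the following is what I have in mind, sketched with socket.io (apiCalls stands in for the set of REST calls my server makes):

const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // fire all the upstream calls; emit each result the moment it arrives
  const pending = apiCalls.map((call) =>
    call().then((data) => socket.emit('partial-result', data))
  );
  // tell the client when everything has been sent
  Promise.all(pending).then(() => socket.emit('all-done'));
});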
I am using Twitter's Streaming API in my Node.js application, and I start and stop the server on my development machine. My question is: does the API fetch new data every time? Does it return duplicate data when the Node.js server is stopped and restarted?
Every Twitter request fetches new data; however, depending on the specific request, the new data may be the same as or different from the old data.
There is nothing called "live search API." You may be referring to Twitter's Streaming API. If that's the case, then you would not receive duplicate data.
After an all-day research session on Node.js real-time frameworks/wrappers (derby.js, Meteor, socket.io...), I realised that the more old-fashioned (sorry) way of a RESTful API fits all my needs.
One of the reasons I thought I had to use an ongoing socket connection was that I want to stream my MongoDB documents from the database instead of loading them all into memory on the server. I think this is the recommended way because it minimizes the use of server resources.
But here is the problem:
Does simple document-query streaming work with the ordinary HTTP request/response model, or do we have to establish an ongoing socket connection to stream all the documents to the client?
Note: I only have to load the documents on an Ajax call, without any need for new documents to be pushed to the client (so there is really no need to be realtime).
Is there anything special to consider?
You can stream the results of the query using the standard HTTP request/response APIs.
The general sequence of calls is:
res.writeHead(<header content>)
res.write(<data>)
...
res.write(<data>)
res.end();
But you make those calls asynchronously, driven by the streaming events from your query.
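For example, with Express and a recent MongoDB driver (where cursors are async iterable), streaming a query result into the response could look roughly like this; app, db, and the collection name are placeholders for your setup:

app.get('/documents', async (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.write('[');
  let first = true;
  // each document is written as it streams from MongoDB,
  // so the full result set is never held in memory on the server
  for await (const doc of db.collection('documents').find()) {
    res.write((first ? '' : ',') + JSON.stringify(doc));
    first = false;
  }
  res.write(']');
  res.end();
});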
Today I had the idea for the following setup: create a Node.js server with Express and socket.io. With Express, I would create a RESTful API, which is connected to a MongoDB. Backbone.js or similar would connect the client to that REST API.
Now, every time the MongoDB (i.e. the data in it I am interested in) changes, socket.io would fire an event to the client carrying a cursor to the data that has changed. The client would then trigger the appropriate AJAX requests to the REST API to get the new data where it needs it.
So the socket.io connection would behave like a synchronization trigger. It would be there for the entire visit and could also manage sessions that way. All the payload would be sent over HTTP.
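In code, I picture the trigger roughly like this (the event and route names are made up):

// server: after a write, tell clients what changed (a cursor to the data)
collection.insert(doc, () => {
  io.emit('changed', { collection: 'posts', id: doc._id });
});

// client: re-fetch just that resource over the REST API
socket.on('changed', (info) => {
  $.getJSON('/' + info.collection + '/' + info.id, render);
});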
Pros:
REST API for use with other clients than web
Auth could be done entirely over socket.io, only sending a token along with REST requests.
Use the benefits of REST.
Would also play nicely with a pub/sub service like Redis.
Cons:
Greater overhead than using pure socket.io.
What do you think? Are there any big disadvantages I did not think of?
I agree with @CharlieKey: you should send the updated data rather than re-requesting it.
This is exactly what Tower does:
save some data: https://github.com/viatropos/tower/blob/development/src/tower/model/persistence.coffee#L77
insert into mongodb (cursor is a query/persistence abstraction): https://github.com/viatropos/tower/blob/development/src/tower/model/cursor/persistence.coffee#L29
notify sockets: https://github.com/viatropos/tower/blob/development/src/tower/model/cursor/persistence.coffee#L68
emit updated records to client: https://github.com/viatropos/tower/blob/development/src/tower/server/net/connection.coffee#L62
The disadvantage of using sockets as a trigger to re-request with AJAX is that every connected client will have to fetch the data, so if 100 people are on your site there will be 100 HTTP requests every time the data changes, whereas you could just reuse the socket connections.
I think that pushing the updated data with the socket.io event would be better than re-requesting the latest data. Even better, you could push only the modified pieces of data, decreasing the amount sent over the line. Overall, though, an interesting idea.
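In other words, something like this instead of the trigger-plus-AJAX round trip (the event name and the Backbone-style client are illustrative):

// server: after saving, push the changed record itself over the socket
function saveAndBroadcast(record) {
  collection.update({ _id: record._id }, { $set: record }, () => {
    io.emit('record-updated', record);
  });
}

// client: merge the changed record locally, no extra HTTP request
socket.on('record-updated', (record) => {
  myCollection.add(record, { merge: true });
});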
I'd look into Now.js, since it does pretty much exactly what you need.
It creates a namespace which is shared between the client and the server. The server can call functions on the client directly, and vice versa.
That is, if you insist on your current infrastructure decision to use MongoDB and Node.js; otherwise there would be CouchDB, which is a full web server and document database with sophisticated replication mechanisms built in.