I have a legacy app (HTTP and raw TCP) that uses traditional BIO (blocking IO), and I'd like to start replacing it with Netty.
How does Netty work with traditional BIO clients? Are there any issues if I first replace the server component with Netty and leave the BIO clients in place?
Additionally, can a Netty-built server replace a typical HTTP web server intended to serve browser clients? Any issues there?
Thanks
My understanding is that Netty supports blocking (org.jboss.netty.channel.socket.oio) and non-blocking (org.jboss.netty.channel.socket.nio) operation. See http://docs.jboss.org/netty/3.2/guide/html/architecture.html, section 2.2.
It is easy to switch between blocking and non-blocking, so you can try NIO first, and if that does not work with your clients, you can switch to OIO. You choose the type of IO you wish to support through the ChannelFactory you set up:
// NIO - non-blocking
ChannelFactory factory =
    new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),   // boss threads (accept connections)
        Executors.newCachedThreadPool());  // worker threads (handle I/O)

// OIO - blocking (one thread per connection)
ChannelFactory factory =
    new OioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
There are a number of existing Netty-based HTTP web servers/frameworks already implemented, for example Webbit, Xitrum, and the Play Framework. I'm sure there are more; these are just the ones I can think of off the top of my head.
If you wish to implement your own, a good starting point is the examples in the org.jboss.netty.example.http package.
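To give a feel for the shape those examples take, here is a minimal, untested sketch of a Netty 3.x HTTP server bootstrap; the MyHttpHandler class and port 8080 are placeholders of mine, not something from the example package.

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.codec.http.HttpRequestDecoder;
import org.jboss.netty.handler.codec.http.HttpResponseEncoder;

public class HttpServerSketch {
    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(
                    new HttpRequestDecoder(),   // bytes -> HttpRequest
                    new HttpResponseEncoder(),  // HttpResponse -> bytes
                    new MyHttpHandler());       // placeholder: your own SimpleChannelUpstreamHandler
            }
        });

        bootstrap.bind(new InetSocketAddress(8080));
    }
}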
I have a C++ client binary that uses epoll() to block on various OS descriptors - for file I/O, socket I/O, OS timers; and now it also needs to be a gRPC client (including streaming replies).
Reading answers to related questions across the web (e.g. about servers), it appears that there is no easy way from C/C++ to ask gRPC for an fd that can be incorporated into the existing epoll set, and then trigger gRPC to do the read and processing when a response arrives. Is that correct?
Alternatively, is the reverse possible: to use files, sockets and timers via the gRPC core iomgr framework even though they are otherwise unrelated to the gRPC service? (For reading local files, communicating with external network equipment, and managing the client's internal high-frequency timer needs.)
The client in question is a single thread with RT priority (on an embedded (soft) real-time system using PREEMPT_RT). Given that gRPC creates other threads, could that be a problem?
Unfortunately, this isn't possible today, but it will be in the future once we finish the EventEngine effort. Once ready, that will allow you to inject your own event loop into gRPC. However, that probably won't be ready for public use until sometime next year.
For now, the only suggestion I can offer is that if you're using an insecure channel and don't need any name resolution or load balancing functionality, you may be able to use CreateInsecureChannelFromFd() to do the reverse (provide your own fd to use as a gRPC connection).
Assumptions:
There are microservices behind an API gateway, and they communicate synchronously over HTTP. Obviously, each of those microservices is a web server. Now I want my microservice to act as a Kafka producer and "consumer" too. More precisely, my microservice produces events and listens to some topics for other events.
Problem:
It seems that traditionally a Kafka consumer uses an infinite loop to poll messages and stay alive (send heartbeats). My process thread is busy serving requests and can't be locked in an infinite loop. Is there any way to "listen" to a topic without blocking the thread, just like a Pulsar listener or RabbitMQ? I prefer not to separate my web server and consumer-processor because of the technical complexity I'd have to handle in development. Could I use Kafka Streams in this case? Is there something wrong with my assumptions?
You could just put your Consumer in its own Thread.
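A minimal sketch of that pattern with the Apache Kafka Java client (the broker address, group id, topic name "events", and the handleEvent method are placeholders; error handling and shutdown are omitted):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BackgroundConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Thread consumerThread = new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    // The poll loop lives on this dedicated thread,
                    // so the web server's request threads are never blocked.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        handleEvent(record.value());   // placeholder for your processing
                    }
                }
            }
        });
        consumerThread.start();
        // ... meanwhile the web server keeps serving requests on its own threads ...
    }

    static void handleEvent(String value) {
        System.out.println("received: " + value);
    }
}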
Assuming Java since you mentioned Kafka Streams: if you're using Quarkus, Micronaut, Spring-Kafka, Reactor, Vert.x, or a similar Java web framework that also has Kafka connectors, that's exactly what they do. If you're not using a web framework beyond basic Jersey, you probably should be.
Even with Clojure, just abstract the Thread away, as in https://github.com/gklijs/bkes-demo/blob/main/topology/src/nl/openweb/topology/client.clj or any of the several Clojure clients around now. If you go wild, you can use core.async.
I am building a web app which has two parts. One part uses a real-time connection between the server and the client, and the other part does some CPU-intensive work to provide relevant data.
I am implementing the real-time communication in Node.js and the CPU-intensive part in Python/Java. What is the best way for the Node.js server to participate in duplex communication with the other server?
For a basic solution you can use Socket.IO if you are already using it and know how it works; it will get the job done, since it allows communication between a client and a server where the "client" can be another server written in a different language.
If you want a more robust solution with additional options and controls, or one that can handle higher traffic throughput (though this shouldn't be an issue if you are ultimately just sending it over the relatively slow internet), you can look at something like ØMQ (ZeroMQ). It is a messaging library that gives you more control and many different communication patterns beyond just request-response.
When you set either up, I would recommend using your CPU-intensive server as the stable end (the server) and your web server(s) as the client(s), assuming you are using a single machine for your CPU-intensive tasks and running several Node.js instances to take advantage of multiple cores on the web side. This simplifies your communication, since you want a single point to connect to.
If you foresee needing multiple CPU servers, you will want to set up a routing server that can route between multiple web servers and multiple CPU servers, and in that case I would recommend the extra work of learning ØMQ.
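To make the "stable end" idea concrete on the Java side, here is a minimal sketch of a ØMQ reply socket using the JeroMQ binding; the port 5555 and the doHeavyWork method are placeholders of mine. A Node.js REQ socket would connect to the same address and exchange request/reply pairs.

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class CpuWorker {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // The CPU-intensive server is the stable end: it binds, the web servers connect.
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");
            while (!Thread.currentThread().isInterrupted()) {
                String request = socket.recvStr();     // blocks until a request arrives
                String result = doHeavyWork(request);  // placeholder for the heavy computation
                socket.send(result);
            }
        }
    }

    static String doHeavyWork(String request) {
        return "result-for:" + request;
    }
}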
You can use the http.request method Node provides to make an HTTP request (curl-style) from within your Node code.
The same http.request method is also commonly used for implementing authentication API calls.
You can put your callback in the request's response handler, and when you get the response data in Node you can send it back to the user.
Meanwhile, in the background, the Java/Python server can handle the CPU-intensive task that Node's request triggers.
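For what it's worth, a rough sketch of what the Java side of such a request could look like, using the JDK's built-in com.sun.net.httpserver; the /compute path, port 8081, and the hard-coded result are placeholders of mine, not a prescribed setup.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ComputeServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/compute", exchange -> {
            // Placeholder for the CPU-intensive work; Node calls this endpoint via http.request.
            byte[] body = "{\"result\": 42}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}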
I maintain a Node.js application that intercommunicates among 34 tasks spread across 2 servers.
In your case, for communication between the web server and the app server you might consider MQTT.
I use MQTT for this kind of communication. There are MQTT clients for most languages, including Node/JavaScript, Python, and Java. In my case I publish JSON messages using MQTT 'topics', and any task that has registered to subscribe to a 'topic' receives its data when it is published. If you google "pub sub", "mqtt" and "mosquitto" you'll find lots of references and examples. Mosquitto (now an Eclipse project) is only one of a number of MQTT brokers that are available. Another very good broker, written in Java, is called HiveMQ.
This is a very simple, reliable solution that scales well. In my case literally millions of messages reliably pass through MQTT every day.
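To give a flavour of what that looks like from Java, here is a minimal sketch using the Eclipse Paho MQTT client; the broker URL, client id, topic names, and payloads are placeholders of mine.

import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttTask {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "web-server-1");
        client.connect();

        // Any task that subscribed to this topic receives the JSON payload when it is published.
        client.subscribe("tasks/results", (topic, message) ->
            System.out.println("received: " + new String(message.getPayload(), StandardCharsets.UTF_8)));

        // Publish a JSON message to a topic for other tasks to consume.
        byte[] payload = "{\"job\":\"resize\",\"id\":123}".getBytes(StandardCharsets.UTF_8);
        client.publish("tasks/requests", new MqttMessage(payload));
    }
}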
You must be looking for Socket.IO.
Socket.IO enables real-time bidirectional event-based communication.
It works on every platform, browser or device, focusing equally on reliability and speed.
Sockets have traditionally been the solution around which most realtime systems are architected, providing a bi-directional communication channel between a client and a server.
Most examples I have seen are just small demos, not full-stack applications, and they use WebSockets for messaging. But for any chat application there is more data than just messages... things like the user profile, their contacts, etc.
So should I use WebSockets for all communication between server and client, or just use them for sending messages and do everything else over HTTP? And if I do use WebSockets for all communication, how do I design the URLs of the app... since WebSockets don't have distinct URLs the way HTTP does.
You might be interested in WAMP, an officially registered WebSocket subprotocol that provides applications with WebSocket-based
asynchronous, bidirectional remote procedure calls
real-time publish & subscribe notifications
Disclaimer: I am the original author of WAMP and work for Tavendo.
Pretty sure you'll get the usual "it depends" answer, because, well, it depends!
If you are going to build a large application, to be used by a number of different clients in different network arrangements etc., then I personally wouldn't recommend using WebSockets for everything. Why?
It's a new standard, so not all clients support it
In some network configurations WebSocket traffic may be filtered out, meaning you end up with no communications - not great
If you end up exposing an external API, then HTTP is much better suited for the job and will likely be easier to code against. There are a lot more tools out there to help you with it, and styles everyone is familiar with, like REST, to follow.
Use WebSockets when you need data pushed from the server without the client having to poll for it, or when HTTP header overhead becomes a problem. And if you still decide to use them, make sure you have a fallback mechanism (e.g. long polling) so you don't end up with no comms.
I'm afraid I can't help you regarding WebSocket API design... given it's a new standard, I don't believe the community has settled on anything just yet, so you'll have to come up with your own message-based scheme.
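One such message-based scheme (an assumption on my part, not a settled standard) is to wrap every WebSocket message in a small JSON envelope where a "type" field plays the role the URL plays in HTTP, for example:

{ "type": "chat.message.send", "payload": { "roomId": 7, "text": "hello" } }
{ "type": "user.profile.get",  "payload": { "userId": 42 } }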
Background: I am building a web app using NodeJS + Express. Most of the communication between client and server is REST (GET and POST) calls. I would typically use AJAX XMLHttpRequest as mentioned in https://developers.google.com/appengine/articles/rpc. And I don't quite understand how to make my RESTful service work with Socket.IO as well.
My questions are:
In what scenarios should I use Socket.IO over AJAX RPC?
Is there a straightforward way to make them work together, at least for Express.js-style REST?
Are there real benefits to using Socket.IO (assuming WebSockets, i.e. the TCP transport, are used) for non-real-time web applications, like a TinyURL-style site (where users post queries and the server responds and forgets)?
Also, I was thinking of a tricky but probably nonsensical idea: what if I use REST for requests from clients, close the connection from the server side, and then do socket.emit()?
Thanks in advance.
Your primary problem is that WebSockets are not request/response oriented the way HTTP is. You mention REST and HTTP interchangeably; keep in mind that REST is a methodology for designing and modeling your HTTP routes.
Your questions:
1. Socket.IO is a good fit when you don't require a request/response format. For instance, if you were building a multiplayer game in which whoever could click more buttons won, you would send the server each click from each user without needing a response back from the server confirming that it registered each click. As long as the WebSocket connection is open, you can assume the message is making it to the server. Another use case is when you need a server to contact a client sporadically. An analytics page would be a good use case for WebSockets, as there is no uniform pattern as to when data needs to be at the client; it could happen at any time.
The WebSocket connection is an HTTP GET request with a special header asking the server to upgrade it to a WebSocket connection (the raw upgrade handshake is shown after these answers). Distinguishing different events and messages on the WebSocket connection is up to your application logic, and it likely won't match REST-style URIs and methods (otherwise you are replicating HTTP request/reply in a sense).
No.
Not sure what you mean on the last bit.
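For reference, that upgrade handshake looks roughly like the example in RFC 6455; the path and key values below are the RFC's illustrative ones, not from any particular application.

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=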
I'll just explain more about when you want to use Socket.IO and leave the in-depth explanation to Tj there.
Generally you will choose Socket.IO when performance and/or latency is a major concern and you have a site that involves users polling for data often. AJAX or long polling is by far easier to implement; however, it can have serious performance problems in high-load situations. By high load, I mean something like Facebook. Imagine millions of people loading their feed, and every minute each user is asking the server for new data. That could require some serious hardware and software to make it work well. With Socket.IO, each user could instead connect and just wait indefinitely for new data from the server as it arrives, resulting in far less overall server traffic.
Also, if you have a real-time application, Socket.IO would allow for a much better user experience while maintaining a reasonable server load. A common example is a chat room. You really don't want to have to constantly poll the server for new messages. It would be much better for the server to broadcast new messages as they are received. Although you can do it with long-polling, it can be pretty expensive in terms of server resources.