How to get the current thread (worker) in Rust Rocket

I am using Rust Rocket (rocket = { version = "0.5.0-rc.1", features = ["json"] }) as a web server, and I am facing a problem: when requests come in quickly, some of them time out. My server-side code looks like this:
#[post("/v1",data = "<record>")]
pub fn word_search(record: Json<WordSearchRequest>, login_user_info: LoginUserInfo) -> content::Json<String> {
// some logic to fetch data from database
}
I was wondering why the requests timed out, so I want to print the server-side thread and the request handling time. Is it possible to get the current thread ID in Rust Rocket? I seriously doubt the server has only one thread.

I finally found from the log output that the server had only one worker. Adding more workers in the Rocket.toml config file fixed the timeout problem:
[release]
workers = 12
log_level = "normal"
keep_alive = 5
port = 8000
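To answer the question about the thread ID itself: the handler can simply ask the standard library. Below is a minimal sketch; log_request_timing is a hypothetical helper, not a Rocket API, and inside a real handler you would wrap the database call with it.
use std::thread;
use std::time::Instant;

// Hypothetical helper: report which worker thread ran the work and how long it took.
fn log_request_timing<T>(work: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    println!("handling request on {:?}", thread::current().id());
    let result = work();
    println!("finished in {:?} on {:?}", start.elapsed(), thread::current().id());
    result
}

fn main() {
    // In a Rocket handler this would wrap the database query instead of a dummy sum.
    let answer = log_request_timing(|| 2 + 2);
    println!("answer = {}", answer);
}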

Related

Tokio thread is not starting / spawning

I'm trying to start a new task to read from a socket client. I'm using the same method below on both the websocket server and the client to receive from the connection.
The problem is, on the server side, the thread is started (2 log lines printed), but on the client side the thread is not starting (only the first line printed).
If I await on the spawn(), I can receive from the client. But then the parent task cannot proceed.
Any pointers for solving this problem?
pub async fn receive_message_from_peer(
    mut receiver: PeerReceiver,
    sender: Sender<IoEvent>,
    peer_index: u64,
) {
    debug!("starting new task for reading from peer : {:?}", peer_index);
    tokio::task::spawn(async move {
        debug!("new thread started for peer receiving");
        // ....
    }); // not awaiting or join!()
}
Did you use tokio::TcpListener::from_std(...) to create the listener object?
I had the same problem as you: my std_listener object was created via net2, so there was a scheduling incompatibility.
From the description in the newer official documentation https://docs.rs/tokio/latest/tokio/net/struct.TcpListener.html#method.from_std, it seems that tokio currently has better support for socket2.
So I think the issue was that I was using std::thread::sleep() in async code in some places. After switching to tokio::time::sleep() I didn't need to yield the thread.
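As a minimal sketch of the difference (assuming the tokio crate with the time, rt-multi-thread and macros features enabled):
use std::time::Duration;

// std::thread::sleep blocks the whole worker thread, so every task scheduled on
// that worker stalls; tokio::time::sleep suspends only the current task and lets
// the worker keep polling other tasks in the meantime.
async fn async_wait() {
    tokio::time::sleep(Duration::from_secs(1)).await;
}

#[tokio::main]
async fn main() {
    let handle = tokio::task::spawn(async {
        println!("spawned task got a chance to run");
    });
    async_wait().await; // yields to the runtime instead of blocking it
    handle.await.unwrap();
}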

How to properly close WebRTC peerConnection in iOS?

I'm using the 'GoogleWebRTC' pod, version '1.1.29400'. I've been facing issues closing peer connections: whichever thread tries to close the connection gets stuck forever on the following line.
self.peerConnection?.close()
So I chose not to close the peer connection; instead, I manually destroyed the capturer, tracks, renderers, and transceivers and set the reference to nil. I thought I had solved the issue, but I hadn't.
Now I have started facing problems with 'RTCPeerConnectionFactory'. After generating a few peer connections from the factory, the thread that requests a new peerConnection from the factory gets stuck forever.
Here's how I initialize the factory,
static let factory: RTCPeerConnectionFactory = {
    RTCInitializeSSL()
    let videoEncoderFactory = RTCDefaultVideoEncoderFactory()
    let videoDecoderFactory = RTCDefaultVideoDecoderFactory()
    return RTCPeerConnectionFactory(encoderFactory: videoEncoderFactory, decoderFactory: videoDecoderFactory)
}()
Here's how I initialize the peer connection,
let config = RTCConfiguration()
config.iceServers = iceServers
config.sdpSemantics = .unifiedPlan
config.continualGatheringPolicy = .gatherOnce
config.iceTransportPolicy = iceTransportPolicy
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])
let factory = WebRtcClient.factory
self.peerConnection = factory.peerConnection(with: config, constraints: constraints, delegate: nil)
What could've gone wrong?
Are there limitations on the number of parallel peerConnections?
Are there any restrictions on the types of threads that create/manipulate/destroy the peerConnection?
Should I set up synchronous access to these objects?
Seems I'm not alone!
I managed to solve it back in 2020 itself. I was away from SO for some time, sorry for the late answer.
Though I'm not currently working on WebRTC, let me recollect and answer my own issue. Hope it helps someone or at least gives a direction.
I found that there was some sort of limit (apparently port-related) on the number of open peerConnections. That's why the thread that requested a new peerConnection from the factory got stuck after a certain point.
We have to close the peerConnections properly by using the peerConnection.close() API.
The root issue that prevented me from closing the peerConnections was that I was calling close() from one of the peerConnection's own callbacks, which run on the WebRTC signaling thread. That resulted in a deadlock.
Switching to a thread other than WebRTC's to close the connection seems to be the fix.
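A rough sketch of that fix (the queue name and the closePeerConnection function are just illustrative, not part of the GoogleWebRTC API):
import Foundation
import WebRTC

// A serial queue used only for tearing down connections; the important part is
// that it is not the WebRTC signaling thread that delivers the delegate callbacks.
private let teardownQueue = DispatchQueue(label: "webrtc.teardown")

func closePeerConnection(_ peerConnection: RTCPeerConnection?) {
    teardownQueue.async {
        peerConnection?.close()
    }
}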

Intermittent network timeouts while trying to fetch data in Node application

We have a NextJS app with an Express server.
The problem we're seeing is lots of network timeouts to the API we are calling (the underlying exception says "socket hangup"). However, that API does not show any errors or a slow response time. It's as if the API calls aren't even making it all the way to the API.
Theories and things we've tried:
Blocked event loop: we tried replacing synchronous logging with the asynchronous "winston" framework, to make sure we're not blocking the event loop. Not sure what else could be blocking it.
High CPU: the CPU can spike up to 60% sometimes. We're trying to minimize that spike by taking out some regexes we were using (since we heard those are expensive, CPU-wise).
Something about how big the JSON response is from the API? We're passing around a lot of data…
Too many complex routes in our Express routing structure: We minimized the number of routes by combining some together (which results in more complicated regexes in the route definitions)…
Any ideas why we would be seeing these fetch timeouts? They only appear during load tests and in production environments, but they can bring down the whole app with heavy load.
The code that emits the error:
function socketCloseListener() {
  const socket = this;
  const req = socket._httpMessage;
  debug('HTTP socket close');

  // Pull through final chunk, if anything is buffered.
  // the ondata function will handle it properly, and this
  // is a no-op if no final chunk remains.
  socket.read();

  // NOTE: It's important to get parser here, because it could be freed by
  // the `socketOnData`.
  const parser = socket.parser;
  const res = req.res;
  if (res) {
    // Socket closed before we emitted 'end' below.
    if (!res.complete) {
      res.aborted = true;
      res.emit('aborted');
    }
    req.emit('close');
    if (res.readable) {
      res.on('end', function() {
        this.emit('close');
      });
      res.push(null);
    } else {
      res.emit('close');
    }
  } else {
    if (!req.socket._hadError) {
      // This socket error fired before we started to
      // receive a response. The error needs to
      // fire on the request.
      req.socket._hadError = true;
      req.emit('error', connResetException('socket hang up'));
    }
    req.emit('close');
  }
  // ...
}
The message is generated when the server does not send a response.
That's the easy bit.
But why would the API server not send a response?
Well, without seeing minimal code that reproduces this, I can only give you some pointers.
This issue discusses at length the changes between version 6 and 8, in particular how a GET with a body can now cause it. This change of behaviour is more aligned with the REST specs.
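To see where the message itself comes from, here is a minimal sketch that reproduces it locally (the port number is arbitrary): a server that destroys the socket before answering makes the client emit exactly this error.
const http = require('http');

// A server that hangs up without ever writing a response.
const server = http.createServer((req, res) => {
  req.socket.destroy();
});

server.listen(3000, () => {
  http.get('http://localhost:3000', () => {})
    .on('error', (err) => {
      console.error(err.message); // prints "socket hang up"
      server.close();
    });
});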

A request method in Java that implements long-polling

I have already written a request method in Java that sends a request to a simple server. I wrote this simple server myself, and the connection is based on sockets. When the server has the answer to the request, it sends it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer within a fixed period of time, then I send a new request to the server using my request method
My problem is implementing this idea. I am thinking of launching a thread whenever the request method is executed. If this thread does not hear anything for a fixed period of time, then the request method should be executed again. But how can I listen on the same socket used between that client and the server?
I am also asking if there is a simpler approach that does not use threads.
Currently I am working on this idea:
1) send a request using my request method
2) launch a thread to listen on the socket
3) if (no answer) { go to (1) } else { exit }
I have some difficulty with step 3: how can I go back to (1)?
You may be able to accomplish this with a single thread using a SocketChannel and a Selector, see also these tutorials on SocketChannel and Selector. The gist of it is that you'll use long-polling on the Selector to let you know when your SocketChannel(s) are ready to read/write/etc using Selector#select(long timeout). (SocketChannel supports non-blocking, but from your problem description it sounds like things would be simpler using blocking)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
// A channel must be in non-blocking mode before it can be registered with a Selector.
socketChannel.configureBlocking(false);

Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

// select(5000) returns the number of channels ready after at most 5000 ms; if you have
// multiple channels attached to the selector then you may prefer
// to iterate through the SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}
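Regarding step 3 ("go to (1)"): with the select-with-timeout approach you do not need a separate listener thread at all; the retry is just a loop around the request. A rough sketch continuing from the snippet above, where sendRequest() and readResponse() are hypothetical stand-ins for your own request method and response handling:
boolean answered = false;
while (!answered) {
    sendRequest();                    // (1) send the request over the channel
    if (selector.select(5000) > 0) {  // (2) wait up to 5 seconds for the socket to become readable
        readResponse();               // (3) an answer arrived
        answered = true;              //     exit
    }
    // otherwise: timed out with no answer, loop back to (1) and resend
}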

What is the best architecture we can use for a Netty Client Application?

I need to develop a Netty-based client that accepts messages from a notification server and forwards these messages as HTTP requests to another server in real time.
I have already coded a working application that does this, but I need to add multi-threading to it.
At this point, I am getting confused about how to handle Netty Channels inside a multi-threaded program, as I am used to the conventional approach of sockets and threads.
When I tried to separate the Netty requesting part into a method, it complained about the Channels not being closed.
Can anyone guide me on how to handle this?
I would like to use ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor, but I am really new into this.
Help with some examples would be a real favour at this time.
Thanks in advance.
Just add an ExecutionHandler to the ChannelPipeline. This will make sure that every ChannelUpstreamHandler added behind the ExecutionHandler is executed in an extra thread and so does not block the worker thread.
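A rough sketch of what that can look like in a Netty 3 pipeline factory (MyBusinessLogicHandler stands in for your own handler, and the pool sizes are illustrative):
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class MyPipelineFactory implements ChannelPipelineFactory {

    // Share one ExecutionHandler (and its thread pool) across all pipelines:
    // 16 threads, 1 MiB per-channel and 1 MiB total memory limits.
    private static final ExecutionHandler executionHandler =
            new ExecutionHandler(new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // codec handlers would be added here, before the execution handler
        pipeline.addLast("executor", executionHandler);
        // handlers added after this point run on the executor's threads, so blocking
        // work (such as the outgoing HTTP request) does not stall the I/O worker
        pipeline.addLast("handler", new MyBusinessLogicHandler());
        return pipeline;
    }
}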
Have you looked at the example code on the Netty site? The TelnetServer looks to do what you are talking about. The factory creates new handlers whenever it gets a connection. Threads from the Executors will be used whenever there is a new connection. You could use any thread pool and executor there I suspect:
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // << change
                Executors.newCachedThreadPool()));  // << change

// Configure the pipeline factory.
bootstrap.setPipelineFactory(new TelnetServerPipelineFactory());

// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
The TelnetServerHandler then handles the individual results.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    // Cast to a String first.
    // We know it is a String because we put some codec in TelnetPipelineFactory.
    String request = (String) e.getMessage();

    // Generate and write a response.
    String response;
    boolean close = false;
    if (request.length() == 0) {
        response = "Please type something.\r\n";
    }
    // ...
When the telnet handler is ready to close the connection it does this:
ChannelFuture future = e.getChannel().write(response);
if (close) {
    future.addListener(ChannelFutureListener.CLOSE);
}
