How to properly close WebRTC peerConnection in iOS?

I'm using the 'GoogleWebRTC' pod, version '1.1.29400'. I've been facing issues closing peer connections: whichever thread tries to close the connection gets stuck forever on the following line.
self.peerConnection?.close()
So I chose not to close the peer connection; instead, I manually destroyed the capturer, tracks, renderers, and transceivers and set the reference to nil. I thought I had solved the issue, but I hadn't.
Now I've started facing problems with 'RTCPeerConnectionFactory'. After generating a few peer connections from the factory, the thread that requests a new peerConnection gets stuck forever.
Here's how I initialize the factory,
static let factory: RTCPeerConnectionFactory = {
    RTCInitializeSSL()
    let videoEncoderFactory = RTCDefaultVideoEncoderFactory()
    let videoDecoderFactory = RTCDefaultVideoDecoderFactory()
    return RTCPeerConnectionFactory(encoderFactory: videoEncoderFactory,
                                    decoderFactory: videoDecoderFactory)
}()
Here's how I initialize the peer connection,
let config = RTCConfiguration()
config.iceServers = iceServers
config.sdpSemantics = .unifiedPlan
config.continualGatheringPolicy = .gatherOnce
config.iceTransportPolicy = iceTransportPolicy
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])
let factory = WebRtcClient.factory
self.peerConnection = factory.peerConnection(with: config, constraints: constraints, delegate: nil)
What could've gone wrong?
Are there limitations on the number of parallel peerConnections?
Are there any restrictions on the types of threads that create/manipulate/destroy the peerConnection?
Should I set up synchronous access to these objects?

Seems I'm not alone!
I managed to solve this back in 2020; I was away from SO for some time, sorry for the late answer.
Though I'm not currently working with WebRTC, let me recollect and answer my own issue. I hope it helps someone, or at least gives a direction.
I found that there is some sort of port-related limit on the number of open peerConnections. That's why the thread that requested a new peerConnection from the factory got stuck after a certain number were open.
We have to close the peerConnections properly by using the peerConnection.close() API.
The root issue that prevented me from closing the peerConnections was that I was calling close() from inside one of the peerConnection's own callbacks, which run on WebRTC's signaling thread. That resulted in a deadlock.
Switching to a thread other than WebRTC's to close the connection seems to be the fix.
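For illustration, a minimal Swift sketch of that fix, assuming a delegate callback such as didChange newState (the queue choice and state check are illustrative, not from the original post):

// Sketch: never call close() on the thread that delivers WebRTC callbacks.
func peerConnection(_ peerConnection: RTCPeerConnection,
                    didChange newState: RTCIceConnectionState) {
    guard newState == .failed else { return }
    // This callback runs on WebRTC's signaling thread; calling close()
    // here deadlocks. Hop to a different queue first.
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        self?.peerConnection?.close()
        self?.peerConnection = nil
    }
}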

Related

Tokio thread is not starting / spawning

I'm trying to start a new task to read from a socket. I'm using the same method below on both the websocket server and the client to receive from the connection.
The problem is that on the server side the spawned task starts (both log lines are printed), but on the client side it does not (only the first line is printed).
If I await on the spawn(), I can receive from the client. But then the parent task cannot proceed.
Any pointers for solving this problem?
pub async fn receive_message_from_peer(
    mut receiver: PeerReceiver,
    sender: Sender<IoEvent>,
    peer_index: u64,
) {
    debug!("starting new task for reading from peer : {:?}", peer_index);
    tokio::task::spawn(async move {
        debug!("new thread started for peer receiving");
        // ....
    }); // not awaiting or join!()
}
Are you using tokio::TcpListener::from_std(...) to create the listener object? I had the same problem as you: my std_listener object was created based on net2, so there was a scheduling incompatibility problem.
From the description in the newer official documentation, https://docs.rs/tokio/latest/tokio/net/struct.TcpListener.html#method.from_std, it seems that tokio currently has better support for socket2.
It turned out the issue was that I was using std::thread::sleep() in async code in some places. std::thread::sleep() blocks the executor's worker thread, so other tasks on it never get polled. After switching to tokio::time::sleep() I didn't need to yield the thread.
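For illustration, a minimal Rust sketch of the difference, assuming the tokio crate with its time feature enabled (the function and loop body are illustrative):

use std::time::Duration;

// An async polling loop: the tokio sleep yields to the runtime, so sibling
// tasks (like the spawned reader above) keep getting polled.
async fn poll_peer() {
    loop {
        // std::thread::sleep(Duration::from_secs(1)) here would block the
        // whole worker thread and could starve other spawned tasks.
        tokio::time::sleep(Duration::from_secs(1)).await;
        // ... read from the socket here ...
    }
}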

a request-method in java that implements long-polling

I have already written a request method in Java that sends a request to a simple server. I wrote this simple server myself, and the connection is based on sockets. When the server has the answer for a request, it sends it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer after a fixed period of time, then I send a new request to the server using my request method
My problem is implementing this idea. I am thinking of launching a thread whenever the request method is executed. If this thread does not hear anything for a fixed period of time, the request method should be executed again. But how can I listen on the same socket used between that client and the server?
I am also asking if there is a simpler approach that does not use threads.
Currently I am working on this idea:
1) send a request using my request method
2) launch a thread listening on the socket
3) if (no answer) { go to (1) } else { exit }
I have some difficulties with step 3: how can I go back to (1)? (One way to structure this is sketched below.)
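For illustration, "go to (1)" can be expressed as a plain loop combined with a read timeout; sendRequest and readAnswer are hypothetical stand-ins for the asker's existing methods:

import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

String requestUntilAnswered(Socket socket, String request) throws IOException {
    socket.setSoTimeout(5000);         // fixed waiting period: 5 seconds
    while (true) {
        sendRequest(socket, request);  // step (1): the existing request method
        try {
            // step (2): reads block until an answer arrives, or throw
            // SocketTimeoutException once the 5 s period elapses
            return readAnswer(socket);
        } catch (SocketTimeoutException e) {
            // step (3): no answer in time - loop back to (1) and resend
        }
    }
}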
You may be able to accomplish this with a single thread using a SocketChannel and a Selector; see also these tutorials on SocketChannel and Selector. The gist of it is that you'll use long-polling on the Selector to learn when your SocketChannel(s) are ready to read/write/etc. using Selector#select(long timeout). (Note that a SocketChannel must be put into non-blocking mode before it can be registered with a Selector.)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80));
// the channel must be non-blocking to register with a Selector
socketChannel.configureBlocking(false);

Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

// select(5000) returns the number of channels ready within 5000 ms; if you
// have multiple channels attached to the selector then you may prefer
// to iterate through the selected SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}

sqlite returns SQLITE_BUSY in WAL mode

I have a web application working with an SQLite database.
My version of SQLite is the latest from the official Windows binary distribution: 3.7.13.
The problem is that under heavy load on the database, SQLite API functions (such as sqlite3_step) return SQLITE_BUSY.
I pass the following pragmas when initializing a connection:
journal_mode = WAL
page_size = 4096
synchronous = FULL
foreign_keys = on
The database is a single-file database. I'm using Mono 2.10.8 and the Mono.Data.Sqlite assembly provided with it to access the database.
I'm testing with 50 parallel threads, each sending 50 consecutive HTTP requests to my application. On every request, some reading and writing is done on the database, and every set of IO operations is executed inside a transaction.
Everything goes well until around the 400th-700th request. At that (random) moment, the API functions start returning SQLITE_BUSY permanently (to be more exact, until the retry limit is reached).
As far as I know, WAL mode transparently supports reads in parallel with writes. I guessed it could be caused by reading the database while a checkpoint operation is running, but even after turning autocheckpointing off the situation remains the same.
What could be wrong in this situation?
How do I serve a large amount of parallel database IO correctly?
P.S.
Only one connection per request is supposed to be used.
I use nhibernate configured with WebSessionContext.
I initialize my NHibernate session like this:
ISession session = null;
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    session = factory.GetCurrentSession();
    if (session == null)
        CurrentSessionContext.Unbind(factory);
}
if (session == null)
{
    session = factory.OpenSession();
    CurrentSessionContext.Bind(session);
}
return session;
And on HttpApplication.EndRequest I release it like this:
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    try
    {
        CurrentSessionContext.Unbind(factory)
            .Dispose();
    }
    catch (Exception ee)
    {
        Logr.Error("Error uninitializing session", ee);
    }
}
So as far as I know, there should be only one connection per request life cycle. While processing a request, code executes sequentially (ASP.NET MVC 3), so it doesn't look like any concurrency is possible here. Can I conclude that no connections are shared in this case?
It's not clear to me whether the request threads share the same connection or not. If they don't, then you should not be having these issues.
Assuming that you are indeed sharing the connection object across multiple threads, you should use some locking mechanism, as SqliteConnection isn't thread-safe (an old post, but the SQLite library maintained as part of Mono evolved from System.Data.SQLite, found at http://sqlite.phxsoftware.com).
So, assuming that you don't currently lock around the SqliteConnection object, can you please try it? A simple way to accomplish this could look like this:
static readonly object _locker = new object();

public void ProcessRequest()
{
    lock (_locker) {
        using (IDbCommand dbcmd = conn.CreateCommand()) {
            string sql = "INSERT INTO foo VALUES ('bar')";
            dbcmd.CommandText = sql;
            dbcmd.ExecuteNonQuery();
        }
    }
}
You may, however, choose to open a distinct connection in each thread to ensure you don't have any more threading issues with the SQLite library.
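For illustration, a minimal sketch of that per-thread alternative with the Mono.Data.Sqlite provider (connection string and table are placeholders):

using Mono.Data.Sqlite;
using System.Data;

public void ProcessRequest()
{
    // each thread opens and disposes its own connection, so no lock is needed
    using (var conn = new SqliteConnection("URI=file:app.db"))
    {
        conn.Open();
        using (IDbCommand dbcmd = conn.CreateCommand())
        {
            dbcmd.CommandText = "INSERT INTO foo VALUES ('bar')";
            dbcmd.ExecuteNonQuery();
        }
    }
}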
EDIT
Following up on the code you posted: do you close the session after committing the transaction? If you don't use an ITransaction, do you flush and close the session? I'm asking since I don't see it in your code, and I see it mentioned in https://stackoverflow.com/a/43567/610650 (a brief sketch of such a unit of work follows the quote below).
I also see it mentioned on http://nhibernate.info/doc/nh/en/index.html#session-configuration:
Also note that you may call NHibernateHelper.GetCurrentSession(); as
many times as you like, you will always get the current ISession of
this HTTP request. You have to make sure the ISession is closed after
your unit-of-work completes, either in Application_EndRequest event
handler in your application class or in a HttpModule before the HTTP
response is sent.
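For illustration, a hedged sketch of such a unit of work, reusing the question's factory variable (the transaction scope is illustrative, not the asker's actual code):

using (ISession session = factory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // ... reads and writes for this request ...
    tx.Commit(); // flushes the session
} // disposing the session releases the underlying SQLite connection promptly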

What is the best architecture we can use for a Netty Client Application?

I need to develop a Netty-based client that accepts messages from a notification server and forwards those messages as HTTP requests to another server in real time.
I have already coded a working application that does this, but I need to add multi-threading to it.
At this point I'm getting confused about how to handle Netty channels inside a multi-threaded program, as I'm steeped in the conventional approach of sockets and threads.
When I tried to separate the Netty requesting part into its own method, it complained about the channels not being closed.
Can anyone guide me on how to handle this?
I would like to use ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor, but I'm really new to this.
Help with some examples would be a real favour at this time.
Thanks in advance.
Just add an ExecutionHandler to the ChannelPipeline. It ensures that every ChannelUpstreamHandler added behind the ExecutionHandler is executed on an extra thread and so does not block the worker thread.
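For illustration, a minimal Netty 3 sketch of that wiring, using the OrderedMemoryAwareThreadPoolExecutor you mentioned (the handler class and pool sizes are illustrative; MyNotificationHandler is hypothetical):

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class NotificationPipelineFactory implements ChannelPipelineFactory {
    // shared across pipelines: 16 threads, 1 MB per-channel / 32 MB total
    // limits on queued messages
    private final ExecutionHandler executionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 33554432));

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // handlers added after this one run on the executor's threads, so
        // they may block (e.g. while issuing the outgoing HTTP request)
        pipeline.addLast("execution", executionHandler);
        pipeline.addLast("handler", new MyNotificationHandler()); // hypothetical
        return pipeline;
    }
}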
Have you looked at the example code on the Netty site? The TelnetServer looks to do what you are talking about. The factory creates new handlers whenever it gets a connection, and threads from the Executors are used whenever there is a new connection. You could use any thread pool and executor there, I suspect:
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // << change
                Executors.newCachedThreadPool())); // << change

// Configure the pipeline factory.
bootstrap.setPipelineFactory(new TelnetServerPipelineFactory());

// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
The TelnetServerHandler then handles the individual results.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    // Cast to a String first.
    // We know it is a String because we put some codec in TelnetPipelineFactory.
    String request = (String) e.getMessage();

    // Generate and write a response.
    String response;
    boolean close = false;
    if (request.length() == 0) {
        response = "Please type something.\r\n";
When the telnet server is ready to close the connection, it does this:
ChannelFuture future = e.getChannel().write(response);
if (close) {
    future.addListener(ChannelFutureListener.CLOSE);
}

How do you use CouchDB Change Notifications Continuous Changes from Java?

I am trying to use the CouchDB (continuous) changes API from Java, and I find that after exhausting the list of current changes, the stream seems to be closed rather than staying open forever as it is supposed to.
The code I am using is below. I would expect never to drop out of the while loop, but I do as soon as the currently existing changes have finished streaming. I am relatively new to both CouchDB and Java, so I may be missing something obvious. Can anyone show me how to write this correctly?
URL url = new URL("[path to database here]/_changes?feed=continuous");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setRequestProperty("Connection", "Keep-Alive");
conn.setRequestMethod("GET");

BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    // do something with the line here
}
// Should never get here under normal circumstances
reader.close();
There's actually a default timeout of 60000 ms (60 seconds) unless a different timeout value or a heartbeat is provided. I updated the _changes wiki page back in October and included any defaults I came across in the code.
Setting the heartbeat basically means that you'll be watching for a timeout in the client, i.e. no newline for the heartbeat period means you've definitely lost your connection. I believe CouchDB disables its own timeout check if there's a heartbeat.
In any case, you should probably expect the connection to be closed at some point and code for that condition.
You can use &heartbeat=1000 to get CouchDB to send a newline over the wire every second. That will keep your connection open until you disconnect and/or CouchDB gets shut down.
But you are right, I would also have expected the connection not to close; it seems that even conn.setReadTimeout(0) doesn't help.
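Putting the two suggestions together, a hedged sketch (reusing the question's placeholder URL) that requests a heartbeat and reconnects whenever the stream still closes:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

void followChanges() throws IOException {
    String feed = "[path to database here]/_changes?feed=continuous&heartbeat=1000";
    while (true) {
        HttpURLConnection conn = (HttpURLConnection) new URL(feed).openConnection();
        conn.setRequestMethod("GET");
        conn.setReadTimeout(5000); // a bit longer than the heartbeat interval
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.isEmpty()) continue; // heartbeat keep-alive newline
                // do something with the change notification here
            }
        } catch (IOException e) {
            // read timed out or connection dropped: fall through and reconnect
        }
        conn.disconnect();
    }
}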
This is just a guess, since I don't know enough about the CouchDB continuous-feed implementation or the HttpURLConnection implementation. But barring any bugs in either, if your Java client has a timeout set lower than the heartbeat/timeout interval for CouchDB continuous changes, the connection could be terminated by the Java client.
Just a thought.
