How do you use CouchDB Change Notifications (Continuous Changes) from Java? - couchdb

I am trying to use the CouchDB (continuous) changes API from Java, and I find that after the list of current changes is exhausted the stream seems to be closed, rather than staying open forever as it is supposed to.
The code I am using is below. I would expect never to drop out of the while loop, but I do as soon as the currently existing changes have finished streaming. I am relatively new to both CouchDB and Java, so I may be missing something obvious. Can anyone show me how to write this correctly?
URL url = new URL("[path to database here]/_changes?feed=continuous");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setRequestProperty("Connection", "Keep-Alive");
conn.setRequestMethod("GET");
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    // do something with the line here
}
// Should never get here under normal circumstances
reader.close();

There's actually a default timeout of 60000ms (60 seconds) unless a different timeout value or a heartbeat is provided. I updated the _changes wiki page back in October and included any defaults I came across in the code.
Setting the heartbeat basically means that you'll be watching for a timeout in the client, i.e. no newline for the heartbeat period means you've definitely lost your connection. I believe CouchDB disables its timeout check if there's a heartbeat.
In any case, you should probably expect the connection to be closed at some point and code for that condition.
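For example, a rough sketch of that approach (reusing the snippet above; the 5-second back-off is an arbitrary choice) is to wrap the read loop in a reconnect loop and re-request the feed whenever the stream ends:
while (true) {
    try {
        URL url = new URL("[path to database here]/_changes?feed=continuous");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            // handle the change line here; in a real client you would also remember
            // the last seq so you can reconnect with &since=<last_seq>
        }
        reader.close(); // stream ended (e.g. the 60s timeout hit): fall through and reconnect
    } catch (IOException e) {
        // log the error, then fall through and reconnect
    }
    try {
        Thread.sleep(5000); // small back-off before reconnecting
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        break;
    }
}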

You can use &heartbeat=1000 to have CouchDB send a newline over the wire every second. That will keep your connection open until you disconnect and/or CouchDB gets shut down.
But you are right, I would also have expected the connection not to close; it seems that conn.setReadTimeout(0); doesn't help either.
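To make the heartbeat useful on the Java side, one possible sketch (the 1000ms heartbeat and 10s read timeout are just example values) is to set a read timeout comfortably larger than the heartbeat interval, so a SocketTimeoutException tells you the connection really is dead:
URL url = new URL("[path to database here]/_changes?feed=continuous&heartbeat=1000");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.setReadTimeout(10000); // well above the heartbeat interval
try {
    BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = reader.readLine()) != null) {
        // an empty line is a heartbeat, anything else is a change
    }
} catch (SocketTimeoutException e) {
    // no heartbeat for 10 seconds: treat the connection as lost and reconnect
}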

This is just a guess, since I don't know enough about either the CouchDB continuous feed implementation or the HttpURLConnection implementation. But barring any bugs in the two, it seems that if your Java client has a timeout set lower than the heartbeat period for CouchDB continuous changes, then the connection could get terminated by the Java client.
Just a thought.

Related

How to properly close WebRTC peerConnection in iOS?

I'm using the 'GoogleWebRTC' pod, version '1.1.29400'. I've been facing issues closing peer connections. Whichever thread tries to close the connection gets stuck forever on the following line.
self.peerConnection?.close()
So I chose not to close the peer connection; instead, I manually destroyed the capturer, tracks, renderers, and transceivers and set the reference to nil. I thought I had solved the issue, but I hadn't.
Now I've started facing problems with 'RTCPeerConnectionFactory'. After generating a few peer connections from the factory, the thread which requests a new peerConnection from the factory gets stuck forever.
Here's how I initialize the factory,
static let factory: RTCPeerConnectionFactory = {
    RTCInitializeSSL()
    let videoEncoderFactory = RTCDefaultVideoEncoderFactory()
    let videoDecoderFactory = RTCDefaultVideoDecoderFactory()
    return RTCPeerConnectionFactory(encoderFactory: videoEncoderFactory, decoderFactory: videoDecoderFactory)
}()
Here's how I initialize the peer connection,
let config = RTCConfiguration()
config.iceServers = iceServers
config.sdpSemantics = .unifiedPlan
config.continualGatheringPolicy = .gatherOnce
config.iceTransportPolicy = iceTransportPolicy
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])
let factory = WebRtcClient.factory
self.peerConnection = factory.peerConnection(with: config, constraints: constraints, delegate: nil)
What could've gone wrong?
Are there limitations on the number of parallel peerConnections?
Are there any restrictions on the types of threads that create/manipulate/destroy the peerConnection?
Should I set up synchronous access to these objects?
Seems I'm not alone!
I managed to solve it back in 2020. I was away from SO for some time, sorry for the late answer.
Though I'm not currently working on WebRTC, let me recollect and answer my own issue. I hope it helps someone, or at least gives a direction.
I found that there was some sort of limit on the number of open peerConnections (related to ports). That's why the thread which requested a new peerConnection from the factory got stuck after a certain limit.
We have to close the peerConnections properly by using the peerConnection.close() API.
The root issue which prevented me from closing the peerConnections was that I was closing the peerConnection from one of the callbacks of peerConnection which ran on the WebRTC signalling thread. It resulted in a deadlock.
Switching to a thread other than WebRTC's to close the connection seems to be the fix.

Firebase onDisconnect() firing multiple times

Building an app with presence following the Firebase docs, is there a scenario where the onDisconnect() fires when the app is still connected? We see instances where the presence node shows the app as going offline and then back online within a few seconds, when we aren't losing a network connection.
We are seeing, on multiple embedded devices installed in the field, presence being set to false and then almost immediately right back to true, and it occurs on all the devices within a few seconds of each other. From the testing we have done and the docs online, we know that if we lose the internet connection on the device it takes roughly 60 seconds before the timeout on the server fires the onDisconnect() method.
We have since added code to the presence method so that if the device sees the presence node set to false while the app is actually running, it resets presence back to true. At times when this happens we get a single write back to true and that is the end of it; other times it is as if the server and client are fighting each other, and the node is reset to true numerous times over the course of 50-200 milliseconds. We monitor this by pushing to another node within the device GUID each time we force presence back to true. This only occurs while the module is running and after it initially establishes presence.
Here is the method that we call from our various modules that are running on the device so that we can monitor the status of each of the modules at any given time.
exports.online = function (program, currentProgram) {
    var programPath = process.env.FIREBASE_DEVICES + process.env.GUID + '/status/' + program;
    var onlinePath = process.env.FIREBASE_DEVICES + process.env.GUID + '/statusOnlineTimes/' + program;
    var programRef = new firebase(programPath);
    var statusRef = new firebase(process.env.FIREBASE_DEVICES + process.env.GUID + '/status/bootup');
    var onlineRef = new firebase(onlinePath);
    // amOnline presumably references the ./info/connected node (defined elsewhere)
    amOnline.on('value', function (snapshot) {
        if (snapshot.val()) {
            programRef.onDisconnect().set(false);
            programRef.set(true);
            programRef.on('value', function (snapshot) {
                if (snapshot.val() == false) {
                    programRef.set(true);
                    console.log('[NOTICE] Resetting', program, 'module status back to True after Firebase set it to False');
                    var objectToPush = {
                        program: program,
                        time: new Date().toJSON()
                    };
                    onlineRef.push(objectToPush);
                }
            });
            if (currentProgram != undefined) {
                statusRef.onDisconnect().set('Offline');
                statusRef.set(currentProgram);
            }
        }
    });
};
The question we have is: is there ever an instance where Firebase calls the onDisconnect() method even though the device really isn't losing its connection? We had instances where we would see the device go offline and then back online within 60 seconds before we added the reset code. The reset code was to combat another issue we had in the field where, if the power were interrupted to the device and it did not make a clean exit, the device could reboot and reset the presence with a new UID before the timeout for the prior instance had fired. Then once the timeout fired the device would show as offline even though it was actually online.
So we were able to stop the multiple pushes that were happening when the device reconnected by adding a programRef.off() call directly before the programRef.on(...) call. What we determined to be happening is that any time the device went online from an offline state and the amOnline.on(...) callback fired, it created a new listener.
Now we are able to handle the case where an onDisconnect() fires from an earlier program PID and overwrites the currently active program with a status of offline. This seems to solve the issue we were having with the race condition of devices in the field being able to reboot and regain connection prior to the onDisconnect() firing for the instance that was not cleanly exited.
We are still having an issue where all of the devices go offline and then back online at approximately the same time (within 1-3 seconds of each other). Are there any times where Firebase resets the ./info/connected node? Because we are monitoring presence and actually logging on and off events, maybe we are just catching an event that most people don't see? Or is there something that we are doing wrong?

a request-method in java that implements long-polling

I have already written a request-method in Java that sends a request to a simple server. I wrote this simple server myself, and the connection is based on sockets. When the server has the answer for the request, it sends it automatically to the client. Now I want to write a new method that behaves as follows:
if the server does not answer after a fixed period of time, then I send a new request to the server using my request-method
My problem is implementing this idea. I am thinking of launching a thread whenever the request-method is executed. If this thread does not hear anything for a fixed period of time, then the request-method should be executed again. But how can I listen on the same socket used between that client and the server?
I am also asking if there is a simpler approach that does not use threads.
Currently I am working on this idea:
1) send a request using my request-method
2) launch a thread listening on the socket
3) if (no answer) { go to (1) } else { exit }
I have some difficulties with step (3): how can I go back to (1)?
You may be able to accomplish this with a single thread using a SocketChannel and a Selector; see also these tutorials on SocketChannel and Selector. The gist of it is that you'll use long-polling on the Selector to let you know when your SocketChannel(s) are ready to read/write/etc using Selector#select(long timeout). (Note that a channel has to be put into non-blocking mode before it can be registered with a Selector, as in the snippet below.)
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80)); // host name only, no "http://"
socketChannel.configureBlocking(false); // must be non-blocking to register with a Selector
Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);
// returns the number of channels ready after 5000ms; if you have
// multiple channels attached to the selector then you may prefer
// to iterate through the SelectionKeys
if (selector.select(5000) > 0) {
    SocketChannel keyedChannel = (SocketChannel) key.channel();
    // read/write the SocketChannel
} else {
    // I think your best bet here is to close and reopen the Socket
    // or to reinstantiate a new socket - depends on your Request method
}
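To connect this with your (1)-(3) plan, a rough sketch of the retry loop (sendRequest is a placeholder for your own request-method, and the 5-second wait is arbitrary) could look like this:
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress("jenkov.com", 80)); // blocking connect
socketChannel.configureBlocking(false);                         // required before registering
Selector selector = Selector.open();
SelectionKey key = socketChannel.register(selector, SelectionKey.OP_READ);

boolean answered = false;
while (!answered) {
    sendRequest(socketChannel);          // step (1): send the request with your request-method
    if (selector.select(5000) > 0) {     // step (2): wait up to 5s for the socket to become readable
        SocketChannel keyedChannel = (SocketChannel) key.channel();
        // step (3): read the answer from keyedChannel, then leave the loop
        selector.selectedKeys().clear();
        answered = true;
    } else {
        // select() timed out with no answer: loop around and resend, i.e. "go to (1)"
    }
}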

sqlite returns SQLITE_BUSY in WAL mode

I have a web application working with an SQLite database.
My version of SQLite is the latest from the official Windows binary distribution, 3.7.13.
The problem is that under heavy load on the database, SQLite API functions (such as sqlite3_step) return SQLITE_BUSY.
I pass the following pragmas when initializing a connection:
journal_mode = WAL
page_size = 4096
synchronous = FULL
foreign_keys = on
The database is a single-file database. I'm using Mono 2.10.8 and the Mono.Data.Sqlite assembly provided with it to access the database.
I'm testing it with 50 parallel threads, each sending 50 consecutive HTTP requests to my application. On every request some reading and writing is done to the database. Every set of IO operations is executed inside a transaction.
Everything goes well until around the 400th-700th request. At that (random) moment the API functions start returning SQLITE_BUSY permanently (to be more exact, until the retry limit is reached).
As far as I know, WAL mode transparently supports parallel reads and writes. I've guessed that it could be because of an attempt to read the database while a checkpoint operation is executing. But even after turning autocheckpoint off the situation remains the same.
What could be wrong in this situation?
How do I serve a large amount of parallel database IO correctly?
P.S.
Only one connection per request is intended.
I use NHibernate configured with WebSessionContext.
I initialize my NHibernate session like this:
ISession session = null;
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    session = factory.GetCurrentSession();
    if (session == null)
        CurrentSessionContext.Unbind(factory);
}
if (session == null)
{
    session = factory.OpenSession();
    CurrentSessionContext.Bind(session);
}
return session;
And on HttpApplication.EndRequest I release it like this:
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    try
    {
        CurrentSessionContext.Unbind(factory)
            .Dispose();
    }
    catch (Exception ee)
    {
        Logr.Error("Error uninitializing session", ee);
    }
}
So as far as I know, there should be only one connection per request life cycle. While processing the request, code is executed sequentially (ASP.NET MVC 3). So it doesn't look like any concurrency is possible here. Can I conclude that no connections are shared in this case?
It's not clear to me whether the request threads share the same connection or not. If they don't, then you should not be having these issues.
Assuming that you are indeed sharing the connection object across multiple threads, you should use some locking mechanism, as the SqliteConnection isn't thread-safe (an old post, but the SQLite library maintained as part of Mono evolved from System.Data.SQLite found on http://sqlite.phxsoftware.com).
So assuming that you don't lock around using the SqliteConnection object, can you please try it? A simple way to accomplish this could look like this:
static readonly object _locker = new object();

public void ProcessRequest()
{
    lock (_locker) {
        using (IDbCommand dbcmd = conn.CreateCommand()) {
            string sql = "INSERT INTO foo VALUES ('bar')";
            dbcmd.CommandText = sql;
            dbcmd.ExecuteNonQuery();
        }
    }
}
You may however choose to open a distinct connection with each thread to ensure you don't have any more threading issues with the SQLite library.
EDIT
Following up on the code you posted, do you close the session after committing the transaction? If you don't use an ITransaction, do you flush and close the session? I'm asking since I don't see it in your code, and I see it mentioned in https://stackoverflow.com/a/43567/610650
I also see it mentioned on http://nhibernate.info/doc/nh/en/index.html#session-configuration:
Also note that you may call NHibernateHelper.GetCurrentSession(); as many times as you like, you will always get the current ISession of this HTTP request. You have to make sure the ISession is closed after your unit-of-work completes, either in Application_EndRequest event handler in your application class or in a HttpModule before the HTTP response is sent.

What is the best architecture we can use for a Netty Client Application?

I need to develop a Netty-based client that accepts messages from a notification server and forwards these messages as HTTP requests to another server in real time.
I have already coded a working application which does this, but I need to add multi-threading to it.
At this point, I am getting confused about how to handle Netty Channels inside a multi-threaded program, as I am used to the conventional approach of sockets and threads.
When I tried to separate the Netty requesting part into a method, it complained about the Channels not being closed.
Can anyone guide me on how to handle this?
I would like to use ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor, but I am really new to this.
Help with some examples would be a real favour at this time.
Thanks in advance.
Just add an ExecutionHandler to the ChannelPipeline. This will make sure that every ChannelUpstreamHandler added behind the ExecutionHandler is executed in a separate thread and so does not block the worker thread.
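A minimal sketch of what that could look like with the Netty 3.x API (the handler name, codec choice, and pool sizes are just placeholders, and bootstrap stands for your existing ClientBootstrap):
// one shared ExecutionHandler backed by an ordered thread pool
final ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());
        pipeline.addLast("executor", executionHandler);            // hand off to the pool from here on
        pipeline.addLast("handler", new MyNotificationHandler());  // may block (e.g. issue the HTTP request)
        return pipeline;
    }
});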
Have you looked at the example code on the Netty site? The TelnetServer looks like it does what you are talking about. The factory creates new handlers whenever it gets a connection. Threads from the Executors will be used whenever there is a new connection. You could use any thread pool and executor there, I suspect:
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),   // << change
                Executors.newCachedThreadPool())); // << change

// Configure the pipeline factory.
bootstrap.setPipelineFactory(new TelnetServerPipelineFactory());

// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
The TelnetServerHandler then handles the individual results.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    // Cast to a String first.
    // We know it is a String because we put some codec in TelnetPipelineFactory.
    String request = (String) e.getMessage();

    // Generate and write a response.
    String response;
    boolean close = false;
    if (request.length() == 0) {
        response = "Please type something.\r\n";
    }
    // ... (the rest of the handler builds other responses and may set close = true)
When the telnet server is ready to close the connection, it does this:
ChannelFuture future = e.getChannel().write(response);
if (close) {
    future.addListener(ChannelFutureListener.CLOSE);
}
