cx_Oracle: Connection keep alive

I have a program which uses cx_Oracle and receives a very large number of logs in chunks; for each chunk it does all kinds of calculations before fetching the next one.
One of the calculations can sometimes take very long (nothing I can do about that, as the data is sent to a remote server), and in the meantime I might lose the connection.
How can I keep the connection alive without messing up the fetching?
Thanks!

I have the same problem here: a firewall terminates the connection towards the client (not the database). I tried a thread that executes "executeSQL('select * from dual')" at intervals, but once the calculation in the main thread starts, the executeSQL() isn't executed until the calculation has finished.

Related

socket.io | losing messages due to frequency and volume

I have around 5700 messages (each message is a 100x100 image as a Base64 string) which I emit from the server to the client from within a for-loop, pretty fast:
[a pretty big array].forEach((imgAsBase64) => {
  io.emit('newImgFromServer', imgAsBase64)
})
The client only receives between 1700 and 3000 of them in total before I get a:
disconnected due to = transport error
socket connected
Once the socket reconnects (and the for-loop has not ended), the emission of new messages from within the loop resumes, but I have lost the previous ones forever.
How can I make sure that the client receives all of the messages every time?
This question is an interesting example of "starving the event loop". If you're in a tight for loop for some period of time with no await in the loop, then you don't let the event loop process any other events during the duration of the for loop. If some events need to be processed during that time for things to work properly, you get problems. Read on for how that applies to this case.
Both client and server need some occasional cycles to process housekeeping pings and pongs in the socket.io protocol. If you firehose messages from one end to the other in a non-stop for loop, you can starve the ability to process those housekeeping messages, and one end will think it has timed out (it has not received the housekeeping messages when it should have, which is usually a sign of a lost or inoperative connection). In reality, the housekeeping messages are sitting in the event loop waiting to be processed, but if you never give the event loop a chance to process them, the code checking for them will think they never arrived.
So, you have to make sure you give both ends enough occasional cycles to process those housekeeping messages. The typical way to do that is to just make sure you aren't firehosing messages: send N messages, then pause for a short period of time (enough time for the event loop to be able to service any incoming network events), then send N more, pause, and so on.
In addition, you could make this whole process a lot more efficient by combining a number of the Base64 strings into a single message. You can probably just batch them into arrays of 100, send one array at a time, and repeat until they are all sent. Then, obviously, change the client to expect an array of Base64 strings instead of a single one. This results in far fewer messages to send (which is more efficient), but you will still need to pause every so often to let each end process things in its event loop.
Exactly how many messages to send before pausing is something to figure out via trial and error, but if you put 100 images into a single message, send 10 of these larger messages (1,000 images), and then pause for even just 50ms, that should be enough time for the event loop to service any inbound ack messages from socket.io and avoid the timeout. Any pause using setTimeout() puts the setTimeout() callback in line behind most other events already waiting in the event loop, so even a short pause tends to accomplish the goal of letting the event loop process the things that were waiting to run.
If end-to-end time is super important, you could experiment with sending more messages at once and/or changing the pause time, but you don't want to end up with a setting that is close to the timeout threshold (you want some safety factor).
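Here is a minimal sketch of that batch-and-pause approach; the 'newImgBatchFromServer' event name, the images array, and the exact sizes and delays are assumptions to tune, not values from the question:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendAllImages(io, images) {
  const BATCH_SIZE = 100;   // Base64 strings per socket.io message
  const BURST = 10;         // batches to send before pausing
  const PAUSE_MS = 50;      // lets the event loop service pings/acks
  let batchesSent = 0;
  for (let i = 0; i < images.length; i += BATCH_SIZE) {
    // one message now carries up to BATCH_SIZE images
    io.emit('newImgBatchFromServer', images.slice(i, i + BATCH_SIZE));
    batchesSent++;
    if (batchesSent % BURST === 0) {
      await sleep(PAUSE_MS); // yield so housekeeping traffic gets processed
    }
  }
}

The client would then listen for 'newImgBatchFromServer' and iterate over the received array instead of handling one image per event.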

Querying multiple sensors regularly using NodeJS

I need to fetch the values of about 200 sensors every 15 seconds or so. To fetch the values I simply need to make an HTTP call with basic authentication and parse the response. The catch is that these sensors might be on a slow connection, so I need to allow up to 5 seconds per sensor (they usually respond much quicker, but there are always a few that are slow and time out).
So right now I have the following setup for that:
There is a master NodeJS process that is connected to my DB and knows all about the sensors. It checks regularly to see if there are new sensors or if some were deleted; it spawns a child process for every sensor, restarts a child process if it dies, and kills it if its sensor gets deleted.
Each child process makes the HTTP call to its sensor with a 5-second timeout and, if it receives the value, saves it to Redis; it runs in an infinite loop with a 15-second setTimeout (a rough sketch follows below).
A third process copies all the values from Redis to the main MySQL DB.
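For reference, a minimal sketch of such a child process, assuming Node's built-in http module and the node-redis v4 client; the SENSOR_URL and SENSOR_ID environment variables and the Redis key format are placeholders, not details from the question:

const http = require('http');
const { createClient } = require('redis');

const SENSOR_URL = process.env.SENSOR_URL;  // e.g. http://user:pass@host/value
const REDIS_KEY = `sensor:${process.env.SENSOR_ID}`;
const TIMEOUT_MS = 5000;    // give a slow sensor at most 5 seconds
const INTERVAL_MS = 15000;  // poll every 15 seconds

const redisClient = createClient();

function pollOnce() {
  const req = http.get(SENSOR_URL, { timeout: TIMEOUT_MS }, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      redisClient.set(REDIS_KEY, body).catch(console.error);
      setTimeout(pollOnce, INTERVAL_MS); // schedule the next poll
    });
  });
  req.on('timeout', () => req.destroy());               // slow sensor: give up
  req.on('error', () => setTimeout(pollOnce, INTERVAL_MS));
}

redisClient.connect().then(pollOnce);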
That was a working solution for half a year, but after a major system upgrade (from Ubuntu 14.04 to 18.04, and thus every package upgraded as well) it seems to leak memory, and I can't figure out where.
After starting up, the processes together take about 1.5 GB of memory, but after a day or so this goes up to 3 GB and keeps growing, and before running out of memory I need to kill all node processes and restart the whole thing.
So now I am trying to find more efficient ways to achieve the same result (query around 200-300 URLs every 15 seconds and store the results in MySQL). At the moment I'm thinking of ditching Redis: the child processes would communicate with their master process, and the master process would write to MySQL directly. This way I don't need to load the Redis library into every child process, and that might save me some time.
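A rough sketch of that idea, assuming the mysql2 driver and Node's built-in child_process IPC; the file name, environment variable, and table/column names are all placeholders:

// in each child (e.g. poll-sensor.js): after reading a sensor value,
// report it to the master over the built-in IPC channel
function report(value) {
  process.send({ sensorId: process.env.SENSOR_ID, value });
}

// in the master process: fork the children and write their values to MySQL
const { fork } = require('child_process');
const mysql = require('mysql2/promise');

async function main(sensors) {
  const db = await mysql.createConnection({ /* MySQL connection options */ });
  for (const sensor of sensors) {
    const child = fork('./poll-sensor.js', {
      env: { ...process.env, SENSOR_ID: sensor.id },
    });
    child.on('message', ({ sensorId, value }) => {
      db.execute('INSERT INTO readings (sensor_id, value) VALUES (?, ?)',
                 [sensorId, value]).catch(console.error);
    });
  }
}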
So I need ideas on how to reduce the memory usage of this application. (I'm limited to PHP and NodeJS, mainly because of my knowledge, so writing a native daemon is probably out of the question.)
Thanks!
The solution was easier than I thought: I rewrote the child process as a native bash script, and that brought the memory usage down to almost zero.

VB6 handling Multiple connections (Multi-Threading)

I am wondering what is the best stable way to handle multiple connections at the same time?
I am using VB6 with the Winsock APIs directly, not the Winsock control; I tried the control before and it is not multithreaded either.
At the moment everything runs on a single thread, which is not efficient: while the thread is busy sending data, the other connections are delayed until the thread is free.
I am using WSAAsyncSelect with non-blocking sockets.
Since VB6 isn't stable at multithreading, I am thinking of writing an ASM DLL and calling it from VB6 to handle the connections. But what is the best way: create a thread for each connection and terminate it after recv, or keep each connection open the whole time until the other side closes it?
The machine this runs on does not have great specifications, so more threads consume more resources.
I don't have much knowledge about what performs better, so please share your opinions.
Also, how can I be sure that all the data has been sent by the send function on a non-blocking socket?
Should I loop over send and count how many bytes were sent each time, or just call it once? I have noticed that if I send data too large to be processed in one call, the window I registered with WSAAsyncSelect to handle network events gets notified again that more data can be sent; but how can I tell whether that notification belongs to this partial send, or to a recv?
Note: at most about 100 connections will be connected at the same time.
Here is an example of a problem I am having while sending a picture (5 KB) over the network: sometimes it is all received with a single recv call, while other times it is split into pieces.
If Bytes = PicSize Then
    MsgBox "All data are sent 1 time"
Else
    MsgBox "there is more data left"
    While Bytes <> PicSize
        bytesReceived = recv(s, Buffer(Bytes), UBound(Buffer), 0)
        If bytesReceived > 0 Then
            Bytes = Bytes + bytesReceived
        End If
        DoEvents
    Wend
End If
The recv call keeps failing with WSAEWOULDBLOCK, so I end up stuck in an infinite loop.
Any suggestions?
You've asked more than one question, which makes it hard to answer. Whether you use async Winsock directly or the WinSock control, it is important to realize that when you think you are "busy sending data", all you are really doing is passing the data to the network stack. That happens quickly and your code continues on. The data will, hopefully, eventually make it to the destination; that part does not happen as quickly, but by then your code has moved on to its next task.

ZMQ socket queue

I'm pretty new to ZMQ and I'm working with the NodeJS binding. I have an application that uses PUSH/PULL sockets. On one side I PUSH data to some nodes, which receive and process it through the PULL socket. Sometimes I have to kill one or more nodes of my application, and it can happen that those nodes still have unprocessed data in the PULL socket. I don't want to lose this data, so I was wondering if there is a way to access ZMQ's PULL socket queue to check whether there are still messages to be processed.
I actually couldn't find anything in the specs of ZMQ and the NodeJS binding, so maybe I'm getting the whole concept wrong.
If you kill a process, then any data in that process's buffers will be lost.
Instead of killing the process forcefully, you should always find a way to let processes shut down gracefully. Here, you can send a "KILL" message to the PULL socket; the process can then read it and exit when it receives it. If you can first flush the socket's buffer (this depends on whether other processes are still sending to it), you can do that and then exit when there are no more messages to read.
I'm posting the solution I found. It's not really a solution, since I'm not using the ZMQ socket itself to check that there are no more messages in the queue; it's just a workaround/hack that came to mind to make things work. I didn't have time to write the queue handling myself, so here's how I solved the problem:
Whenever a process receives a message to work on, it stores a timestamp via new Date().getTime(). When a process needs to be killed, a kill message is sent to it. On receiving that message, the process starts a timer with setInterval: every x seconds (I used 10; it could be more or less) it checks whether the last received message is old enough (it takes a fresh timestamp, subtracts the saved one, and if the difference is greater than y, 100 seconds in my case, it is old enough). If so, it means no more messages have been received (the queue is empty), so the process exits; otherwise it does nothing.
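A minimal sketch of that workaround, assuming the classic Node zmq binding's 'message' event, a pullSocket set up elsewhere, a plain 'KILL' payload as the kill message, and a processMessage() placeholder for the real work:

const CHECK_MS = 10 * 1000;        // check every 10 seconds
const IDLE_LIMIT_MS = 100 * 1000;  // treat 100s of silence as an empty queue

let lastMessageAt = Date.now();

pullSocket.on('message', (msg) => {
  if (msg.toString() === 'KILL') {
    // drain mode: exit only once no message has arrived for IDLE_LIMIT_MS
    const timer = setInterval(() => {
      if (Date.now() - lastMessageAt > IDLE_LIMIT_MS) {
        clearInterval(timer);
        process.exit(0);
      }
    }, CHECK_MS);
    return;
  }
  lastMessageAt = Date.now();
  processMessage(msg); // application-specific work (placeholder)
});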

Show a splash screen while a database connection (that might take a long time) runs

I am trying to show a splash screen, without freezing up the application, while it connects to a database. Normal connections (to MSSQL via ADO) take about 300 ms, and this does not cause the main thread to show "not responding" on Windows.
However, in the case of (a) a network error or (b) a configuration mistake (an invalid SQL Server hostname/instance), it takes 60 seconds to time out. This not only makes the application unresponsive, it also makes it almost impossible to show any error or message while it is frozen. I could pop up a message before I start the connection, but there is really no good option once the main thread blocks for 60 seconds.
The solution seems to be to move the connection to a background thread. This led to the following design:
A TThread subclass that makes the background connection, and a sync object such as a TEvent used to send a signal back to the main thread.
A loop in the main thread with this code:
BackgroundThread.StartConnecting;
while not BackgroundThread.IsEventSignalled do begin
    Application.ProcessMessages; // keep message pump alive.
end;
// continue startup (reports error if db connection failed)
Is this the right way to go? My hesitations involve the following elements of the above solution:
A. I would be calling Application.ProcessMessages, which I consider an extreme code smell. (This might be a permissible exception to that rule.)
B. I'm introducing threads into the startup of an application, and I'm worried about introducing bugs.
If anyone has a reference implementation that is known to be free of race conditions, that can do a background connection to ADO, and is known to be a safe approach, that would be really helpful. Otherwise general tips or partial examples are good.
Because of the known limitation that every thread must use its own ADO connection (i.e. you cannot use a connection object created in another thread), the only option I can think of is to create a thread that makes a connection to the database and, after the connection is established or the timeout is reached, signals the main thread about the outcome and closes/destroys the connection. The main thread can meanwhile display the splash or progress screen while waiting for a message from that thread. This rules out the failure cases of a wrong password or an unreachable host, under the reasonable assumption that if the second thread could connect, the main thread will be able to connect right after.
There are several methods for inter-thread communication. I usually use Windows messages: first I define a custom message and a message handler in the form, then, when I need to notify the main thread from the worker thread, I call the PostMessage function.
You can also use a threading library, e.g. OmniThreadLibrary.
