Is there a hook that fires after isc-dhcp-server sends the DHCPACK?

I want to run a script on the server after isc-dhcp-server sends the DHCPACK packet to the client.
The only solution I have found is running a script that parses the DHCP log, but that is not efficient.
Are there any other ways?

Thanks, I found it:
REFERENCE: EVENTS
There are three kinds of events that can happen regarding a lease, and it is possible to declare statements that occur when any of these events happen. These events are the commit event, when the server has made a commitment of a certain lease to a client, the release event, when the client has released the server from its commitment, and the expiry event, when the commitment expires.
To declare a set of statements to execute when an event happens, you must use the on statement, followed by the name of the event, followed by a series of statements to execute when the event happens, enclosed in braces.
the link is:
https://www.isc.org/wp-content/uploads/2018/02/dhcp44.html#REFERENCE:%20EVENTS
and the example is:
https://jpmens.net/2011/07/06/execute-a-script-when-isc-dhcp-hands-out-a-new-lease/
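For completeness, a minimal sketch of such an on commit hook in dhcpd.conf, along the lines of the linked example (the script path and its arguments are illustrative, and execute() requires ISC dhcpd 4.1 or later). The commit event fires when the server commits the lease, i.e. around the time the DHCPACK is sent:

    on commit {
      # Turn the binary lease data into printable strings.
      set clip = binary-to-ascii(10, 8, ".", leased-address);
      set clhw = binary-to-ascii(16, 8, ":", substring(hardware, 1, 6));
      # Run an external script with the client's IP and MAC address.
      execute("/usr/local/sbin/dhcp-event", "commit", clip, clhw);
    }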

Related

Is there a way to overload the Node.js event loop using websockets?

I'm having issues with Node.js and the "ws" implementation of websockets (https://www.npmjs.com/package/ws). After a surge (plenty of messages in a short window of time), I have data that suggests I've "missed" a message.
I've contacted the owner of the emitter server and he assures me that all messages were sent on his side.
I've logged every message received on my side (at the start of the on('message', () => {}) handler), and I can't find the missing message, so my assumption is that it never even reached this point.
So I'm wondering:
Messages are received and treated in FIFO order. During the treatment of the current message, new ones are stacked in the Node event loop to be computed immediately after. Correct? Can that event loop grow "too big", so that new incoming messages get dropped? If so, are they dropped quietly, or does the program crash vigorously (in other words, how can I tell whether a message has been dropped this way)?
Does the 'ws' module have any known limitations on the maximum number of messages received? Does it have an internal way of dropping messages?
Is there a better alternative to the 'ws' module?
Are there any other ways to explain a "missed" message?
Thanks a lot for your insights,
I use ws in nodejs to handle large message flows from many clients simultaneously in production, and I have never had it lose messages. Each server handles several thousand messages each second from hundreds of different client connections. The way my system works, if ws dropped messages or changed their order, my users would complain loudly.
That makes me guess you are not hitting any limitation of ws.
Early in my programming work I had the not-so-bright idea of putting incoming messages in queue objects in my nodejs code and processing them "later." That led to a hideously confusing message flow through my server. It sometimes looked like I had lost ws messages. I was happy to delete all that code, and dispatch every message completely within its message event handler.
Websocket connections sometimes close abnormally. Because network. You can catch those situations with error and close event handlers. It can take a while for the sender of a message, or the receiver, to detect that a network fault of some kind disrupted its connection. That can lead to disagreement about message count between sender and receiver. It's worth investigating.
I adorn ws's connection objects with message counts ("adorn" -- add an application-specific property to an object) and put those message counts into the log when a connection closes.
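To illustrate, a minimal sketch of that pattern with a plain ws server (handleMessage is a hypothetical placeholder for your application logic):

    const WebSocket = require('ws');

    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (ws, req) => {
      // Adorn the connection object with application-specific properties.
      ws.messageCount = 0;
      ws.remote = req.socket.remoteAddress;

      ws.on('message', (data) => {
        ws.messageCount += 1;
        handleMessage(data); // dispatch completely here; no "later" queue
      });

      ws.on('error', (err) => {
        console.error('ws error from ' + ws.remote + ': ' + err.message);
      });

      ws.on('close', (code) => {
        // Log the count so sender and receiver tallies can be compared.
        console.log('connection from ' + ws.remote + ' closed (' + code +
                    '), messages received: ' + ws.messageCount);
      });
    });

    function handleMessage(data) { /* application logic (hypothetical) */ }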

What can cause "idle in transaction" for "BEGIN" statements

We have a node.js application that connects via pg-promise to a Postgres 11 server - all processes are running on a single cloud server in docker containers.
Sometimes we hit a situation where the application does not react anymore.
The last time this happened, I had a little time to check the DB via pgAdmin, and it showed that the connections were "idle in transaction" with statement BEGIN and an exclusive lock of virtualxid.
I think the situation is like this:
1. The application has started a transaction by sending the BEGIN SQL command to the DB.
2. The DB got this command, started a new transaction and thus acquired an exclusive lock of mode virtualxid.
3. Now the DB waits for the application to send the next statement(s) (until it receives COMMIT or ROLLBACK) - then it will release the exclusive lock of mode virtualxid.
4. But for some reason it does not get any more statements: I think that the node.js event loop is blocked, because at the time we see these locks, the node.js application does not log any more statements. But the web server still gets requests and reported some "upstream timed out" requests.
Does this make sense (I'm really not sure about 2. and 3.)?
Why would all transactions block at the beginning? Is this just coincidence or is the displayed SQL maybe wrong?
BTW: In this answer I found that we can set idle_in_transaction_session_timeout so that these transactions will be terminated after a timeout - which is great, but I'm trying to understand what's causing this issue.
The transactions are not blocking at all. The database is waiting for the application to send the next statement.
The lock on the transaction ID is just a technique for transactions to block each other, even if they are not contending for a table lock (for example, if they are waiting for a row lock): each transaction holds an exclusive lock on its own transaction ID, and if it has to wait for a concurrent transaction to complete, it can just request a lock on that transaction's ID (and be blocked).
If all transactions look like this, then the lock must be somewhere in your application; the database is not involved.
When looking for processes blocked in the database, look for rows in pg_locks where granted is false.
Your interpretation is correct. As for why it is happening, that is hard to say. It seems like there is some kind of bug (maybe an undetected deadlock) in your application, or maybe in node.js or pg-promise. You will have to debug at that level.
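For reference, the lock check mentioned above can be expressed as a standard catalog query (nothing application-specific):

    -- Sessions that are actually waiting for a lock, i.e. genuinely blocked
    -- in the database; "idle in transaction" sessions hold locks but will
    -- not show up here.
    SELECT pid, locktype, mode, relation::regclass AS relation, virtualxid
    FROM pg_locks
    WHERE NOT granted;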
As expected, the problems were caused by our application code. Transactions were used incorrectly:
- One of the REST endpoints started a new transaction right away, using Database.tx().
- This transaction was passed down multiple levels, but one function in the chain had a bug and passed undefined instead of the transaction to the next level.
- The lowest repository-level function therefore started a new transaction (because the transaction parameter was undefined) by calling Database.tx() a second time.
This started to fail under heavy load:
- The connection pool size was set to 10.
- When there were many simultaneous requests for this endpoint, we had a situation where 10 of the requests had started (opened the outer transaction) but had not yet reached the repository code that requests the second transaction.
- When these requests reached the repository code, they each requested a new (second) connection from the connection pool. But this call blocks, because all connections were already in use.
So we had a nasty application-level deadlock.
So the solution was to fix the application code (the intermediate function must pass down the transaction correctly). Then everything works.
Moreover, I strongly recommend setting a sensible idle_in_transaction_session_timeout and connection timeout. Then, even if such an application-level deadlock is introduced again in future versions, the application can recover automatically after the timeout.
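For example (the value is illustrative; tune it for your workload):

    -- Kill sessions that sit in an open transaction without sending
    -- statements; applied via ALTER SYSTEM plus a config reload.
    ALTER SYSTEM SET idle_in_transaction_session_timeout = '60s';
    SELECT pg_reload_conf();

On the application side, the underlying node-postgres pool accepts a connectionTimeoutMillis option, which caps how long a request for a pooled connection may wait.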
Notes:
pg-promise before v10.3.4 contained a small bug (#682) related to the connection timeout
pg-promise before v10.3.5 could not recover from an idle-in-transaction timeout and left the connection in a broken state: see pg-promise #680
Basically there was another issue: there was no need to use a transaction at all - all the functions were just reading data, so we can simply use Database.task() instead of Database.tx().
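A minimal sketch of the corrected pattern with pg-promise (the connection string, table and query are hypothetical):

    const pgp = require('pg-promise')();
    const db = pgp('postgres://user:pass@localhost:5432/app'); // hypothetical

    // Repository function: always use the context it is given, never fall
    // back to `db` (that would request a second connection from the pool).
    function getOrder(t, orderId) {
      return t.one('SELECT * FROM orders WHERE id = $1', [orderId]);
    }

    // Read-only work: task() runs on a single pooled connection without
    // wrapping it in BEGIN/COMMIT.
    db.task(t => getOrder(t, 42))
      .then(order => console.log(order))
      .catch(console.error);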

Asana API Sync Error

I currently have an application running that passes data between Asana and Zendesk.
I have webhooks created for all my projects in Asana, and all project events are sent to my webhook endpoint, which verifies the request and tries to identify the event and update Zendesk with relevant data depending on the event type (some events aren't required).
However, I have recently been receiving the following request from the webhooks:
"events": [
{
"action": "sync_error",
"message": "There was an error with the event queue, which may have resulted in missed events. If you are keeping resources in sync, you may need to manually re-fetch them.",
"created_at": "2017-05-23T16:29:13.994Z"
}
]
Now, because I don't poll the API for event updates but instead react when the events arrive, I haven't considered using a sync key; the docs suggest it is only required when polling for events. Do I need to use one with webhooks as well?
What am I missing?
Thanks in advance for any suggestions.
You're correct, you don't need to track a sync key for webhooks - we proactively try to reach out with them when something changes in Asana, and we track the events that haven't yet been delivered across webhooks (essentially, akin to us updating the sync key server-side whenever webhooks have been successfully delivered).
Basically what's happening here is that for some reason, our event queues detect that there's a problem with their internal state. This means that events didn't get recorded, or webhooks didn't get delivered after a long time. Our events and webhooks try to track changes in a best-effort sense, and there are some things that can happen with our production machines that can cause these sorts of issues, like a machine dying at an inopportune time.
Unfortunately, then, the only way to get back to a good state is to do a full scan of the projects you're tracking, which is what is meant by you may need to manually re-fetch them. Basically, a robust implementation of syncing Asana to external resources looks like:
A diff function that, given a particular task and external resource, detects what state is out of date or different between the two and chooses a merge/patch resolution (i.e. "make Zendesk look like Asana").
Receiving a webhook runs that diff/patch process for that one task in a "live" fashion.
Periodically (on script startup, say, or when webhooks/events are missed and you get an error message like this) update all resources that might have been missed by scanning the entire project and do the diff/patch for every task. This is more expensive, but should be significantly more rare.
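As an illustration, here is a sketch of the webhook side of that scheme, assuming an Express endpoint (the X-Hook-Secret handshake and request verification are omitted; diffAndPatchTask and resyncAllProjects are hypothetical stand-ins for your own diff/patch and full-scan logic):

    app.post('/asana-webhook', (req, res) => {
      res.sendStatus(200); // acknowledge quickly; Asana retries on failure
      for (const event of req.body.events || []) {
        if (event.action === 'sync_error') {
          // The event queue lost state: fall back to a full project scan.
          resyncAllProjects();
        } else {
          // Normal case: live diff/patch for the one affected resource.
          diffAndPatchTask(event);
        }
      }
    });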

Can two HTTP requests arrive at the same time? If they can, how does a Node.js server handle it?

Yesterday I was giving a presentation on Node.js.
Someone asked me the following question:

As we know, Node.js is a single-threaded server; several requests come to the server, and it pushes all of them onto the event loop. What if two requests arrive at the server at the exact same time? How will the server handle this situation?
I took a guess and replied with the following:

I guess no two HTTP requests can come to the server at the exact same time; all requests come through a single socket, so they will be queued. An HTTP request has the following format: the timestamp of the request is contained in its header, and requests may be pushed to the event loop according to the timestamp in their header.

But I'm not sure whether the answer I gave him was right or wrong.
I guess no two HTTP requests can come to the server at the exact same time; all requests come through a single socket, so they will be queued.
This part is basically correct. Incoming connections go into the event queue and one of them has to be placed in the queue first.
What if two requests arrive at the server at the exact same time? How will the server handle this situation?
Since the server is listening for incoming TCP connections on a single socket in a single process, there cannot be two incoming connections at exactly the same time. One will be processed by the underlying operating system slightly before the other. Think of it this way: an incoming connection is a set of packets over a network connection, and one of the incoming connections will have its packets arrive before the other.
Even if you had multiple network cards and multiple network links so two incoming connections could literally arrive at the server at the exact same moment, the node.js queue will be guarded for concurrency by something like a mutex and one of the incoming connections will grab the mutex before the other and get put in the event queue before the other.
The one that is processed by the OS first will be put into the node.js event queue before the other one. When node.js is available to process the next item in the event queue, then whichever incoming request was first in the event queue will start processing first.
Because node.js JS execution is single-threaded, the code processing that request will run its synchronous code to completion. If it has an async operation, it will start that async operation and return. That then allows the next item in the event queue to be processed, and the code for the second request will start running. It will run its synchronous code to completion. Like the first request, if it has an async operation, it will start that async operation and return.
At that point, after both requests have started their async operations and returned, it's just up to the event queue. When one of those async operations finishes, it will post another event to the event queue, and when the single thread of node.js is free, it will again process the next item in the event queue. If both requests have lots of async operations, their progress can interleave, and both can be "in flight" at the same time, each firing an async operation and returning until that operation completes, at which point its processing picks up again when node.js is free to process that next event.
timestamp of request is contained in it's header and they may be
pushed to the event loop depending upon their timestamp in header.
This part is not really right. Incoming events of the same type are added to the queue as they arrive. The first ones to arrive go into the queue first - there isn't any step that examines a timestamp.
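A small experiment that makes this interleaving visible (port and delay are arbitrary):

    const http = require('http');

    http.createServer((req, res) => {
      console.log('start ' + req.url);     // synchronous part runs first
      setTimeout(() => {                   // async operation: handler returns,
        console.log('finish ' + req.url);  // freeing the loop for the other
        res.end('done ' + req.url + '\n'); // request's synchronous code
      }, 1000);
    }).listen(3000);

Hitting it with two near-simultaneous requests (e.g. curl localhost:3000/a & curl localhost:3000/b) logs both start lines first and both finish lines about a second later: the two requests are serialized into the event queue, but overlap while waiting on their async operations.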
Multiple concurrent HTTP requests are not a problem. Node.js is asynchronous, and will handle multiple requests in the event-loop without having to wait for each one to finish.
For reference, example: https://stackoverflow.com/a/34857298/1157037

CQRS and DDD boundaries

I have a couple of questions to which I am not finding any exact answer. I've used CQRS before, but probably I was not using it properly.
Say that there are 5 services in the domain: Gateway, Sales, Payments, Credit and Warehouse. During the process of a user registering with the application, the front-end submits a few commands; once the user is registered, the same front-end sends a few other commands to create an order and apply for a credit.
Now, what I usually do is create a gateway, which receives all public commands, which are then validated and, if valid, transformed into domain commands. I only use events to store data, and if one service needs some action to be performed in another service, a domain command is sent directly from one service to the other. But I've seen other systems where event handlers are used for more than storing data. So my questions are: what are the limits to what event handlers can do? And is it correct to send commands between services when a specific service requires that some other service performs an action, or is it more correct to have the initial action raise an event and let the handler in the other service perform that action? I am asking this because I've seen events like INeedCreditApproved where I was expecting a domain command like ApproveCredit.
Any input is welcome.
You're missing an important concept here - Sagas (Process Managers). You have a long-running workflow and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a saga, which then emits ApproveCredit and ReserveStock commands, to be sent to the Credit and Warehouse services respectively. The saga can then listen to command success/failure events and compensate appropriately, for example by emitting a SendEmail command or whatever else.
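A minimal sketch of that shape, assuming a hypothetical message bus with on/send methods (the event and command names follow the example above):

    // Process manager: listens to events, emits commands (bus API assumed).
    class OrderSaga {
      constructor(bus) {
        bus.on('OrderAccepted', (e) => {
          bus.send('ApproveCredit', { orderId: e.orderId, amount: e.amount });
          bus.send('ReserveStock', { orderId: e.orderId, items: e.items });
        });
        bus.on('CreditDenied', (e) => {
          // Compensate: release the reservation and tell the customer.
          bus.send('ReleaseStock', { orderId: e.orderId });
          bus.send('SendEmail', { orderId: e.orderId, template: 'credit-denied' });
        });
      }
    }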
One year ago I was sending commands from event handlers, i.e. "send commands between services by event handlers when a specific service requires that some other service performs an action". Then I made the stupid decision to switch to events, as you said, "to have the initial event raise an event and let the handler in the other service perform that action in the event handler", and it worked at first. It was the most stupid decision I could have made. Now I am switching back to sending commands from event handlers.
You can see that other people like Rinat do similar things with event ports/receptors and it is working for them, I think:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck
