I get the exception "Cannot write to closing transport" raised from aiohttp.http_writer.StreamWriter#_write, but only in a fraction of cases.
The relevant snippet:
session: aiohttp.ClientSession

async with session.get(url, timeout=60) as response:
    txt = await response.text()
    response.close()
    return txt
What is going on? I don't think the server side is closing the socket.
Answer: We should create a new session for each request. There is also no need to call response.close() explicitly, as the context manager handles that.
async with aiohttp.ClientSession() as session:
    async with session.get(url, timeout=60) as response:
        txt = await response.text()
        return txt
It means that your connection was already closed. It occurs when the client breaks the connection but the server still tries to respond to it.
Remove response.close() from your code.
This happens if a client prematurely disconnects before reading some or all of the response. You may encounter that case frequently if you're either dealing with mobile clients (which may switch between WiFi and mobile networks) or if you have views that take some time, but clients have a lower timeout. Since you can't control how clients talk to your service, it's probably safe to ignore this.
aiohttp 3.6.0 introduced code to silence this exception.
Related
I have a simple script reading messages from a websocket server and I don't fully understand the keep-alive system.
I'm getting two errors with practically the same meaning: "sent 1011 (unexpected error) keepalive ping timeout; no close frame received" and "no close frame received or sent".
I am using the websockets module (link to the docs).
I'd like to know when it is my job to send a ping or a pong, or whether I should change the timeout to a longer period, since I'll be running multiple connections at the same time (to the same server but on a different channel).
I have tried running another asyncio task which pinged the server every 10 to 20 seconds, and also replying only after I receive a packet (which in my case can arrive 1 second apart, or the next one might not come until the next day), both with a normal websocket.ping() and with a custom payload (the heartbeat JSON string {"event": "bts:heartbeat"}).
One solution I can see is to just reopen the connection after I get the error but it feels wrong.
async with websockets.connect(self.ws) as websocket:
    packet = {
        "event": "bts:subscribe",
        "data": ...,
    }
    await websocket.send(json.dumps(packet))
    await websocket.recv()  # reply
    try:
        async for message in websocket:
            tr = json.loads(message)
            await self.send(tr)
            packet = {"event": "bts:heartbeat"}
            await websocket.pong(data=json.dumps(packet))
    except Exception as e:  # websockets.ConnectionClosedError
        await self.send_status(f"Subscription Error: {e}", 0)
Keep-alive packets are sent automatically by the library (see https://websockets.readthedocs.io/en/latest/topics/timeouts.html#keepalive-in-websockets), so there should be no need to do that yourself.
In your case it seems that the server is not responding to your client's ping in a timely manner. This FAQ entry and its recommendation to catch ConnectionClosed look relevant.
I am using Rascal.js (which uses amqplib) for my messaging logic with RabbitMQ in a Node.js app.
I am using something similar to their example at my project's startup, which creates a permanent instance, "registers" all of my subscribers, and redirects messages when they arrive at the queue (in the background).
My issue is with the publishers. There are HTTP requests from outside which should trigger my publishers. A user clicks a create button of sorts, which leads to a certain flow of actions. At some point it reaches the point at which I need to use a publisher.
And here I am not sure about the right approach. Do I need to open a new connection every time I need to publish a message, and close it after it ends? Or maybe I should implement this in a way that keeps the same connection open for all of the publishers? (I'm actually not so sure how to create it in a way that it can be accessed from other parts of my app.)
At the moment I am using the following:
async publishMessage(publisherName, message) {
    const dynamicSettings = setupDynamicVariablesFromConfigFiles(minimalPublishSettings);
    const broker = await Rascal.BrokerAsPromised.create(Rascal.withDefaultConfig(dynamicSettings.rascal));
    broker.on('error', async function (err) {
        loggerUtil.writeToLog('error', 'publishMessage() broker_error_event: ' + publisherName + err + err.stack);
        await broker.shutdown();
    });
    const publication = await broker.publish(publisherName, message);
    try {
        publication.on('error', async function (err) {
            loggerUtil.writeToLog('error', 'publishMessage() publish_error_event: ' + err + err.stack);
            await broker.shutdown();
        }).on('success', async (messageId) => {
            await broker.shutdown();
        }).on('return', async (message) => {
            loggerUtil.writeToLog('error', 'publishMessage() publish_return_event: ' + message);
            await broker.shutdown();
        });
    } catch (err) {
        loggerUtil.writeToLog('error', 'Something went wrong ' + err + err.stack);
        await broker.shutdown();
    }
}
I use this function from different parts of my app when I need to publish messages.
I thought to just add broker.shutdown() for all of the endpoints, but at some point after an error I got an exception about closing a connection which was already closed, and this got me worried about the shutdown approach (which is probably not a good one).
I tried doing that (the commented code), but I think it isn't working well in certain situations. If everything is OK it goes to "success" and then I can close it.
But one time I had an error instead of success, and when I tried to use broker.shutdown() it gave me another exception which crashed the app. I think it is related to this -
https://github.com/squaremo/amqp.node/issues/111
I am not sure what the safest way to approach this might be.
Edit:
Actually, now that I think about it, the exception might be related to me trying to shut down the broker in the catch {} block as well. I will continue to investigate.
Rascal is designed to be initialised once at application startup, rather than created per HTTP request. Your application will be extremely slow if you use it in this way and, depending on how many concurrent requests you need to handle, could easily exceed the maximum number of connections you can make to the broker. Furthermore, you will get none of the benefits that Rascal provides, such as failed connection recovery.
If you can pre-determine the queue or exchange you need to publish to, then configure Rascal at application start-up (prior to your HTTP server) and share the publisher between requests. If you are unable to determine the queue or exchange until you receive the HTTP request, then Rascal is not an appropriate choice. Instead you're better off using amqplib directly, but you should still establish a shared connection and channel. You will have to handle connection and channel errors manually though, otherwise they will crash your application.
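For example, a minimal sketch of the shared-broker approach (the module, the rascal-config.json file, and the publication name demo_publication are illustrative assumptions, not taken from the question):

'use strict';

// broker.js - create the Rascal broker once at application startup and share it.
const Rascal = require('rascal');
const config = require('./rascal-config.json'); // hypothetical static config

let broker;

async function initBroker() {
    broker = await Rascal.BrokerAsPromised.create(Rascal.withDefaultConfig(config));
    // Without an 'error' listener, connection errors would crash the process.
    broker.on('error', (err) => console.error('Broker error', err));
    return broker;
}

async function publish(publicationName, message) {
    // Reuse the broker created at startup instead of opening a connection per request.
    const publication = await broker.publish(publicationName, message);
    publication.on('error', (err) => console.error('Publisher error', err));
    return publication;
}

module.exports = { initBroker, publish };

HTTP handlers can then simply call publish('demo_publication', message) without creating or shutting down a broker on every request.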
I have a security issue: someone is trying to call random APIs that are not supported on our server but are commonly used as administrator APIs in general. I set up the code below to handle 404s by not responding to this attack.
url-not-found-handler.js
'use strict';

module.exports = function () {
    // 4XX - URLs not found
    return ((req, res, next) => {
    });
};
What happens to the client is that it waits until the server responds, but I want to know whether this will affect the performance of my Express.js server, and also what happens behind the scenes in the server without res.send() or res.end().
According to the documentation of res.end():
Ends the response process. This method actually comes from Node core,
specifically the response.end() method of http.ServerResponse.
And then from response.end:
This method signals to the server that all of the response headers and
body have been sent; that server should consider this message
complete. The method, response.end(), MUST be called on each response.
If you leave your request hanging, the HTTP server will keep data about it, which means that if you let many requests hang, your memory usage will grow and reduce your server's performance.
As for the client, it is going to have to wait until it hits a request timeout.
The best thing to do with a bad request is to reject it immediately, which frees the memory allocated for the request.
You cannot prevent bad requests (maybe have a firewall block requests from certain IP addresses?). The best you can do is to handle them as fast as possible.
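For example, a minimal sketch of that immediate-rejection approach, reusing the module shape from the question (the status code and the empty body are just one choice):

'use strict';

// url-not-found-handler.js - reject unknown URLs right away so the request
// does not sit in memory until the client times out.
module.exports = function () {
    return (req, res, next) => {
        res.status(404).end();
    };
};

(res.sendStatus(404) would also work, though it sends a short text body.)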
I am using Node.js to implement a server application with XMPP. I am following the guide to authorize an XMPP connection. My problem occurs exactly when I expect a
<success xmlns="urn:ietf:params:xml:ns:xmpp-sasl"/>
after sending the server key as SASL PLAIN authentication. It is built this way:
const key = Buffer.from('\x00' + senderId + '@gcm.googleapis.com\x00' + serverKey).toString('base64');
const message = `<auth mechanism="PLAIN"
xmlns="urn:ietf:params:xml:ns:xmpp-sasl">${key}</auth>`;
Where senderId is the number shown in the "Cloud Messaging" tab, and
serverKey is one of the server keys from the same tab. There are two types of server keys: one is the "normal" one and the other is inherited; I've used both without success.
I don't really know what I am doing wrong, or what I am missing.
The first two steps of the connection, the 'hello' and the list-of-mechanisms response from FCM, are done. However, after this, FCM closes the connection. I suspect it is related to this problem.
I would appreciate some help. Thanks.
I've contacted the Firebase support team and they have solved my problem (thanks a lot).
The thing with Node.js is that you have to avoid implementing the 'end' event on the socket, because this seems to force the socket to close, and you should keep using the same socket. Another thing is to avoid setting the socket encoding. You can still convert the received buffer to another encoding yourself, though.
With all this I can mark this question as solved.
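For illustration only (this is not the exact code from the support exchange): a connection sketch along those lines, assuming the legacy FCM XMPP endpoint fcm-xmpp.googleapis.com:5235 and a hypothetical handleStanza() parser, would skip both setEncoding() and the 'end' handler:

const tls = require('tls');

// Connect once and keep using the same socket.
const socket = tls.connect({ host: 'fcm-xmpp.googleapis.com', port: 5235 });

// No socket.setEncoding(...) and no 'end' handler; work with the raw Buffers
// and convert them to a string yourself when needed.
socket.on('data', (chunk) => {
    handleStanza(chunk.toString('utf8')); // hypothetical stanza handler
});

socket.on('error', (err) => console.error('XMPP socket error:', err));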
The only way I have found to "catch" EPIPE errors thrown asynchronously by a socket timing out or closing prematurely is to directly attach an event handler to the socket object itself, as demonstrated in the documentation here:
https://nodejs.org/api/errors.html
const net = require('net');
const connection = net.connect('localhost');

// Adding an 'error' event handler to a stream:
connection.on('error', (err) => {
    // If the connection is reset by the server, or if it can't
    // connect at all, or on any sort of error encountered by
    // the connection, the error will be sent here.
    console.error(err);
});
This works, but is in many cases unhelpful -- if you're accessing a database or another service that has a node driver, the request and socket objects are likely inaccessible from your app code.
The most obvious solution is "don't do things that generate these errors" but since any non-trivial application is dependent on other services, no amount of input-checking in advance can guarantee that the service on the other end won't hang up unexpectedly, throwing an EPIPE in your code and in all likelihood crashing Node.
So, the options for handling this situation seem to be:
Let the error crash your app and use nodemon or supervisor to automatically restart. This isn't clean, but it seems like the only way to really guarantee you'll get back up and running safely.
Write custom connection clients for dependent services. This lets you attach error handlers where known problems could occur. But it violates DRY and means that you're now on the hook for maintaining your own custom client code when otherwise reasonable open source solutions already exist. Basically, it adds a huge maintenance burden for a slightly cleaner solution to a fairly rare problem.
Am I missing something, or are those the best options available?