How to increase max line length in Apache James?

I could not find anything in the JAMES 3.3 documentation: how do I increase the maximum line length the SMTP server will accept?
I believe the default value is about 998 characters, and I want to increase it so that legitimate mails with long lines are not rejected.

The maximum line length for SMTP servers is typically around 1000 characters because RFC 5321 and RFC 5322 limit a line to 998 characters plus the terminating CRLF, so a default of 998 is already at the specification's maximum. Raising it means accepting mail that is technically non-compliant, and other servers in the delivery path may still reject it.
If you're receiving legitimate emails with long lines that are getting rejected, there are a few possible ways to resolve this issue:
Configure the sending server to use a smaller line length: encourage the sender to reduce the line length in their emails to meet the limit.
Use a different SMTP server: If increasing the line length limit is not possible in JAMES, you could consider using a different SMTP server that supports a larger line length.
Consider alternative communication methods: If the rejected emails contain large attachments, you may want to consider alternative communication methods such as file-sharing services or cloud storage.
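Since that roughly-1000-character figure is what receiving servers commonly enforce, it can also help to detect overlong lines on the sending side before a receiver rejects the mail. Here is a minimal Node.js sketch (illustrative only, not an Apache James setting):

// Flag message bodies containing lines longer than the 998-character limit
// (excluding CRLF). Purely illustrative.
function hasOverlongLines(body, limit = 998) {
  return body.split(/\r\n|\n/).some(line => line.length > limit);
}

console.log(hasOverlongLines('short line\r\n' + 'x'.repeat(1200))); // true
console.log(hasOverlongLines('all lines are short\r\nreally'));     // false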

Related

What is the maximum message length for a MQTT broker?

I am using the node.js mosca MQTT broker for some internet of things (iot) application.
https://github.com/mcollina/mosca
What is the maximum message length that a topic can receive for the mosca broker? What are the factors that constrain the message length?
If I want to increase the message length, is there a configuration parameter I can modify or which part of the code can I change?
It's not entirely clear what you're asking here, so I'll answer both possibilities.
The length of the actual topic string is at most 65,535 bytes. This is a limit imposed by the MQTT spec; you can't change it. It is also worth noting that the topic is encoded as UTF-8, so you may have fewer than 65,535 characters available.
The payload of the message is limited to 268,435,456 bytes. Again, this is defined by the spec.
If you are routinely approaching either of these limits you should be thinking about whether what you are doing is sensible.
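If you want to guard against those limits in application code, here is a minimal Node.js sketch using the mqtt client package (the broker URL and topic are placeholders; mosca itself is unchanged):

// Validate the MQTT spec limits before publishing.
const mqtt = require('mqtt');

const MAX_TOPIC_BYTES = 65535;        // UTF-8 encoded topic length limit (spec)
const MAX_PAYLOAD_BYTES = 268435456;  // 256 MB payload limit (spec)

function safePublish(client, topic, payload) {
  if (Buffer.byteLength(topic, 'utf8') > MAX_TOPIC_BYTES) {
    throw new Error('topic exceeds the MQTT spec limit');
  }
  if (Buffer.byteLength(payload) > MAX_PAYLOAD_BYTES) {
    throw new Error('payload exceeds the MQTT spec limit');
  }
  client.publish(topic, payload);
}

const client = mqtt.connect('mqtt://localhost:1883');
client.on('connect', () => safePublish(client, 'sensors/temperature', '21.5'));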

Modify Header server: ArangoDB

Something that seems easy, but I can't find a way to do it. Is it possible to change the header sent in a response,
server: ArangoDB
to something else (in order to be less verbose and more secure)?
Also, I need to store a large string (a very long URL plus a lot of information) in a document, but what is the max length of a joi.string?
Thanks,
The internal string limit in V8 (the JavaScript engine used by ArangoDB) is around 256 MB, so 256 MB is the absolute maximum string length that can be used from JavaScript code executed inside ArangoDB.
Regarding maximum URL lengths as mentioned above: URLs should not get too long, because very long URLs are not portable across browsers. In practice several browsers enforce a URL length limit of around 64 K, so URLs should definitely not get longer than that. I would recommend using much shorter URLs and passing huge payloads in the HTTP request body instead. This may mean switching from HTTP GET to HTTP POST or HTTP PUT, but it is at least portable.
Finally regarding the HTTP response header "Server: ArangoDB" that is sent by ArangoDB in every HTTP response: starting with ArangoDB 2.8, there is an option to turn this off: --server.hide-product-header true. This option is not available in the stable 2.7 branch yet.
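To verify the header is actually gone once that option is enabled, here is a quick check from Node (the endpoint and port are the ArangoDB defaults and may differ in your setup):

// Print the Server header (if any) returned by a local ArangoDB instance.
const http = require('http');
http.get('http://127.0.0.1:8529/_api/version', res => {
  console.log('server header:', res.headers['server'] || '(not present)');
});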
No, there currently is no configuration to disable the server: header in ArangoDB.
I would recommend putting NGiNX or a similar HTTP proxy in front of ArangoDB to achieve that (and to apply other possible hardening to your service).
The implementation of server header can be found in lib/Rest/HttpResponse.cpp.
Regarding Joi -
I only found how to specify a string length in Joi, not what its maximum could be.
I guess the general JavaScript limit for strings should be taken into account.
However, in practice URLs shouldn't exceed roughly 2,000 characters, so that effectively becomes the limit.
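As a practical illustration, a Joi schema can enforce such a cap at validation time; the 2,000-character figure below is just that practical assumption, not a Joi or ArangoDB requirement:

// Cap URL length at validation time (the numbers are assumptions, not Joi limits).
const Joi = require('joi');

const docSchema = Joi.object({
  url: Joi.string().uri().max(2000),    // reject overly long URLs early
  info: Joi.string().max(1024 * 1024)   // arbitrary cap, far below V8's string limit
});

const { error } = docSchema.validate({ url: 'https://example.com/some/long/path', info: 'details' });
if (error) console.error(error.message);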

phpmailer error with long email addresses

I run a list of subscribed webmasters and recently migrated to PHPMailer.
I have my PHPMailer script working fine. It takes values from my MySQL database and sends them an email, but sometimes I get the error: Mailer Error: You must provide at least one recipient email address.
After some tests, I found the error is displayed when the full email address is more than 82 characters.
As you know, working with email addresses of webmasters of SEO micro-niche sites can give me a lot of addresses with more than 82 characters, since their domains are very long.
Is there any setting to change for that? I don't know why it's limited, since 82 characters is less than the 256 characters allowed for any email address.
Any help? I searched Google and Stack Overflow for this but found nothing.
Thank you very much.
You could try using PHPMailer's validateAddress() method before trying to send, though it's used internally anyway when you call addAddress().
The unit tests contain a long list of valid and invalid addresses that test many corner cases.
I would guess that you are not running into an overall limit, but a lower limit within the domain name. Though the overall address can be 255 chars, elements within it have lower limits; for example, each label within a domain (e.g. example and com in example.com) has a limit of 63 chars. 82 chars sounds like this limit plus a few other bits, like user@&lt;somethingover63charslong&gt;.com. If this is like the addresses you have, that's hard luck; you can't use invalid addresses and expect them to work. It's the addresses that are the problem.
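To illustrate that per-label limit, here is a small Node.js sketch (a hypothetical helper, not part of PHPMailer): each dot-separated label in the domain part may be at most 63 octets, regardless of the overall address length.

// Check whether every label in the domain part stays within 63 octets.
function domainLabelsValid(address) {
  const atIndex = address.lastIndexOf('@');
  if (atIndex === -1) return false;
  const domain = address.slice(atIndex + 1);
  return domain
    .split('.')
    .every(label => label.length > 0 && Buffer.byteLength(label, 'utf8') <= 63);
}

console.log(domainLabelsValid('user@' + 'a'.repeat(64) + '.com')); // false
console.log(domainLabelsValid('user@example.com'));                // true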

How much data can I send through a socket.emit?

So I am using node.js and socket.io. I have this little program that takes the contents of a text box and sends it to the node.js server. Then, the server relays it back to other connected clients. Kind of like a chat service but not exactly.
Anyway, what if the user were to type 2-10k worth of text and try to send that? I know I could just try it out and see for myself but I'm looking for a practical, best practice limit on how much data I can do through an emit.
As of v3, socket.io has a default message limit of 1 MB. If a message is larger than that, the connection will be killed.
You can change this default by specifying the maxHttpBufferSize option, but consider the following (which was originally written over a decade ago, but is still relevant):
Node and socket.io don't have any built-in limits. What you do have to worry about is the relationship between the size of the message, the number of messages sent per second, the number of connected clients, and the bandwidth available to your server – in other words, there's no easy answer.
Let's consider a 10 kB message. When there are 10 clients connected, that amounts to 100 kB of data that your server has to push out, which is entirely reasonable. Add in more clients, and things quickly become more demanding: 10 kB * 5,000 clients = 50 MB.
Of course, you'll also have to consider the amount of protocol overhead: per packet, TCP adds ~20 bytes, IP adds 20 bytes, and Ethernet adds 14 bytes, totaling ~54 bytes. Assuming an MTU of 1500 bytes, you're looking at 8 packets per client (disregarding retransmits). This means you'll send 8*54 = 432 bytes of overhead + 10 kB payload = 10,672 bytes per client over the wire.
10.4 kB * 5000 clients = 50.8 MB.
On a 100 Mbps link, you're looking at a theoretical minimum of 4.3 seconds to deliver a 10 kB message to 5,000 clients if you're able to saturate the link. Of course, in the real world of dropped packets and corrupted data requiring retransmits, it will take longer.
Even with a very conservative estimate of 8 seconds to send 10 kB to 5,000 clients, that's probably fine in a chat room where a message comes in every 10-20 seconds.
So really, it comes down to a few questions, in order of importance:
How much bandwidth will your server(s) have available?
How many users will be simultaneously connected?
How many messages will be sent per minute?
With those questions answered, you can determine the maximum size of a message that your infrastructure will support.
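As a rough illustration of that calculation, here is a back-of-the-envelope sketch; all the input numbers are assumptions about a particular deployment, not Socket.IO limits:

// Estimate the largest payload per message that fits a bandwidth budget.
function maxMessageBytes({ bandwidthBitsPerSec, clients, messagesPerSec, overheadPerMessage = 432 }) {
  const bytesPerSec = bandwidthBitsPerSec / 8;
  const perClientBudget = bytesPerSec / (clients * messagesPerSec);
  return Math.max(0, Math.floor(perClientBudget - overheadPerMessage));
}

// 100 Mbps link, 5,000 clients, one broadcast every 10 seconds:
console.log(maxMessageBytes({
  bandwidthBitsPerSec: 100e6,
  clients: 5000,
  messagesPerSec: 0.1
})); // ~24,500 bytes per message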
The limit is 1 MB by default.
Use this example config to specify a custom maxHttpBufferSize:
const io = require("socket.io")(server, {
  maxHttpBufferSize: 1e8,
  pingTimeout: 60000
});
1e8 = 100,000,000 bytes (100 MB), which should be enough for very large responses/emits.
pingTimeout: large emits take longer to transfer, so you may need to increase the ping timeout as well.
Read more from Socket.IO Docs
If the problem remains after these settings, check the configuration of any HTTP proxy in front of the Socket.IO app (e.g. client_max_body_size or limit_rate in nginx) as well as client/server firewall rules.
2-10k is fine; there aren't any enforced limits or anything, it just comes down to bandwidth and practicality. 10k is small in the grand scheme of things, so you should be fine if that's somewhat of an upper bound for you.

Probability of finding TCP packets with the same payload?

I had a discussion with a developer earlier today about identifying TCP packets going out on a particular interface with the same payload. He told me that the probability of finding a TCP packet with an identical payload (even if the same data is sent out several times) is very low, due to the way TCP packets are constructed at the system level. I was aware this may be the case due to the system's MTU settings (usually 1500 bytes) etc., but what sort of probability am I really looking at? Are there any specific protocols that would make it easier to identify matching payloads?
It is the protocol running over TCP that defines the uniqueness of the payload, not the TCP protocol itself.
For example, you might naively think that HTTP requests would all be identical when asking for a server's home page, but the referrer and user agent strings make the payloads different.
Similarly, if the response is dynamically generated, it may have a date header:
Date: Fri, 12 Sep 2008 10:44:27 GMT
So that will render the response payloads different. However, subsequent payloads may be identical, if the content is static.
Keep in mind that the actual packets will be different because of differing sequence numbers, which are supposed to be incrementing and pseudorandom.
Chris is right. More specifically, two or three pieces of information in the packet header should be different:
the sequence number, which is intended to be unpredictable and increases with the number of bytes transmitted and received;
the timestamp, a field containing two timestamps (although this field is optional);
the checksum, since both the payload and header are checksummed, including the changing sequence number.
EDIT: Sorry, my original idea was ridiculous.
You got me interested so I googled a little bit and found this. If you wanted to write your own tool you would probably have to inspect each payload, the easiest way would probably be some sort of hash/checksum to check for identical payloads. Just make sure you are checking the payload, not the whole packet.
As for the statistics I will have to defer to someone with greater knowledge on the workings of TCP.
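If you go the hashing route, here is a minimal Node.js sketch (assuming the TCP payloads have already been extracted as Buffers, e.g. by a capture library):

// Detect duplicate payloads by hashing each one.
const crypto = require('crypto');
const seen = new Map();

function isDuplicatePayload(payload) {
  const digest = crypto.createHash('sha256').update(payload).digest('hex');
  const count = (seen.get(digest) || 0) + 1;
  seen.set(digest, count);
  return count > 1; // true if an identical payload was seen before
}

console.log(isDuplicatePayload(Buffer.from('GET / HTTP/1.1\r\n\r\n'))); // false
console.log(isDuplicatePayload(Buffer.from('GET / HTTP/1.1\r\n\r\n'))); // true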
Sending the same PAYLOAD is probably fairly common (particularly if you're running some sort of network service). If you mean sending out the same tcp segment (header and all) or the whole network packet (ip and up), then the probability is substantially reduced.
