I have an application in which I create an email that I want the SMTP server (IIS) on the same box to deliver (the OS is Windows Server 2003, 32-bit). I send it using the "cdoSendUsingPickup" method.
Using my IMessage interface, I copy the message to the server's pickup directory. All works great as long as the message is below roughly 150 MB; the size comes from attachments to the mail. But with attachments over that limit, IMessage::GetStream() fails with 0x8007000E ("Not enough storage is available to complete this operation"). The server has plenty of disk space, so I seem to be running into a memory limitation rather than a disk-space one, but I'm finding no clues as to what's going on. Pseudo code is below - the call to GetStream() fails with a message bigger than about 150 MB and works fine with smaller messages.
DlvrMsg(IMessage piMsg)
{
    _StreamPtr pStream = NULL;

    // Fails with 0x8007000E when the message is larger than ~150 MB.
    HRESULT hr = piMsg->GetStream(&pStream);

    pStream->put_type(adTypeBinary);

    // ... then use pStream->Read() to read the bytes of the message
    // and copy them to an .eml file in the pickup directory.
    ...
}
Yes, apparently there is a limit, though Microsoft won't give hard-and-fast rules for what it is. They only say that the call to GetStream() fails inside a call to realloc(): more and more memory is reallocated until it hits some artificial ceiling.
This occurs on Server 2003 as well as Server 2008, both 32- and 64-bit. The only workaround is to use something other than CDO to send your mail.
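For the pickup-directory case specifically, one way around CDO is to drop the message file into the pickup directory yourself, streaming it in chunks so the whole message never has to sit in one allocation. A minimal sketch, assuming the message has already been composed as RFC 822 text on disk (the mailroot path and the helper name are assumptions, not part of any API):

#include <fstream>
#include <string>
#include <cstdio>

// Hypothetical sketch: stream an already-composed RFC 822 message file into
// the IIS SMTP pickup directory without going through CDO. Paths are
// assumptions - adjust for your machine.
bool DropInPickupDirectory(const std::string& srcEmlPath)
{
    const std::string pickupDir = "C:\\Inetpub\\mailroot\\Pickup\\";
    const std::string tmpPath   = pickupDir + "msg.tmp";
    const std::string finalPath = pickupDir + "msg.eml";

    std::ifstream in(srcEmlPath.c_str(), std::ios::binary);
    std::ofstream out(tmpPath.c_str(), std::ios::binary);
    if (!in || !out)
        return false;

    char buf[64 * 1024];
    while (in.read(buf, sizeof(buf)) || in.gcount() > 0)
        out.write(buf, in.gcount());
    out.close();

    // Rename last, so the SMTP service never picks up a half-written file.
    return std::rename(tmpPath.c_str(), finalPath.c_str()) == 0;
}

Writing to a temporary name and renaming at the end matters: the SMTP service polls the directory and may otherwise grab a partially written file.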
I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to get at least a few kilobytes per second. I have tried to check the speed using nRF Connect, but I get the same 2-second time span for my data transfer. This suggests that the problem is not in my app, since it also occurs with a completely different app. I also moved the code in my main loop into callbacks instead (which I probably should have done in the first place), but this didn't change things at all. I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference.

I have tried to mess with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order-of-magnitude difference, and it might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 s), which is negotiated between the master (the central) and the peripheral device. The master establishes the connection with one parameter set, and the peripheral can propose a change to it; in the end, however, the central decides which parameter set is used.
But the connection interval cannot be changed directly by an Android application, which normally acts in the central role. Instead, the application can request a connection priority, which is known to influence the connection interval.
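On the ESP32 side, the peripheral can at least propose faster parameters once the central has connected. A minimal sketch using the ESP-IDF GAP call that underlies the Arduino BLE stack (the chosen values and the way you obtain the peer address are assumptions; the central may still reject the proposal):

#include <cstring>
#include "esp_gap_ble_api.h"

// Propose a short connection interval from the peripheral after a central
// connects. peerAddress is assumed to be the 6-byte address of the central.
void proposeFastConnection(esp_bd_addr_t peerAddress)
{
    esp_ble_conn_update_params_t params = {};
    memcpy(params.bda, peerAddress, sizeof(esp_bd_addr_t));
    params.min_int = 0x06;  // 6 * 1.25 ms = 7.5 ms, the minimum allowed
    params.max_int = 0x0C;  // 12 * 1.25 ms = 15 ms
    params.latency = 0;     // no slave latency
    params.timeout = 400;   // 400 * 10 ms = 4 s supervision timeout
    esp_ble_gap_update_conn_params(&params);
}

On the Android side, the corresponding lever is BluetoothGatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH), which asks for an interval in the low tens of milliseconds instead of the much longer default. With small notification packets, throughput scales roughly with one packet per interval, which is typically where the missing order of magnitude hides.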
[Edit: I found the reason, see below]
The problem:
I created a "driver" for a device under Windows using Python (PyUSB and libusb-win32). While this software works seamlessly on multiple PCs under Windows, on my Linux (Kubuntu 18.10) test laptop a sequence of bulk writes of 512 bytes each times out after the second 512-byte transfer.
Interestingly, I also tried the same thing in VirtualBox, and it turns out that with a Windows guest on the same Linux host the error still occurs. So it is not something in the Windows side of the software; whatever goes wrong follows the Linux host.
The question:
What can happen under Linux that does not happen under Windows and causes a timeout [Errno 110]?
More information, in case it helps:
Under Windows, Wireshark shows timing differences between two of the bulk writes of 6 ms for the first one and 5 ms for every following one, while under Linux the delta is only around 3 ms, mostly resulting from a sleep operation (the relevant Python source is attached). Doubling that time changes nothing.
dmesg shows messages like 'bulk endpoint ## has invalid maxpacket 64', where ## is 0x01, 0x08 and 0x81.
The device only has one configuration.
The test laptop has only USB 3.0 connectors, whereas the Windows PCs have both USB 3.0 and 2.0. I tested them all.
Wireshark shows the device answering with another (empty) bulk transfer on every bulk write under Linux, while it does not show that under Windows. As far as I understand, that is because USBPcap cannot capture handshakes under Windows. But I am not sure about that, because I do not know whether this type of response would really be classified as "URB_BULK out".
I tried libusb0, libusb1 and OpenUSB as backends under Linux, without success.
The bulk transfer in question is the transfer of FPGA firmware to the device.
I am able to communicate with the device before the multiple-512-byte-chunk bulk operation on the same endpoints using only a few bytes. The timeout then occurs in the second iteration of this for loop:
for chunk in chunks:  # chunks: list of bytearrays, 512 bytes each
    self.write(0x01, chunk)
    time.sleep(0.003)
[Edit] The reason: I found out that this only occurs on my test laptop, which uses xhci, and not on a second Linux test machine using ehci. So it might be caused by xhci. I do not yet have a workaround, but this at least gives an explanation.
It turns out that the device requested fewer bytes per packet; the desired packet size (64 bytes) could be found in dmesg, as already mentioned in the question. Since xhci doesn't officially support that, Linux decided to ignore the request. Windows seemingly went along with it and split larger transfers into the requested packet size. So the solution was to manually split the data into 64-byte packets before transferring it.
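The code above is Python/PyUSB, but the fix is stack-agnostic: cap every bulk write at the packet size the device asked for. For illustration, a C++ sketch against libusb-1.0 (the function name and the 1-second timeout are assumptions; endpoint 0x01 and the 64-byte size come from the question and dmesg):

#include <libusb-1.0/libusb.h>
#include <algorithm>
#include <cstdint>

// Send the firmware in 64-byte bulk packets, matching the packet size the
// device requested. 'handle' is an already-opened, already-claimed handle.
int sendInSmallPackets(libusb_device_handle* handle,
                       const uint8_t* data, int totalLen)
{
    const int kPacketSize = 64;  // size reported in dmesg
    for (int offset = 0; offset < totalLen; offset += kPacketSize) {
        int len = std::min(kPacketSize, totalLen - offset);
        int transferred = 0;
        int rc = libusb_bulk_transfer(handle, 0x01 /* OUT endpoint */,
                                      const_cast<uint8_t*>(data + offset),
                                      len, &transferred, 1000 /* ms timeout */);
        if (rc != 0 || transferred != len)
            return rc != 0 ? rc : LIBUSB_ERROR_IO;
    }
    return 0;
}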
I've never worked with Bluetooth before. I have to send data via BLE, and I've run into the limit of 20 bytes per chunk.
The sender is an Arduino, and the receiver could be either an Android app or a Node.js app on a PC.
I have to send 9 values stored as floats, so 4 bytes * 9 = 36 bytes: I need 2 chunks for all my data via BLE. The receiving side needs both chunks before it can process them. If some data is lost, I don't care.
I'm no expert in network protocols, and I think I have to give each message an incremental timestamp so that the receiver can glue together the two chunks with the same timestamp, or discard the last one if the new timestamp is higher. But I'm not sure how to do a checksum, whether I really need one, or whether - for a simple beta version of my system - I can ignore all of those problems.
Can anyone give me some advice? For example, examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute while also giving an offset. So you can read the attribute with an offset of 0; if there are more than ATT_MTU bytes, you request again with the offset at ATT_MTU*1, if there's still more at ATT_MTU*2, and so on. (You can read about it in section 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you could detect that. You could have the attribute send notifications when there's a change, to interrupt the process in case the value changes in the middle of reading it.
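If you instead keep the simple scheme from the question - two notification chunks glued together by a shared sequence number - a minimal Arduino-side sketch could look like this (the 2-byte header layout and the sendChunk callback are assumptions, standing in for whatever write/notify call your BLE stack provides):

#include <stdint.h>
#include <string.h>

// Sketch of the question's scheme: 9 floats (36 bytes) split into two
// 20-byte chunks, each prefixed with the same sequence number plus a chunk
// index, so the receiver can pair the halves or drop stale ones.
void sendReadings(const float values[9], uint8_t seq,
                  void (*sendChunk)(const uint8_t*, uint8_t))
{
    uint8_t payload[36];
    memcpy(payload, values, sizeof(payload));

    uint8_t chunk[20];
    // Chunk 0: header (seq, index) + first 18 data bytes.
    chunk[0] = seq;
    chunk[1] = 0;
    memcpy(chunk + 2, payload, 18);
    sendChunk(chunk, 20);

    // Chunk 1: same seq, index 1, remaining 18 bytes.
    chunk[1] = 1;
    memcpy(chunk + 2, payload + 18, 18);
    sendChunk(chunk, 20);
}

On the receiving side you keep the highest sequence number seen and reassemble once both indices for it have arrived. Since the BLE link layer already CRC-checks every packet, an application-level checksum is arguably overkill for a beta.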
On some computers I see the strange effect that UdpClient will not send data when UdpClient.Close() is called too soon after UdpClient.Send().
I'm using .NET 4.0 and Wireshark to verify the packet loss.
The essential part of the code is:
UdpClient sender = new UdpClient();
sender.Connect( new IPEndPoint( this.ipAddress, this.Port ) );
int bytesSent = sender.Send( data, data.Length );
sender.Close();
What is weird:
On most computers the data is sent without problems.
There is no exception or other error, even when no packet is sent.
bytesSent always equals data.Length.
On the computers that do not send packets, a Thread.Sleep( 250 ) right before calling sender.Close() fixes the problem!
So, what could cancel the sending of packets even after UdpClient.Send() has reported the correct number of bytes? Why does this manifest only on certain machines? Could this be different behaviour of some network drivers, anti-virus software, or the like?
I also tried to set LingerOptions explicitly, which should be unnecessary, as the default is to send all pending data before closing the underlying socket. However, when doing sender.Client.LingerState = new LingerOption( true, 10 ) (as described in http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.lingerstate.aspx) I get
SocketException (0x80004005): An unknown, invalid, or unsupported option
or level was specified in a getsockopt or setsockopt call.
Any ideas what's going on?
Regards,
Seven
OK, this has nothing to do with .NET or my software either.
It seems that the virus scanner also scans the complete network traffic. The .NET library functions for sending UDP packets actually did send the packet, but the scanner discards it if UdpClient.Close() is called too soon after the Send() method.
So there are two possible workarounds now (both worked for me):
Introduce a little sleep before calling UdpClient.Close() (about 4 ms is sufficient).
Remove the virus scanner and try another one (Microsoft Security Essentials does not show this effect)
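The delay-before-close pattern of workaround 1 is independent of .NET. For illustration, the same idea as a native C++/Winsock sketch (the function name is made up; the 4 ms figure is the one that proved sufficient above):

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Give the stack - and anything hooked into it, like a scanner - a few
// milliseconds between the send and the close.
void sendDatagramThenClose(const char* data, int len, const sockaddr_in& dest)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s != INVALID_SOCKET) {
        sendto(s, data, len, 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
        Sleep(4);  // ~4 ms was sufficient in the tests above
        closesocket(s);
    }
    WSACleanup();
}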
I wrote a C++ program to send mail using SMTP. But when I attach any file, I notice that a single file's size is always limited to 808 bytes. As an example, if I send a text file of 10 KB, the downloaded attachment only contains text worth 808 bytes. If the large file is a zip file, it gets corrupted on unzipping, obviously due to a CRC failure. I used a MAPI library to send larger files without a problem. Is this a network limitation of SMTP? Can someone please explain why this is happening?
Thank You!!!
How are you attaching and encoding the files? Are you using MIME? 8-bit clean?
SMTP itself has no built-in size limit, but it does impose specific constraints on how data is transferred (line-oriented formatting, 7-bit-safe content, and so on). In practice, most mail systems reject mails larger than about 5-10 MB.
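If the attachment bytes are written into the message body raw, anything that isn't 7-bit-clean, line-oriented text can be truncated or mangled along the way, which would match the symptoms above. The usual fix is to wrap the file in a MIME part and base64-encode it. A minimal, self-contained encoder sketch (standard RFC 2045 base64 with 76-character lines; no particular mail library assumed):

#include <cstdint>
#include <string>
#include <vector>

// Base64-encode a binary buffer for a MIME attachment part, wrapping the
// output at 76 characters per line as RFC 2045 requires.
std::string base64Encode(const std::vector<uint8_t>& in)
{
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t lineLen = 0;
    for (size_t i = 0; i < in.size(); i += 3) {
        uint32_t n = in[i] << 16;
        if (i + 1 < in.size()) n |= in[i + 1] << 8;
        if (i + 2 < in.size()) n |= in[i + 2];

        char quad[4];
        quad[0] = alphabet[(n >> 18) & 63];
        quad[1] = alphabet[(n >> 12) & 63];
        quad[2] = (i + 1 < in.size()) ? alphabet[(n >> 6) & 63] : '=';
        quad[3] = (i + 2 < in.size()) ? alphabet[n & 63] : '=';
        out.append(quad, 4);

        lineLen += 4;
        if (lineLen >= 76) { out += "\r\n"; lineLen = 0; }
    }
    return out;
}

The encoded text then goes inside the attachment part, under headers such as Content-Transfer-Encoding: base64, so every byte on the wire is plain 7-bit ASCII.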