I'm trying to use p5.serial to display my Arduino-like device's USB output on a web page. It generates about ten strings per second, continuously.
the problem:
When I run p5serial (in a shell window) or p5.serialcontrol (an Electron/GUI app), the node server starts out at ~12 MB, but as it runs it quickly bloats to over 1 GB and the output becomes sluggish. The server eventually dies with:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
...
Abort trap: 6
the question:
Is this a known issue (aside from the bug report I just filed)? Or is it perhaps an error in the way I'm using it?
some details:
When I connect the Arduino-like device via a serial USB terminal, things work just fine (except for the lack of lovely p5.js graphics).
I'm running OS X (10.12.6 / Sierra), node v6.3.0, p5.serialserver#0.0.24
I've posted a gist containing a minimal example (but understand that it assumes you have an Arduino-like device with USB).
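For reference, the gist boils down to a client sketch along these lines (a trimmed sketch rather than the gist itself; the port name is a placeholder, and it assumes the p5.serialport client library is loaded and p5.serialserver is running):

let serial;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();          // connects to the local p5.serialserver
  serial.open('/dev/cu.usbmodem1411');   // placeholder; use your device's port
  serial.on('data', gotData);            // fires whenever new bytes arrive
}

function gotData() {
  const line = serial.readLine();        // one newline-terminated string
  if (line.length > 0) {
    console.log(line);                   // ~10 of these per second
  }
}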
This memory leak was fixed in p5.serial: https://github.com/p5-serial/p5.serialcontrol/issues/12
Related
I tried Brave on Mac with the command from the answer by K J in the following question. But after running many such conversions, I may end up with the message -bash: fork: retry: Resource temporarily unavailable in a terminal. It seems that too many threads are used and not cleaned up afterwards. What is going wrong here?
How to use brave to automate printing html to pdf?
OK, I guess this may not be a normal type of answer, as it's kind of a "works for me".
In cases such as this, where the "programming" is simply one cross-platform command that through its dependencies uses system and application resources, there are times when one user has problems and others do not. Thus debugging can be highly system-dependent.
By way of explanation as to potential issues (and it's too long for simple comments), here are my experiences on Windows.
Pre-run (why so many Windows processes!!)
Fresh boot:
Apps = 3
  one is a system folder explorer
  one is this Notepad
  one is the Task Manager monitor
Background processes = 82, including Edge (inactive = 5!!)
Processes = 107, including console = 5!!
Start command terminal:
Apps = +1, with 3 sub-processes?
Background = +1 (command prompt)
Processes = +1 (console now = 6)
Start Brave portable:
Apps = +1, Brave with no page requested (only the welcome screen) but with 8 sub-processes!#?
Background = +1 (brave portable)
Processes = same (console still = 6)
Navigate to this page:
Apps = same, Brave with this page requested = 9, then drops back to 8 sub-processes!
Background = same (1 brave portable)
Processes = same (console still = 6)
Run 20 similar commands with/without --enable-logging
Mea culpa (idiot): the batch of 20 failed several times because I did not verify whether it would run without Brave already running, nor did I test for a bad cut-and-paste,
but it looks like there was no residual change to processes??
Try again with Brave closed:
Apps and background processes return to the levels from before Brave was active.
for /l %a in (1,1,20) do brave-portable --headless --print-to-pdf="C:\Users\K\Downloads\brave-portable\test2-%a.pdf" --disable-extensions --print-to-pdf-no-header --disable-popup-blocking --run-all-compositor-stages-before-draw --disable-checker-imaging "https://stackoverflow.com/questions/74788259/how-to-use-brave-to-automate-printing-html-to-pdf"
Hmm, without error checking there are some noticeable differences from an earlier run.
The call completes in a few seconds, thus it's much too quick to see the tasks listed in Task Manager.
Background processes ramped up to 194!! before dropping back to about 78,
and after about 20 seconds there are 19 same-size files (much as might be expected).
Now what is odd about that is that, from experience, they should usually all be different sizes, as each call should show different in-page adverts over time;
but then again I had logged in and accepted cookies earlier, so there should be no ads to make a difference in later runs.
EXCEPT ONE rogue file out of 20 has an advert. Arghhhhhhhh!!
So the inconsistencies saga continues.
However, there is no residual buildup of task processes in my Windows portable Brave with that command sequence!
On its own, Brave uses only a few percent of CPU and memory before and after, with no hint of tying up disk or other resources.
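As for the Mac side: -bash: fork: retry: Resource temporarily unavailable means the shell could not spawn another process, which fits conversions being launched faster than the Brave instances exit. Something worth trying (a sketch only, since I can test only on Windows; the Brave path and urls.txt list are placeholders) is to run the conversions strictly one at a time so each instance exits and is reaped before the next starts:

#!/bin/bash
# Run each headless print-to-pdf conversion to completion before starting
# the next, so each Brave instance exits and frees its processes.
BRAVE="/Applications/Brave Browser.app/Contents/MacOS/Brave Browser"
i=0
while read -r url; do
  i=$((i+1))
  "$BRAVE" --headless --print-to-pdf="test2-$i.pdf" "$url"
done < urls.txt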
I'm working on a project that uses the SPIFFS, Bluetooth and WiFi libraries. The program is all set up so the libraries don't interfere with each other's communication, since Bluetooth can't work while WiFi is active. But I'm getting the following problem when I attempt to add a call from the library https://github.com/mobizt/Firebase-ESP32, which is responsible for connecting to the Firestore database:
text section exceeds available space in board
Sketch uses 1517102 bytes (115%) of program storage space. Maximum is 1310720 bytes.
Global variables use 63300 bytes (19%) of dynamic memory, leaving 264380 bytes for local variables. Maximum is 327680 bytes.
Sketch too big; see http://www.arduino.cc/en/Guide/Troubleshooting#size for tips on reducing it.
Error compiling for board DOIT ESP32 DEVKIT V1.
I only get this error when adding this piece of code:
Firebase.begin(&config, &auth);
Firebase.reconnectWiFi(true);
I'm using the Arduino IDE to work with the ESP32, but I have ESP-IDF as well, in case it helps solve the issue.
It looks like you've reached the size limit of the partition allocated for your application. I don't know how Arduino configures the ESP-IDF partitions, but you should be able to change them however you want. See the documentation on partitions.
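For illustration: the 1310720-byte maximum in the error message is the size of the default app partition (0x140000 bytes), not of the whole flash. In the Arduino IDE the quickest fix is usually Tools > Partition Scheme > a larger app scheme such as "Huge APP" (if your board definition exposes that menu); under ESP-IDF you would edit partitions.csv instead. A sketch of such a table for a 4 MB module, with exact offsets and sizes depending on your core version:

# Name,   Type, SubType, Offset,   Size,     Flags
nvs,      data, nvs,     0x9000,   0x5000,
otadata,  data, ota,     0xe000,   0x2000,
app0,     app,  ota_0,   0x10000,  0x300000,
spiffs,   data, spiffs,  0x310000, 0xF0000,

The 0x300000 app0 entry gives the sketch 3 MB, at the cost of OTA support and some SPIFFS space.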
I get this every time I try to create an account to ask this on Stack Overflow:
Oops! Something Bad Happened!
We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site.
It’s not you, it’s us. This is our fault.
That's the reason I post it here: I literally cannot ask it on Stack Overflow, even after spending hours of my day (on and off) repeating my attempts and solving a million reCAPTCHA puzzles. Can you maybe fix this error soon?
With no meaningful or complete examples, and basically no documentation whatsoever, I've been trying to use the "shmop" part of PHP for many years. Now I must find a way to send data between two different CLI PHP scripts running on the same machine, without abusing the database for it. It must work without database support, which means I'm trying to use shmop, but it doesn't work at all:
// I have no idea what the "key" is supposed to be. The manual says:
// "System's id for the shared memory block. Can be passed as a decimal or hex.",
// so I've given it a 1 and also tried 123. It gave an error when I set the
// size to 64, so I increased it to 99999; that's when the error changed to
// the one I now face (below).
$shmopid = shmop_open(1, 'w', 0644, 99999);
shmop_write($shmopid, 'meow 123', 0); // Write "meow 123" to the shared variable.
while (1)
{
$shared_string = shmop_read($shmopid, 0, 8); // Read the "meow 123", even though it's the same script right now (since this is an example and minimal test).
var_dump($shared_string);
sleep(1);
}
I get the error for the first line:
shmop_open(): unable to attach or create shared memory segment 'No error':
What does that mean? What am I doing wrong? Why is the manual so insanely cryptic for this? Why isn't this just a built-in "superarray" that can be accessed across the scripts?
About CLI:
It cannot work in standalone CLI processes, as an answer here says:
https://stackoverflow.com/a/34533749
The master process is the one that holds the shared memory block, so you will have to use php-fpm or mod_php or some other web/service-running version, and maybe even start/request/stop it all from a CLI PHP script.
About shmop usage itself:
Use "c" mode in shmop_open() for creating the segment before it can be used with "a" or "w".
I stumbled on this error in a different scenario, where shared memory is completely optional and merely speeds up some repeated operations. I wanted to try reading first, without knowing the memory size, and then allocate from the actual data when needed. In my case I had to call it as @shmop_open() to suppress this error output.
About shmop on Windows:
PHP 7 crashed the Apache worker process (causing its restart with status 3221225477) when trying to reallocate a segment with the same predefined (arbitrary-number) key but a different size, even after shmop_delete(). As a workaround, I took the filemtime() of the source file containing the data to be stored in memory, and used that timestamp as the key in shmop_open(). It still was not flawless, IIRC, and I don't know whether it would cause memory leaks, but it was enough to test my code, which would mainly run on Linux anyway.
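A sketch of that workaround ($dataFile is a hypothetical stand-in for whatever file holds the data):

<?php
// Using the file's mtime as the key means a changed file gets a fresh
// segment instead of reallocating the old key with a different size.
$dataFile = 'data.bin';                    // hypothetical source file
$key = filemtime($dataFile);               // timestamp doubles as the key
$id  = @shmop_open($key, 'c', 0644, filesize($dataFile));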
Finally, as of PHP version 8.0.7, shmop seems to work fine with Apache 2.4.46 and mod_php in Windows 10.
[Edit: I found the reason, see below]
The problem:
I created a "driver" for a device in Windows using Python (PyUSB and libusb-win32). While this software works seamlessly on multiple PCs under Windows, using my Linux (Kubuntu 18.10) test laptop, a sequence of bulk writes of length 512 bytes each times out after the second 512 byte bulk transfer.
Interesting: I also tried the same thing using VirtualBox, and it turns out that with a Windows guest in VirtualBox on the same Linux host, the same error still occurs. So it is not caused by the guest operating system; the problem follows the Linux host.
The question:
What can happen under Linux does not happen under Windows that causes a timeout [Errno 110]?
More information, in case it helps:
Under Windows, Wireshark shows timing differences between two consecutive bulk writes of 6 ms for the first one and 5 ms for every following one, while under Linux the delta is only around 3 ms, mostly resulting from a sleep operation (the relevant Python source code is attached). Doubling that time does nothing.
dmesg shows messages like 'bulk endpoint ## has invalid maxpacket 64', where ## is 0x01, 0x08 and 0x81.
The device only has one configuration.
The test laptop has only USB 3.0 connectors, while the Windows PCs have both USB 3.0 and 2.0 ports. I tested them all.
Under Linux, Wireshark shows the device answering with another (empty) bulk transfer on every bulk write, while it does not show that under Windows. As far as I understand, that is because USBPcap cannot capture handshakes under Windows. But I am not sure about that, because I do not know whether this type of response would really be classified as "URB_BULK out".
I tried libusb0, libusb1 and OpenUSB as backends under Linux, without success.
The bulk transfer in question is the transfer of FPGA firmware to the device.
I am able to communicate with the device before the multiple-512-byte-chunk bulk operation, on the same endpoints, using only a few bytes. The code that then causes the timeout, in the second iteration of this for loop, is the following:
for chunk in chunks:  # chunks: list of bytearrays with 512 bytes each
    self.write(0x01, chunk)
    time.sleep(0.003)
[Edit] The reason: I found out that this only occurs on my test laptop, which uses xhci, not on a second Linux test machine using ehci. So this might be caused by xhci. I do not yet have a workaround, but this at least gives an explanation.
It turns out that the device requested fewer bytes per packet than usual; the desired packet size (64 bytes) could be found in dmesg, as already written in the question. Since xhci doesn't officially support that, Linux decided to ignore the request. Windows seemingly went along with it and split larger packets up into the requested packet size. So the solution was to manually split the data into packets of 64 bytes before transferring it.
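A minimal sketch of that fix, reusing the self.write(endpoint, data) wrapper and chunk list from the question:

PACKET_SIZE = 64  # the maxpacket the device actually requested (per dmesg)

for chunk in chunks:  # chunks: list of bytearrays, 512 bytes each
    # Split each 512-byte chunk into 64-byte packets ourselves, since the
    # Linux xhci driver ignores the device's nonstandard maxpacket request.
    for offset in range(0, len(chunk), PACKET_SIZE):
        self.write(0x01, chunk[offset:offset + PACKET_SIZE])
    time.sleep(0.003)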
I am using Asterisk 1.6.2.24 for call handling, with 6 ports (180 channels). When the call volume is around 140 to 150, Asterisk gets killed without any error message, even though we enabled the full log in Asterisk's logger.conf. The machine running Asterisk has 16 GB of RAM, and about 5 GB of it is still free when Asterisk gets killed.
Please let us know what the reason could be.
Simple solution: upgrade your Asterisk to the latest stable version.
More complex: find out what exactly caused the core dump:
http://www.voip-info.org/wiki/view/Asterisk+debugging
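For example, a sketch of the usual core-dump workflow (binary and core file paths may differ on your install):

ulimit -c unlimited            # allow core dumps in this shell
asterisk -g                    # -g makes Asterisk dump core on crash
# after the crash:
gdb /usr/sbin/asterisk /path/to/core
(gdb) bt                       # backtrace shows where it died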
This could be an issue with Asterisk itself; you can check the link below for the fix:
https://issues.asterisk.org/jira/browse/ASTERISK-16892