I have a Koa application with a multipart/form-data file upload that has suddenly stopped working. I have spent over 8 hours now trying to isolate the issue. What I've tried/verified:
Not a Node 6 issue; same problem occurs with Node 4 (which was previously working).
Have ruled out version issues in package.json; have tested against the originally working versions of all relevant packages, as well as the latest versions.
Issue exhibits in latest Chrome and latest Firefox.
Issue does NOT exhibit when POSTing directly from Postman with the exact same headers the browser is sending (except for Cookie and Referer, neither of which can be set in Postman).
Problem exhibits with Koa wrappers koa-better-body and koa-multer.
Problem exhibits when directly using busboy, formidable, and even multiparty.
Similar to problems people were reporting on this multer issue; I tried all the suggestions (including the long shot of adding field parameters before the file parameter) to no avail.
Have tried to create a minimal test case that reproduces the issue, but have been unable to.
Have tried whittling down my app line by line, comparing the Babel output against the minimal test case until they are functionally identical; the problem still persists in my app, but not in the test case.
All tests running on the same server, with the same browsers.
When debugging, the cleanest view of the problem is with formidable, in incoming_form.js. A single data event occurs:
Then an abort event:
After that, the browser eventually times out. (The file is larger than the 15 bytes being received in the first data event.)
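In case it helps with reproducing the observation, here is a rough sketch of how the raw request stream can be watched from Koa middleware (this assumes Koa 2-style async middleware and is illustrative, not my actual app code):

const Koa = require('koa');
const app = new Koa();

// Debugging sketch: log raw stream events for multipart requests so you can
// see where the upload stalls or gets aborted.
app.use(async (ctx, next) => {
  if (ctx.is('multipart/form-data')) {
    ctx.req.on('data', (chunk) => console.log('data chunk:', chunk.length, 'bytes'));
    ctx.req.on('aborted', () => console.log('request aborted before completing'));
    ctx.req.on('end', () => console.log('request stream ended cleanly'));
  }
  await next();
});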
I had hoped for a quick fix by switching from formidable to busboy, and now I am in a real bind, because this problem needs to get fixed and I am running out of ways to look at it. I've tried to slice it every way I can think of, debug it every way I can think of, and short of writing my own multipart parser (not a task I would relish), I'm fast running out of options.
Has anyone run across this? Do you have any ideas how I might proceed with debugging or producing a minimal test case?
It turns out the issue was with koa-proxy: it doesn't correctly forward multipart POST requests. I fixed it by switching to koa-proxy2, and I will look into contributing a fix to the koa-proxy project.
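For anyone who can't switch packages right away, a rough workaround sketch is to keep multipart requests away from the proxy middleware entirely (the host URL is a placeholder, the option shape is koa-proxy's as I used it, and this assumes Koa 2-style middleware; adjust to your setup):

const proxy = require('koa-proxy');

// Hypothetical workaround: don't run multipart uploads through the proxy that
// was swallowing the request body; let the local upload handler deal with them.
const proxyMiddleware = proxy({ host: 'http://backend.example.com' });

app.use(async (ctx, next) => {
  if (ctx.is('multipart/form-data')) {
    return next(); // fall through to the local multipart handler
  }
  return proxyMiddleware(ctx, next);
});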
I've created a Node.js monorepo project using Turborepo and the kitchen-sink template.
In it I have some applications, which are the code I am going to release, and some packages, which are basically shared functions and logic that the applications may or may not use.
It worked well until a few days ago, when I found some strange behavior in some of my apps.
I had to make some changes to an application, so I opened Visual Studio Code, ran yarn dev, and was ready to code. But when I started the application I needed to update, I noticed that it wouldn't start: it usually takes a few seconds, 5 at most, but this time it took 30-40 seconds before the node process exited.
I tried another application that is very similar but differs in some ways, and I encountered the same issue, so I decided to create an empty test suite to check whether everything was still working, and it was: the node process was not crashing and I got the output I expected.
Then I started to import the modules I needed from the packages, and I noticed that one in particular causes this weird problem.
This module depends on other packages, and its main purpose is to serve up classes and functions that may be used by the application.
The problem is: when I use import ... from ... in the application code, the process hangs; sometimes it then starts properly and the code is executed, and sometimes the node process exits without an error message and with a status code of 0.
I've tried process.on("uncaughtException") and process.on("unhandledException"), but neither of them ever fires; segfault-handler doesn't help either, since it does not report anything.
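For reference, this is roughly the diagnostic set that can be attached at the entry point to narrow down a silent exit (these are standard Node process events; the log messages are just illustrative):

// Attach these as early as possible in the entry point.
process.on('beforeExit', (code) => console.log('beforeExit with code', code));
process.on('exit', (code) => console.log('exit with code', code));
process.on('uncaughtException', (err) => console.error('uncaughtException:', err));
process.on('unhandledRejection', (reason) => console.error('unhandledRejection:', reason));
// A clean exit with status 0 usually means the event loop simply ran out of work
// (for example a promise that never resolves), rather than a crash.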
I've checked the code of this module, and it does not execute any functions or perform any logic while it is being imported, so I don't think it has to do with slow-running functions.
I have only one theory: circular dependencies, i.e. a js file that includes another js file which itself includes the first one. But I doubt it, since the code used to work just fine.
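If it helps to check the circular-dependency theory, a tool like madge can list cycles; a rough sketch (madge is a third-party package, and the entry path below is just a placeholder):

// List circular dependencies starting from the package entry point.
const madge = require('madge');

madge('packages/shared/src/index.js')
  .then((res) => console.log(res.circular()))
  .catch((err) => console.error(err));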
I've even tried using a backup of the code from when it worked, but I still get the same error.
I was wondering if you could give me some hints to fix this bug, or, if you've had a similar issue, explain how you solved it.
I am trying to develop a simple web page that allows a user to play a retro game (like Mario) in their browser. For this I have decided to use the js emulators that have been compiled from retroarch using emscripten. I have been told that some of the js emulators currently available on the libretro website (https://buildbot.libretro.com/stable/1.7.0/emscripten/) do not work properly (for example, the N64 js emulator). So I am trying to use the older version available on play-roms.com, but I have not been able to make it work even after a lot of effort.
The problem
I am simply trying to replicate this game page locally on my machine: https://play-roms.com/nintendo-64/super-mario-64. Since it mostly depends on HTML, CSS, and JS, I copied all the HTML, CSS, and JS files, along with the emulator and .mem files. When I try to run them locally, they simply do not work. I get a constant warning in the console, in an infinite loop:
"RetroArch [libretro INFO] :: mupen64plus: Memory initialized"
This warning does not allow the game to load. Please note that I do not get any other warnings or errors in the console that do not already appear on the original Mario page on play-roms from which I copied the files.
I assume the problem is caused by some issue with the .mem file. Next, I tried to fetch the .mem file from the play-roms server itself (just for testing purposes), but that did not help either. (Please note that I am aware of CORS and know how to handle it.) I still get the same error even when the .mem file is fetched from play-roms.
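One thing worth checking (this is an assumption about how the emulator was built, not something confirmed from the play-roms sources): emscripten builds that ship a .mem file resolve it through the global Module object, which must be defined before the emulator script loads, roughly like this (paths are placeholders):

// Hypothetical Module setup for an emscripten build that ships a .mem file.
// Define it before the emulator script is loaded; paths are placeholders.
var Module = {
  memoryInitializerPrefixURL: '/emulator/', // older builds: directory containing the .mem file
  locateFile: function (path) { return '/emulator/' + path; }, // newer builds resolve extra files here
  print: function (text) { console.log(text); },
  printErr: function (text) { console.warn(text); }
};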
I talked to someone who has worked in this area before, and he confirmed that he faced the exact same "Memory initialized" infinite loop when he tried it. He could not solve it either.
Please note that copying some other website is not my goal. I am just trying to make the retroarch js emulators work for my website.
Before getting to the question, I want to point out that the only thing I could find on this issue was this Stack Overflow question. It suggests that this was an issue with Wappalyzer and that it was fixed in version 4.0.1; however, I am using Wappalyzer version 5.1.4, which is up to date.
I am building a web app based on the MEAN stack. Everything worked as intended for a long time, until this error kept popping up in my Google Chrome console:
Every time I click in my app header and use my front-end routing to load different components/modules, this error appears; however, I don't see any issue with what the web app presents to me (it's not like I am missing data).
More details on the error:
I have no idea what's going on, or where this issue comes from.
This was due to a failing plugin.
Disable all extensions, then enable them one at a time to find the failing Chrome extension.
In this case it was the Wappalyzer extension.
Our NoFlo graph components have suddenly compressed themselves all into one uneditable box that says "WaitForward". See attached image.
For a while, this was happening on every browser except Opera, so I could go in there and update graphs. Then, a couple of weeks later, even Opera wouldn't render the components, so now I am unable to add any more logic to existing NoFlo forms.
We barely touch code related to NoFlo, so I don't think anything changed in our environment. My theory is that browsers (such as Chrome, which used to be the one stable browser to use for editing) have been updated recently, and this tool needs some kind of an update in order to render properly. Yet I can find no reference to this issue on the NoFlo GitHub instructions, and it doesn't look like anyone is having that issue here on StackOverflow (until now, of course).
The error message in the console says:
"TypeError: this.node.getTransformToElement is not a function"
I plunked this error into Google and saw that others are experiencing this with something called clientIO, and that recent updates to Google Chrome are to blame, as Chrome has recently removed a core feature that allowed related js to function.
But ... how can I fix this? That is the question!
It looks like recent updates to Google Chrome are the culprit.
Taken straight from the jointjs.com website:
Link to announcement from jointjs.com
Announcement: getTransformToElement() polyfill Nov 12th, 2015
Unfortunately, a new version of Chrome (48) removes a feature that is core to JointJS/Rappid. This feature is the SVGGraphicsElement.getTransformToElement() function. The motivation behind removing the method is - according to the Chrome team - open issues about how this method is supposed to behave.
To overcome compatibility issues with future versions of Chrome, we prepared a polyfill that makes sure this method exists. Before a new version of JointJS/Rappid is released (or if you, for any reason, don't want to upgrade), include the following code before you load your application JavaScript:
SVGElement.prototype.getTransformToElement = SVGElement.prototype.getTransformToElement || function(toElement) {
return toElement.getScreenCTM().inverse().multiply(this.getScreenCTM());
};
I was unsure exactly where to put this code in my noflo directory, so I tried putting it at the very top of the "app/js/main.js" file. It seems to be working! (But advice for a better location is more than welcome.)
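For anyone wondering what that looks like in practice, the top of app/js/main.js ends up being just an if-guarded form of the same snippet:

// Apply the polyfill before any JointJS/NoFlo code runs.
if (!SVGElement.prototype.getTransformToElement) {
  SVGElement.prototype.getTransformToElement = function (toElement) {
    return toElement.getScreenCTM().inverse().multiply(this.getScreenCTM());
  };
}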
I hope this helps anyone else out there who is experiencing the same issue.
I have an application that constantly sends requests to a server. I use the GetWebResponse() method of the WebClient class to send requests. But after a few requests, it starts throwing timeout exceptions. This happens only on Mono/Linux; the same code runs without any exceptions on .NET/Windows. Do you have any idea what the problem might be?
Note: I tried setting Timeout and ReadWriteTimeout properties of the requests with no luck.
I would try these alternatives to solve the problem:
Upgrade Mono to 3.0.x. There have been a lot of fixes around WebRequests in recent months.
If the above doesn't help, try Mono 3.2 (as it defaults to a new, much faster garbage collector called SGen).
If the above doesn't help, build your own Mono (master branch), as this important pull request has been merged recently.
If the above doesn't help, use the "--server" flag when calling your mono executable (this feature is only available in the latest version of Mono, which you need to compile from the master branch).
If none of the above helps, then CC yourself on this bug, as I think I'll have time in August to implement a fix for it, and maybe that will help you.