Node.js: EBADF, Bad file descriptor

If I reload my application (from the browser with the reload button) a lot of times, like 50 reloads in 10 seconds, it gives me this error:
events.js:45
throw arguments[1]; // Unhandled 'error' event
^
Error: EBADF, Bad file descriptor
This seems to me like a bandwidth error or something like that. I originally got the error when I played with the HTML5 Audio API: if I loaded the audio file 10-15 times sequentially, the error appeared. But now I've discovered that I get the error without the Audio API too, just by reloading the site a lot of times. Also, Safari gives me the error much faster than Chrome (WTF?)
I'm using Node.js 0.4.8 with express + jade and I'm also connected to a MySQL database with the db-mysql module.
I can't find any helpful articles on the web about this topic, so please let me know what can cause this error, because it's really confusing :(

By "reload your application" do you mean refresh your app's home page from a browser, or actually stop and restart the node.js server process? I assume the former, in which case if you can't reliably reproduce this it will be pretty tricky to debug, especially since you don't have a good stack trace to pinpoint the source. But if you use the express.js app.error hook (docs here) you'll want to log the error path from the "Bad file descriptor" error, which should hopefully clue you in to whether this is a temporary file that got deleted or what. In terms of the actual cause, we can only offer guesses since "Bad file descriptor" is a very generic low level error that basically means you are calling an operation on a file descriptor that is no longer in the correct state to handle that operation (like reading a closed file, opening a file that has been deleted, etc).

#CIRK, take a look at this: https://github.com/joyent/node/issues/1189
It's not a Node problem, but a system tuning issue.
Edit: or maybe it's related to this error in Connect 1.4.3:
https://github.com/senchalabs/connect/issues/297
If this is your case, just try upgrading it.

This error may result from using fs to save a file whose name is a number rather than a string: fs.writeFileSync interprets a numeric first argument as a file descriptor rather than a file name, so passing a number can raise EBADF. File names must be strings:
Incorrect:
const fs = require("fs");

const fileName = 12345;
const fileContent = "The great croissant.";
fs.writeFileSync(fileName, fileContent); // the number is treated as a file descriptor
Correct:
fs.writeFileSync(`${fileName}`, fileContent);
Also correct:
const fileName = "12345";
fs.writeFileSync(fileName, fileContent);

Related

What is Communication Failure in Shopware 5?

When creating a new theme, this error occurred:
0 - Communication Failure
Why does this happen? Could you please help me?
This usually happens due to a timeout that occurs when the Theme-controller tries to read the Theme's configuration for the first time. Unfortunately, this is quite a resource-heavy process; on weaker servers, timeouts may occur during this process quite often.
You can confirm this by opening the Theme-Manager, opening your browser's developer tools, refreshing the Theme overview, and looking at the response of the backend/Themes/list request.
You can give your server more time with the PHP function set_time_limit. In engine/Shopware/Components/Theme/Installer.php, in the synchronize method, prepend set_time_limit(0):
public function synchronize()
{
    // Lift PHP's max execution time for this request only
    set_time_limit(0);
    $this->synchronizeThemes();
}
Alternatively, prepend set_time_limit(0); to your shopware.php file, but don't forget to remove it again once the theme overview has loaded successfully.

SailsJS is Deleting Data from pg Database

Something strange is happening with my app. I am using SailsJS with the official PostgreSQL driver, and my data gets deleted. I don't have any pattern or list of specific events that deletes the data, but I have the following observations:
A few days back I was writing a function to destroy data, and when I executed that function it gave me an error. I fixed the error and ran my web app again, and whoa, the data from one of my tables was all gone.
Yesterday I wrote a function and tried to make the HTTP call to it, but it was giving me a 500 server error. I started debugging, and after executing my program 3 or 4 times with this error, partial data was deleted from one of my database tables. The error later turned out to be a typo in the URL.
If any of you have experience with what is happening to me, please let me know how to fix it, or at least help me figure out how to reproduce this issue.
EDIT
I activated the logs and waited for it to happen again. It did, and here is the log from SailsJS.
In the logs I saw that it's talking about the alter.js sync strategy, but I have selected the safe strategy.
This has happened to me quite a few times: when lifting the app, Sails is in the process of making changes to the DB and fails, sometimes due to an ORM timeout.
What Sails does when it lifts and needs to update the data structure is controlled by migrate: 'alter' in config/models.js. It's usually commented out, in which case you get a prompt for what to do, 1... 2... 3... (writing from the top of my head, I don't remember the actual messages), and a warning about using alter on a production system.
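For reference, a minimal sketch of pinning the strategy explicitly in config/models.js (option names as documented by Sails):
// config/models.js
module.exports.models = {
    // 'safe' never auto-migrates; 'alter' attempts in-place schema changes
    // and can lose data if the lift fails or times out mid-migration.
    migrate: 'safe'
};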
Changing config/orm.js to have this:
// config/orm.js
module.exports.orm = {
    _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
and, for reasons I don't know, changing config/pubsub.js:
// config/pubsub.js
module.exports.pubsub = {
    _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
has helped me avoid data loss.

How to catch loading errors with SoundJS "createjs.Sound.registerSound"?

Using, for instance, SoundJS 0.5.2, in a browser like Firefox, I'm fine loading files that exist. I'm not so good at recovering from loading files that don't exist or have other problems. It seems that registerSound won't tell me there's a problem. Maybe I'm just not asking nicely.
For example,
createjs.Sound.addEventListener("fileload", function () {
    console.log("it loaded");
});
createjs.Sound.registerSound('http://xx.yy.zzz/missing.ogg', 'foo');
This works fine, printing "it loaded" if I point to a URL that loads correctly, but where is the hook I can use to catch the case where I'm trying to load a non-existent file, or to handle some other error? I'm not worried about exotic things like a file taking a long time to load... I'd be happy just catching the case of a 500 response that comes back immediately.
I'm hoping there's an "onerror" handler to register somewhere and I'm just too dense to find it.
The intention with internal SoundJS loading was to provide very simple loading with no extra features. It's built on the assumption that everything will work, and in the case of failure it will fail silently. For more complex loading, we recommend using PreloadJS.
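As an illustration, a rough sketch of what that might look like with PreloadJS's LoadQueue (the exact shape of the error event payload varies between PreloadJS versions, so inspect it in yours):
var queue = new createjs.LoadQueue();
queue.installPlugin(createjs.Sound); // route sound registration through the queue
queue.addEventListener("fileload", function (event) {
    console.log("loaded: " + event.item.id);
});
queue.addEventListener("error", function (event) {
    // Payload details differ across versions; log the event and inspect it.
    console.log("load failed", event);
});
queue.loadFile({ id: "foo", src: "http://xx.yy.zzz/missing.ogg" });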
Hope that helps.

Java exception: "Can't get a Writer while an OutputStream is already in use" when running xAgent

I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). I am getting the following Java exception when trying to run the xAgent that does the processing: Can't get a Writer while an OutputStream is already in use
The only change I have made to Paul's code was the package name. I have isolated the exception to the SSJS line: var jce: DominoXMLFO2PDF = new DominoXMLFO2PDF(); All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself, but some configuration issue. The SSJS code is in the beforeRenderResponse event, where it should be; I haven't changed anything in the xAgent.
I have copied the jar files from Paul's sample database to mine, and I have verified that the build paths are the same between the two databases. Everything compiles fine (after I did all this). This exception appears to be an XPages-only exception.
Here's what's really going on with this error:
XPages are essentially servlets... everything that happens in an XPage is just layers on top of a servlet engine. There are basically two types of data that a servlet can send back to whatever is initiating the connection (e.g. a browser): text and binary.
An ordinary XPage sends text -- specifically, HTML. Some xAgents also send text, such as JSON or XML. In any of these scenarios, however, Domino uses a Java Writer to send the response content, because Writers are optimized for sending Character data.
When we need to send binary content, we use an OutputStream instead, because streams are optimized for sending generic byte data. So if we're sending PDF, DOC/XLS/PPT, images, etc., we need to use a stream, because we're sending binary data, not text.
The catch (as you'll soon see, that's a pun) is that we can only use one per response.
Once any HTTP client is told what the content type of a response is, it makes assumptions about how to process that content. So if you tell it to expect application/pdf, it's expecting to only receive binary data. Conversely, if you tell it to expect application/json, it's expecting to only receive character data. If the response includes any data that doesn't match the promised content type, that nearly always invalidates the entire response.
So Domino in its infinite wisdom protects us from making this mistake by only allowing us to send one or the other in a single request, and throws an exception if we disobey that rule.
Unfortunately... if there's any exception in our code when we're trying to send binary content, Domino wants to report that to the consumer... which tries to invoke the output writer to send HTML reporting that something went wrong. Except we already got a handle on the output stream, so Domino isn't allowed to get a handle on the output writer, because that would violate its own rule against only using one per response. This, in turn, throws the exception you reported, masking the exception that actually caused the problem (in your case, probably a ClassNotFoundException).
So how do we make sure that we see the real problem, and not this misdirection? We try:
try {
    /*
     * Move all your existing code here...
     */
} catch (e) {
    print("Error generating dynamic PDF: " + e.toString());
} finally {
    facesContext.responseComplete();
}
There are two reasons this is a preferred approach:
If something goes wrong with our code, we don't let Domino throw an exception about it. Instead, we log it (instead of using print to send it to the console and log, you could also toss it to OpenLog, or whatever your preferred logging mechanism happens to be). This means that Domino doesn't try to report the error to the user, because we've promised that we already reported it to ourselves.
By moving the crucial facesContext.responseComplete() call (which is what ultimately tells Domino not to send any content of its own) to the finally block, this ensures it will get executed. If we left it inside the try block, it would get skipped if an exception occurs, because we'd skip straight to the catch... so even though Domino isn't reporting our exception because we caught it, it still tries to invoke the response writer because we didn't tell it not to.
If you follow the above pattern, and something's wrong with your code, then the browser will receive an incomplete or corrupt file, but the log will tell you what went wrong, rather than reporting an error that has nothing to do with the root cause of the problem.
I almost deleted this question, but decided to answer it myself since there is very little out on google when you search for the exception.
The issue was in the xAgent: there is an importPackage line that was incorrect. Fixing this made everything work. The exception verbiage, "Can't get a Writer while an OutputStream is already in use", is quite misleading. I don't know what else triggers this exception, but an alternative description would be "Java class ??yourClass?? not found".
If you found this question, then you likely have the same issue. I would ignore what the exception actually says and check your package statements throughout your application. The Java code will error on its own, but SSJS that references the Java will not error until runtime; focus on that code.
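For illustration, the SSJS import must name the package that the Java class actually declares; the package below is hypothetical:
// SSJS, e.g. in beforeRenderResponse. "com.example.fop" is a made-up package name;
// it must match the package statement in your Java source exactly.
importPackage(com.example.fop);
var jce: DominoXMLFO2PDF = new DominoXMLFO2PDF();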
Updating the response header after the body can solve this kind of problem. Example:
HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.getWriter().write("<html><body>...</body></html>");
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
response.setCharacterEncoding("UTF-8");

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver and then just append the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a writestream, piping buffers to it, and having a readstream read from that writestream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (coffeescript syntax):
archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ... # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
    numBytesStreamed += dataBuffer.length
    proxyStream.emit 'data', dataBuffer
    if numBytesStreamed is fileSize
        proxyStream.emit 'end'
        # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
    if err?
        # do whatever
    else
        # unless you somehow knew this ahead of time
        response.addTrailers
            'Content-Length': bytesOfArchive
        response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
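For comparison, here is the same proxy-stream idea sketched in plain JavaScript with a PassThrough stream (streams2, Node 0.10+) instead of manually emitted 'data'/'end' events; the variable names are illustrative:
var archiver = require('archiver');
var stream = require('stream');

var archive = archiver('zip');
archive.pipe(response); // response is the HTTP response

var proxyStream = new stream.PassThrough();
var numBytesStreamed = 0;
archive.append(proxyStream, { name: fileName }); // fileName known ahead of time

ws.on('message', function (dataBuffer) {
    numBytesStreamed += dataBuffer.length;
    proxyStream.write(dataBuffer);
    if (numBytesStreamed === fileSize) { // fileSize known ahead of time
        proxyStream.end(); // tells archiver this entry is complete
    }
});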
