I'm writing a client-side application which should read in a file, transform its content and then export the result. To do this, I decided on Re-Frame.
Now, I'm just starting to wrap my head around Re-Frame and ClojureScript itself, and I got the following to work:
Somewhere in my view functions, I dispatch this whenever a new file gets selected via a simple HTML input:
[:input {:class "file-input" :type "file"
         :on-change #(re-frame/dispatch
                       [::events/file-name-change (-> % .-target .-value)])}]
What I get is something like C:\fakepath\file-name.txt, with fakepath actually being part of it.
My event handler currently only splits the path and saves the file name; my input above subscribes to it in order to display the selected file.
(re-frame/reg-event-db
 ::file-name-change
 (fn [db [_ new-name]]
   (assoc db :file-name (last (split new-name #"\\")))))
Additionally, I want to read in the file so I can process it locally later. Assuming I'd just change my :on-change action and the event handler to do this instead, how would I do it?
I've searched for a while but found next to nothing. The only things that came up were other frameworks and such, but I don't want to introduce a new dependency for each and every new problem.
I'm assuming you want to do everything in the client using HTML5 APIs (e.g. no actual upload to a server).
This guide from MDN may come in handy: https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications
It seems you can subscribe to the event triggered when the user selects the file(s); then you can obtain a list of said files and inspect their contents through the File API: https://developer.mozilla.org/en-US/docs/Web/API/File
In your case, you'll need to save a reference to the FileList object from the event somewhere, and re-use it later.
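To make that concrete, here is a minimal, untested ClojureScript sketch. It dispatches the selected File object instead of the path and reads it with js/FileReader; the event names (::file-selected, ::file-loaded) and db keys are placeholders of mine, not anything Re-Frame prescribes:

;; In the view: dispatch the File object itself instead of the fakepath value.
[:input {:class "file-input" :type "file"
         :on-change #(re-frame/dispatch
                       [::events/file-selected (-> % .-target .-files (aget 0))])}]

;; Kick off the asynchronous read and store the real file name.
(re-frame/reg-event-db
 ::file-selected
 (fn [db [_ file]]
   (let [reader (js/FileReader.)]
     (set! (.-onload reader)
           #(re-frame/dispatch [::file-loaded (-> % .-target .-result)]))
     (.readAsText reader file)
     (assoc db :file-name (.-name file)))))

;; Store the text content once the read finishes.
(re-frame/reg-event-db
 ::file-loaded
 (fn [db [_ content]]
   (assoc db :file-content content)))

As a bonus, (.-name file) already gives the bare file name, so the fakepath splitting becomes unnecessary. Strictly speaking the FileReader call is a side effect, so a purist would move it into an effect handler registered with reg-event-fx, but the overall shape stays the same.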
I set up the command handler for my bot using the Discord.js guide (I'm relatively new to Discord.js, as well as to JavaScript itself). However, since all my commands are in different files, is there a way I can share variables between the files? I've tried experimenting with exporting modules, but sadly could not get it to work.
For example (I think it's somewhat understandable, but still): to skip a song, you must first check whether any audio is actually streaming (which is all done in the play file), then end the current stream and move on to the next one in the queue (the variable for which is also in the play file).
I have gotten a separate music bot up and running, but all the code is in one file, linked together by if/else if/else chains. Perhaps I could just copy this code into the main file for my other bot instead of using the command handler for those specific commands?
I assume that there is a way to do this that is quite obvious, and I apologize if I am wasting people's time.
Also, I don't believe code is required for this question but if I'm wrong, please let me know.
Thank you in advance.
EDIT:
I have also read this question multiple times beforehand and have tried the solution, although I haven't gotten it to work.
A simple way to "carry over" variables without exporting anything is to assign them to a property of your client. That way, wherever you have your client (or bot) variable, you also have access to the needed information without having to require() another file.
For example...
ready.js (assuming you have an event handler; otherwise your ready event)
// Initialize an empty queue for every guild the bot is in.
// client.guilds is a Collection (a Map), so iterate its values.
client.queue = {};
for (const guild of client.guilds.values()) client.queue[guild.id] = [];
play.js
// Grab this guild's queue by reference and add a song to it.
const queue = client.queue[message.guild.id];
queue.push({ song: 'Old Town Road', requester: message.author.id });
queue.js
// Read the same shared queue from another command file.
const queue = client.queue[message.guild.id];
message.channel.send(`**${queue.length}** song${queue.length !== 1 ? 's' : ''} queued.`)
  .catch(console.error);
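Following the same pattern, the skip example from the question could read the shared state from any command file. This is only a sketch under an extra assumption: that play.js also stores its current StreamDispatcher on the client (here in a hypothetical client.dispatchers object) when playback starts.

skip.js

// Sketch; assumes play.js sets client.dispatchers[message.guild.id] = dispatcher.
const queue = client.queue[message.guild.id];
const dispatcher = client.dispatchers && client.dispatchers[message.guild.id];

if (!dispatcher) return message.channel.send('Nothing is playing.').catch(console.error);

queue.shift();    // drop the song that is currently playing
dispatcher.end(); // ending the stream lets the play logic start the next song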
So far I've only been able to get text and links from what other people on my Discord channel type, but I want to be able to save posted images/GIFs. Is there any way I can do this through the bot, or is it impossible? I'm using discord.js.
Images in Discord.js come in the form of MessageAttachments, accessed via Message#attachments. By looping through the attachments, we can retrieve the raw file via MessageAttachment#attachment and the file name (and thus its type) via MessageAttachment#name. Then we use Node's FileSystem module to write the file onto the system. Here's a quick example, which assumes you already have the message event and the message variable.
const fs = require('fs');

msg.attachments.forEach(a => {
  // Write each attachment to the system synchronously.
  fs.writeFileSync(`./${a.name}`, a.attachment);
});
Please note that in a real-world scenario you should surround the synchronous call with a try/catch block to handle errors.
Also note that, according to the docs, the attachment can be a Stream. I have yet to see this happen in the real world, but if it does, it might be worth checking whether a.attachment is an instance of Stream and, if so, using fs.createWriteStream and piping the file into it.
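A hedged sketch of that fallback (untested, using the same msg variable as above):

const fs = require('fs');
const { Stream } = require('stream');

msg.attachments.forEach(a => {
  if (a.attachment instanceof Stream) {
    // A stream gets piped to disk rather than written in one go.
    a.attachment.pipe(fs.createWriteStream(`./${a.name}`));
  } else {
    fs.writeFileSync(`./${a.name}`, a.attachment);
  }
});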
UPDATE: See MarkLogic 8 - Stream large result set to a file - JavaScript - Node.js Client API for someone's answer on how to do this in JavaScript. This question is specifically asking about XQuery.
I have a web application that consumes rest services hosted in node.js.
Node simply proxies the request to XQuery which then queries MarkLogic.
These queries already have paging setup and work fine in the normal case to return a page of data to the UI.
I need to have an export feature such that when I put a URL parameter of export=all on a request, it no longer looks up a single page.
At that point it should get the whole result set, even if it's a million records, and save it to a file.
The actual request needs to return immediately saying, "We will notify you when your download is ready."
One suggestion was to use xdmp:spawn to call the XQuery in the background which would save the results to a file. My actual HTTP request could then return immediately.
For the spawn piece, I think the idea is that I run my query with different options in order to get all results instead of one page. Then I would loop through the data and create a string variable to call xdmp:save with.
Some questions: is this a good idea? Is there a better way? If I loop through the result set and it does happen to be very large (gigabytes), it could cause memory issues.
Is there no way to directly stream the results to a file in XQuery?
Note: another idea I had was to intercept the request at the proxy (Node) layer, do an xdmp:estimate to get the record count, and then loop through, querying each page and flushing it to disk. In this case I would need to find some way to return my request immediately yet process in the background in Node, and there seem to be some ideas on that here: http://www.pubnub.com/blog/node-background-jobs-async-processing-for-async-language/
One possible strategy would be to use a self-spawning task that, on each iteration, gets the next page of the results for a query.
Instead of saving the results directly to a file, however, you might want to consider using xdmp:http-post() to send each page to a server:
http://docs.marklogic.com/xdmp:http-post?q=xdmp:http-post&v=8.0&api=true
In particular, the server could be a Node.js server that appends each page as it arrives to a file or any other data sink.
That way, Node.js could handle the long-running asynchronous IO with minimal load on the database server.
When a self-spawned task hits the end of the query, it can again use an HTTP request to notify Node.js to close the file and report that the export is finished.
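To make the shape of that concrete, here is a rough, untested XQuery sketch of such a self-spawning module. The module path (/export-page.xqy), page size, query, and Node.js endpoints are all placeholder assumptions:

xquery version "1.0-ml";

declare variable $page as xs:int external;

let $page-size := 1000
let $start := ($page - 1) * $page-size + 1
let $results := cts:search(fn:collection(), cts:word-query("example"))
                  [$start to $start + $page-size - 1]
return
  if (fn:empty($results)) then
    (: no more pages: tell Node.js to close the file and notify the user :)
    xdmp:http-post("http://localhost:3000/export/done")
  else (
    (: hand this page to Node.js, which appends it to the file :)
    xdmp:http-post("http://localhost:3000/export/chunk",
      <options xmlns="xdmp:http"/>,
      <chunk>{$results}</chunk>),
    (: spawn the next iteration as a separate task :)
    xdmp:spawn("/export-page.xqy", (xs:QName("page"), $page + 1))
  )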
Hoping that helps,
So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I had access to the update function within ZipStream.prototype.addFile, I could just call that on each message event when I get a binary buffer from the websocket. This is quite messy/hacky, though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver, and then just appending the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this, though. The only way I could think of was creating a write stream, piping buffers to it, and having a read stream read from that write stream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer, SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (CoffeeScript syntax):
archiver = require 'archiver'
{Stream} = require 'stream'

archive = archiver 'zip'
archive.pipe response # where response is the HTTP response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ...       # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    response.addTrailers
      'Content-Length': bytesOfArchive
    response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
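For reference, the same proxy-stream idea can be written in plain JavaScript with Node's built-in stream.PassThrough; this is only a sketch, where fileName, fileSize, ws, and response are the same placeholders as above:

const { PassThrough } = require('stream');
const archiver = require('archiver');

const archive = archiver('zip');
archive.pipe(response); // response is the HTTP response

// For each file coming over the websocket...
const proxyStream = new PassThrough();
archive.append(proxyStream, { name: fileName });

let numBytesStreamed = 0;
ws.on('message', (dataBuffer) => {
  numBytesStreamed += dataBuffer.length;
  proxyStream.write(dataBuffer); // a real writable end, no hand-rolled 'data' events
  if (numBytesStreamed === fileSize) proxyStream.end();
});

// ...and call archive.finalize() once every file has been appended.

A PassThrough is exactly the write-stream-feeding-a-read-stream pairing speculated about in the question, packaged as a single object.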
How is real-time autocomplete with prefix matching implemented at Quora?
Since Solr and Sphinx don't support real-time updates, what changes were made to support real-time updating?
It looks like it's done using JavaScript and jQuery. I grabbed a few key lines from the minified script on the Quora homepage that I think support this theory.
Here's an Ajax call to a resource providing JSON data:
$.ajax({type:"GET",url:this.resultsQueryPath,dataType:"json",data:a,success:this.fnbind(function(a){this.ajaxCallback(a)}),error:this.fnbind(function(a,b,c){console.log(b,c),this.requestOutstanding=!1,this.$("#results_shell").html("Could not retrieve results: "+b)})})
Note that the successful result gets put into the a variable. Then, later, here's the autocompletion based on the keydown of the question_box element, which is completing from the parent of a:
this.$("#item input.question_box").keydown(function (b) {
  if (b.keyCode == 9 && !b.shiftKey)
    for (var c = e.getLiveDomId(a.cid), d = a.parent().orderedVisibleChildren(), f = 0; f < d.length - 1; ++f)
      if (c == d[f]) {
        $(this).blur(), $("#" + d[f + 1] + " input.question_box").focus();
        return !1;
      }
})
I think this is pretty incontrovertible, but it would still be nice to have the un-minified script to compare. For instance, I can't see where resultsQueryPath comes from (I can't locate its source; it may be intentionally obfuscated).