How do you open zlib compressed data from Node.js in Ruby?

I am using the Pako NPM package like so: pako.deflate(JSON.stringify(json), { to: 'string' }) to compress a JSON payload, and I am able to open it correctly over a REST call with JSON.parse(pako.inflate(response.body, { to: 'string' })) in another Node.js service.
However, when trying to inflate and view the JSON in a Ruby environment, I consistently get 'inflate': incorrect header check (Zlib::DataError) from Zlib::Inflate.inflate(resp.body). So, in general, how does one make this data readable from a microservice potentially written in a different language, using zlib (or is there another package that can do this)?
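For reference, a minimal sketch of the Node.js side as described above; pako's { to: 'string' } option makes deflate return a binary string rather than a Uint8Array, and whatever transport is used, those bytes have to reach the Ruby service unmodified for Zlib::Inflate to accept them.
const pako = require('pako');

const payload = { hello: 'world' };

// With { to: 'string' } pako returns a binary string instead of a Uint8Array
const deflated = pako.deflate(JSON.stringify(payload), { to: 'string' });

// Round-tripping inside Node.js works, as described in the question
const roundTripped = JSON.parse(pako.inflate(deflated, { to: 'string' }));
console.log(roundTripped); // { hello: 'world' }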

Related

Implementing push feed in node js

I am trying to implement the push feed provided by Sportradar (https://developer.sportradar.com/docs/read/american_football/NFL_v6#push-feeds) in Node.js.
It uses an HTTP publish/subscribe model.
They already provide clients for Java and Python, but I looked for something for Node.js and could not find anything.
I have looked into https://github.com/scripting/feedRead, but that is for reading XML, whereas the Sportradar push feed provides JSON.
It returns a JSON response in the following format: https://developer.sportradar.com/files/NFL_v6_Push_Clock_Example.json
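Not knowing the official Sportradar client details, here is a rough sketch of one way to consume such a feed in Node.js. It assumes the push feed is a long-lived HTTP response that streams one JSON object per line (an assumption; adjust the parsing to whatever framing the feed actually uses), and FEED_URL is a placeholder for the real URL and api_key from the Sportradar docs linked above.
const https = require('https');
const readline = require('readline');

// Placeholder: substitute the real push-feed URL and api_key from the Sportradar docs
const FEED_URL = 'https://example.invalid/nfl/push/clock/subscribe?api_key=YOUR_KEY';

https.get(FEED_URL, (res) => {
  // Read the long-lived response line by line
  const lines = readline.createInterface({ input: res });
  lines.on('line', (line) => {
    if (!line.trim()) return;          // skip keep-alive blank lines
    const event = JSON.parse(line);    // e.g. the clock payload linked above
    console.log(event);
  });
}).on('error', console.error);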

openapi-request-validator Validate against yaml

Please let me know if the openapi-request-validator Node.js library can be used to validate a request against an OpenAPI 3 spec YAML file. I had a look at express-openapi-validator, but my application does not use Express; my service is a Lambda function (Node.js) deployed in AWS.
I believe you can use openapi-request-validator in your Lambda function; its function signature is already very friendly to the OpenAPI spec YAML file. What you can do (see the sketch after this list):
Include the OpenAPI spec YAML file in the zip file when deploying to AWS.
At runtime, load the YAML file and convert it into a JavaScript object using a library such as js-yaml.
Write a simple function to do the following:
Look up the JavaScript spec object based on the request path to find the parameters, requestBody, schemas, etc. required by OpenAPIRequestValidator.
Transform the incoming API Gateway proxy event object (I assume it's a proxy integration) into the format that validateRequest expects.
Then you will be able to construct a new OpenAPIRequestValidator and call its validateRequest method to validate the transformed request object.
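A rough sketch of those steps, with hypothetical file names and a plain API Gateway proxy event; the exact constructor options for OpenAPIRequestValidator may differ slightly between versions, so verify against the package docs.
const fs = require('fs');
const yaml = require('js-yaml');
// Depending on the package version, the class may be the default export
const OpenAPIRequestValidator = require('openapi-request-validator').default;

// 1. Load the spec YAML bundled into the deployment zip (hypothetical file name)
const spec = yaml.load(fs.readFileSync('./openapi.yaml', 'utf8'));

exports.handler = async (event) => {
  // 2. Look up the operation for this request (assumes an exact path match)
  const operation = spec.paths[event.path][event.httpMethod.toLowerCase()];

  // 3. Build a validator from the operation's parameters/requestBody/schemas
  //    (option names follow the answer above; verify against the package docs)
  const validator = new OpenAPIRequestValidator({
    parameters: operation.parameters,
    requestBody: operation.requestBody,
    schemas: spec.components && spec.components.schemas,
  });

  // 4. Transform the API Gateway proxy event into what validateRequest expects
  const errors = validator.validateRequest({
    headers: event.headers,
    body: event.body ? JSON.parse(event.body) : undefined,
    params: event.pathParameters || {},
    query: event.queryStringParameters || {},
  });

  if (errors) {
    return { statusCode: 400, body: JSON.stringify(errors) };
  }
  // ...handle the request as usual...
  return { statusCode: 200, body: '{}' };
};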

apollo graphql query an uploaded file

Apollo Server 2.0 has the ability to receive file uploads as described in this blog post.
However, all the tutorials and blog posts I found only showed how to upload a file. Nobody demonstrated how to actually retrieve the file back to display it onscreen.
Does anybody know how to properly query the file contents for display onscreen?
Or is there perhaps no way of querying a file, so that you have to build a separate REST endpoint to retrieve the contents?
Some thoughts:
I imagine the query to be something like
query {
  fetchImage(id: "someid")
}
with the respective server-side definition
type Query {
  fetchImage(id: ID!): Upload # maybe also a custom type, but how do I include the actual file contents?
}
Hint: Upload is a scalar type that apollo-server automatically adds to your type definitions. It is used for the upload, so I imagine it is also usable for the download/query. Please read the blog post mentioned above for more information.
The response from a GraphQL service is always serialized as a JSON object. Technically, a format other than JSON could be used in serialization but in practice only JSON is used because it meets the serialization requirements in the spec. So, the only way to send a file through GraphQL would be to convert the file into some format that's JSON-compatible. For example, you could convert a Buffer to a byte array and send that as an array of integers. You would also have to send the appropriate mime type. It would be up to the client to convert the byte array back into a usable format on receiving the response.
If you go this route, you'd have to use your own scalar or object type -- the Upload scalar does not support serialization so it will throw if you try to use it as an output type (and it's not really suitable for this sort of thing anyway).
However, while doing this is technically possible, it's also inadvisable. Serializing a larger file could cause you to run out of memory since there's no way to stream data through GraphQL (the entire response has to be in memory before it can be sent). It's much better to serve the file statically (ideally using nginx instead of Node). If your API needs to refer to the file, it can then just return the file's path.
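As a rough sketch of the byte-array approach (not the Upload scalar), using an assumed File type and a hypothetical lookup from id to a path on disk:
const { ApolloServer, gql } = require('apollo-server');
const fs = require('fs');

const typeDefs = gql`
  type File {
    mimetype: String!
    data: [Int!]!   # raw bytes as integers; the client rebuilds the file
  }

  type Query {
    fetchImage(id: ID!): File
  }
`;

const resolvers = {
  Query: {
    fetchImage: (_, { id }) => {
      // Hypothetical lookup: map the id to a file on disk
      const buffer = fs.readFileSync(`./uploads/${id}.jpg`);
      return {
        mimetype: 'image/jpeg',
        data: Array.from(buffer),   // Buffer -> array of byte values
      };
    },
  },
};

new ApolloServer({ typeDefs, resolvers }).listen();
As noted above, this is only workable for small files; anything large is better served statically.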
You can do this by installing Express alongside Apollo Server:
apollo-server-express
Install the above package and instantiate an Express app with Apollo Server as explained in the package docs.
Then set the static folder using Express like this:
app.use("/uploads", express.static("uploads")); // Serve static files over HTTP
uploads is my static folder, and /uploads will serve GET requests for that path.
// Now I can access static files like this
http://localhost:4000/uploads/test.jpg
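Putting that together, a minimal sketch in the Apollo Server 2.x style (the uploads folder next to the server code is an assumption):
const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

const typeDefs = gql`
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

const app = express();

// Serve files saved under ./uploads at http://localhost:4000/uploads/<name>
app.use('/uploads', express.static('uploads'));

const server = new ApolloServer({ typeDefs, resolvers });
server.applyMiddleware({ app });

app.listen(4000, () =>
  console.log('GraphQL at /graphql, static files under /uploads')
);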

How to know mime type when using Github API

I use a JS lib (https://github.com/mikedeboer/node-github) to call the GitHub API (https://developer.github.com/v3/repos/contents/) to get content from a repo.
When the returned type is 'file', I want to know its mime type. If I write the returned content into a file on the hard drive, there are lots of ways to tell the mime type.
My question is how to know the mime type without writing it into a file on the hard drive.
That information is not available through the GitHub API (which, for performance reasons, does not scan all files to determine their MIME type).
That means, since you don't want to write the file to disk and analyze it with, for instance, npm mime-type, you will have to rely on the file name extension (with npm broofa/node-mime, now npm mime, for instance).
That is not as reliable as analyzing the content itself, but it is a possible workaround.
var mime = require('mime');
mime.lookup('/path/to/file.txt'); // => 'text/plain'
mime.lookup('file.txt'); // => 'text/plain'

Temporary File Download

Is there a service that creates basically a one-time download of a file, preferably something I can use from NodeJS?
I've done some research on FilePicker and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with Node.js, but I'm using Meteor at the same time, so many Node things will probably conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// Create a link
downloads.insert({url: "/mydownload.zip", downloaded: false});

Meteor.Router.add('/file/:id', 'GET', function(id) {
  var download = downloads.findOne(id);
  if (download) {
    if (download.downloaded) {
      this.response.send("You've already downloaded me");
    } else {
      // Mark it as used so the link only works once
      downloads.update(download._id, {$set: {downloaded: true}});
      // I guess you could just redirect or stream the file for an extra layer of surety
      this.response.redirect(download.url);
    }
  }
});
On the client you can use /file/{{_id}} as the link, where _id is the _id of the file's document in downloads.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using the security policies to restrict downloading the file to a specific time interval.
In addition to using the router package,
in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require('fs');
The fs variable should be declared on the server only. The fs package is used by Meteor and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete the file after a certain amount of time by using Meteor.setTimeout, as in the sketch below.
The question is: where do the files to be downloaded come from?
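A rough sketch of that idea, using the old Meteor APIs referenced above; fileContents is a placeholder for whatever data you want to make downloadable, and /tmp is an assumed location.
// Server only
var require = __meteor_bootstrap__.require;
var fs = require('fs');

var fileName = Meteor.uuid() + '.zip';          // unique, hard-to-guess name
fs.writeFileSync('/tmp/' + fileName, fileContents);

// Remove the file after ten minutes so the link stops working
Meteor.setTimeout(function () {
  fs.unlinkSync('/tmp/' + fileName);
}, 10 * 60 * 1000);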
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: they offer a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
This, in combination with something like $.download(), will make for a seamless experience, which in turn prevents unauthorized downloads.
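A minimal sketch of the serve-once idea in plain Node.js (file names and the /tmp location are assumptions), which fits the ephemeral Heroku filesystem described above:
const fs = require('fs');
const http = require('http');
const path = require('path');

const TMP_DIR = '/tmp/one-time-downloads';   // assumed location for generated files

http.createServer((req, res) => {
  // basename() keeps the lookup inside TMP_DIR
  const filePath = path.join(TMP_DIR, path.basename(req.url));

  if (!fs.existsSync(filePath)) {
    res.writeHead(404);
    return res.end('Link expired or invalid');
  }

  res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
  fs.createReadStream(filePath)
    .on('end', () => fs.unlink(filePath, () => {}))   // delete after the first download
    .pipe(res);
}).listen(3000);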
