I want to build an API that lets users download files, but there is one problem: the files can be huge, sometimes more than 100GB. I'm thinking about building the API with Node.js, but I don't know if it's a good idea to implement the file download feature in Node. Some users may spend more than a day on a single download, and since Node is single-threaded, I'm afraid a download could take so long that it slows the other requests down, or worse, blocks them.
I'm going to use cloud computing to host this API, and I'm going to start studying serverless hosting to see whether it's worth it in my case. Do you have any idea what I should use to build the download feature? Is there any open-source code to use as an example?
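For reference, here is a minimal sketch of what a streaming download endpoint could look like in Node (Express assumed; the route and file path are placeholders). Because the file is piped to the response in small chunks, even a very long download doesn't hold the event loop:

    // Minimal sketch: a streaming download endpoint in Node/Express.
    // '/data/big-file.bin' is a placeholder path. createReadStream reads
    // the file in small chunks and pipe() applies backpressure, so memory
    // use stays flat and the event loop stays free for other requests.
    const express = require('express');
    const fs = require('fs');

    const app = express();

    app.get('/download', (req, res) => {
      const filePath = '/data/big-file.bin'; // placeholder
      const stat = fs.statSync(filePath);

      res.setHeader('Content-Length', stat.size);
      res.setHeader('Content-Type', 'application/octet-stream');
      res.setHeader('Content-Disposition', 'attachment; filename="big-file.bin"');

      fs.createReadStream(filePath).pipe(res);
    });

    app.listen(3000);

Streaming is the usual answer to the single-thread worry: the thread is only busy while shuffling each small chunk, not for the entire day-long download.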
I want to save the "state" of my application each time it is changed, and load it each time the application is booted up.
The "state" will be a simple object with a handful of variables in it, the idea is to JSON.stringify it to a file, and JSON.parse it when needed.
From what I understand, this cannot be done using Node's fs, since files on Heroku are not permanent.
I cannot use S3 either, because it's not free (free plan only lasts a year), and this is a hobby project of mine - I am not willing to pay for it.
Another recurring suggestion is to use some sort of database, but I think that is a waste of time, since I will only be dealing with one very small file.
Essentially, my question is: how can I achieve something as close as possible to this?
WRITE("filename.txt",JSON.stringify(x));
x=JSON.parse(READ("filename.txt"));
(P.S.: I've read somewhere, I can't seem to remember where, that Heroku gives you 100MB for free, which would be way more than enough. What is that? Does it have anything to do with my code?)
I can think of a few ways to do this for free. They all pretty much boil down to: "What free service allows me to read/write arbitrary file content and access it via an API?"…
Do you use or already pay for Dropbox (or something similar)? If so, you could use the Dropbox API for Node.js to save/load your application state.
You could use the Github Gist API and just update the same Gist over and over (see the sketch below).
Otherwise, you mentioned databases. Sure, a database would be overkill tech-wise, but given your constraints (and the fact that you can get a small DB for free on Heroku), and how much overhead implementing one of the aforementioned APIs would add, it might be the best option.
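To make the Gist option concrete, here is a minimal sketch against the GitHub Gist API (assuming Node 18+ for the built-in fetch; GIST_ID, GITHUB_TOKEN, and the state.json filename are placeholders you'd supply, and the token needs the gist scope):

    // Minimal sketch: persist a small app state object in a GitHub Gist.
    // GIST_ID and GITHUB_TOKEN are placeholder environment variables.
    const GIST_ID = process.env.GIST_ID;
    const API = `https://api.github.com/gists/${GIST_ID}`;
    const HEADERS = {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: 'application/vnd.github+json',
    };

    async function saveState(state) {
      // PATCH the gist, overwriting the same file each time.
      await fetch(API, {
        method: 'PATCH',
        headers: HEADERS,
        body: JSON.stringify({
          files: { 'state.json': { content: JSON.stringify(state) } },
        }),
      });
    }

    async function loadState() {
      const res = await fetch(API, { headers: HEADERS });
      const gist = await res.json();
      return JSON.parse(gist.files['state.json'].content);
    }

That pair is essentially the WRITE/READ from your question, just backed by a free service instead of the local filesystem.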
Hope this helps.
Currently we are running a C# project (built on SharePoint) and have implemented a series of automated processes to help delivery. Here are the details.
Continuous Integration. A typical CI system for frequent compilation and deployment in the DEV environment.
Partial Package. Every week, a list of defects with accompanying fixes is identified, and the corresponding assemblies are fetched from the full package to form a partial package. The partial package is deployed and tested in subsequent environments.
In this pipeline, two packages are passing through and being verified. Extra effort is spent building up a new system (web site, scripts, process, etc.) for the partial packages. However, some factors hinder further improvement.
Build and deploy time is too long. On developers' machines, every single modification to an assembly triggers a 5-to-10-minute redeployment in IIS. In addition, it takes 15 minutes (or even more) to rebuild the whole solution. (This is the most painful part of the project.)
Geographical difference. Every final package is delivered to another office, so manual operation is inevitable and a small package size is preferred.
I will be really grateful to have your opinions to push the Continuous Delivery practices forward. Thanks!
I imagine the reason that this question has no answers is because its scope is too large. There are far too many variables that need to be eliminated, but I'll try to help. I'm not sure of your skill level either so my apologies in advance for the basics, but I think they'll help improve and better focus your question.
Scope your problem as narrowly as possible
"Too long" is a very subjective term. I know of some larger projects that would love to see 15 minute build times. Given your question there's no way to know if you are experiencing a configuration problem or an infrastructure problem. An example of a configuration issue would be, are your projects taking full advantage of multiple cores by being built parallel /m switch? An example of an infrastructure issue would be if you're trying to move large amounts of data over a slow line using ineffective or defective hardware. It sounds like you are seeing the same times across different machines so you may want to focus on configuration.
Break down your build into "tasks" and each task into the most concise steps possible
This will do the most to help you tune your configuration and understand what you need to better orchestrate. If you are building a solution using a CI server, you are probably running a command like msbuild.exe OurProduct.sln, which is the right way to get something up and running fast so there IS some feedback. But in order to optimize, this solution will need to be broken down into independent projects. If you find one project that's causing the bulk of your time sink, it may indicate other issues, or it may just be the core project that everything else depends on. How you handle your build job dependencies depends on your CI server and solution. Doing it this way will create more orchestration work on your end, but gives faster feedback when that's what's required, since you're only building the project that had the change, not the complete solution.
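As a rough command-line illustration (the Core project name is hypothetical), the difference is between always rebuilding the whole solution and rebuilding only the project that changed, with /m enabling parallel builds across cores:

    REM Full solution build, parallelized across cores
    msbuild.exe OurProduct.sln /m /t:Build /p:Configuration=Release

    REM Rebuild only the project that changed (hypothetical project name)
    msbuild.exe Core\Core.csproj /m /t:Build /p:Configuration=Release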
I'm not sure what you mean by the "geographical difference" thing. Is this a "push" to the office or a "pull" from the offices? This is a whole other question. HOW are you getting the files there? And why would that require a manual step?
Narrow your scope and ask multiple questions, and you will probably get better (not to mention shorter and more concise) answers.
Best!
I'm not a C# developer, but the principles remain the same.
To speed up your builds, it will be necessary to break your application up into smaller chunks if possible. If that's not possible, then you've got bigger problems to attack right now. Remember the principles of APIs, components, and separation of concerns. If you're not familiar with these principles, it's definitely worth the time to learn about them.
In terms of deployment: it's great that you've automated it, but it sounds like you are doing a big-bang deployment. Can you think of a way to deploy only deltas to the server(s), or do you deploy a single compressed file? Break it up if possible.
I wanted to know if anyone has been using AirBnB Rendr: is it stable and OK to use in commercial projects, or is it still changing a lot?
I'm developing a website which can run both client- and server-based, which means I need to be able to render pages and widgets on the server and on the client.
The server is running Node.js and dust.js and has custom server-side code to render the pages and widgets. I need to pick how to handle it on the client side.
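(For context on why sharing code is plausible here at all: a dust.js template compiles down to plain JavaScript, so the same compiled template can in principle be rendered on the server or shipped to the browser. A minimal server-side sketch, assuming the dustjs-linkedin package and a made-up template:)

    // Minimal dust.js sketch (dustjs-linkedin package assumed).
    // The template name 'greeting' and its body are hypothetical.
    const dust = require('dustjs-linkedin');

    // Compile the template source to plain JavaScript and register it;
    // the same compiled output could also be served to the browser.
    const compiled = dust.compile('Hello {name}!', 'greeting');
    dust.loadSource(compiled);

    dust.render('greeting', { name: 'World' }, (err, out) => {
      if (err) throw err;
      console.log(out); // "Hello World!"
    });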
Naturally I want to avoid repeating code, but obviously the client is different. I can:
1) Keep my current page-based server rendering and develop custom client-side code.
2) Use Backbone.js on the client side and keep my server-based code the same.
3) Use AirBnB Rendr, which is based on Node.js and Backbone, to use the same code on the client and on the server. AirBnB Rendr Library
I like the 3rd idea very much, but I'm looking for some input from you guys.
Has anyone used it? Any experience with it in terms of stability and/or how often its API changes, etc.?
I've just started playing around with Rendr. If I ignore the learning curve and onboarding friction, I like it a lot, and I plan to write my next large production app using Rendr.
Unfortunately, as bababa listed above, the documentation needs a lot of work. There is an explanation of how Rendr works in its README and the example app's README, but beyond that you'll need to source dive in order to figure out how the gears are turning. Currently, there is no forum for questions (other than Stack Overflow :D) and I've had a hard time figuring out its idioms on my own.
Despite all the struggles, I finally see the light and I'm starting to understand why Rendr is so powerful.
tl;dr - If you're willing to source dive and figure out your own workflow, I would suggest using Rendr. Otherwise, I would recommend going old school and writing a traditional client app with a more mature library. (Is it too early to say that? =X)
Well, given that AirBnb is a successful commercial enterprise, there's some validation that the library works well enough for them. This question is probably best answered by watching their GitHub commit log for breaking changes. Given that Backbone is at 1.0 and essentially stable at this point, Rendr will probably stabilize quickly, but honestly your fear of instability is probably unjustified. I think Rendr looks compelling, and although my current project is using a very similar home-grown solution, I would consider using Rendr in a future project or even porting our code to it. "Stability" per se is much less important to the web development community than in other situations like packaged or embedded software.
I used (or tried to use) Rendr on a project and gave up. There are just too many limitations (currently), and the lack of documentation doesn't help. I ended up needing to rewrite the source code to accomplish things I would consider trivial with other frameworks, such as passing multiple collections to a view. It just wasn't possible (at the time I used it), and that was a deal breaker. Not being able to pass a collection of categories and results to a page was too much of a limitation.
I have no doubt it will eventually be ready for production use, but right now I would say that unless you are an engineer at AirBnb and know how to hack the source, then no, it's not ready.
If you really want to know whether it will work for your needs, take a look at the issue list on GitHub. That will give you a good idea of where the project's at.
I want to rewrite a complete community website in Node.js, Express, and NowJS with MongoDB. It's currently in PHP using the CodeIgniter framework. It includes functionality such as your own profile page, photo album, guestbook, internal messages, contacts, and more. I'm also going to add an IM to it and some other things like a forum and so on. It's a pretty big project.

I have to make a decision about which techniques to use in the web application. So I did a little research and found Node, Express, and NowJS.

Should I stick to finishing the application in PHP (CodeIgniter), MySQL, and Ajax, or can I do this in Express, MongoDB, and NowJS?

Can anyone recommend this for use on a live production site? And if so, are there any security issues one should know about? General guidelines?

Help would be really appreciated so I can make up my mind and finish the project.

Regards,
George
The problem with Node.js being young is not that it's a half-baked product or something; in fact, it's growing very fast and new developments are happening at a breathtaking pace. So you need to keep up with them while developing.
Otherwise, there are huge projects out there developed entirely with Node and Express. Take a look at expressjs.com/applications to see what kind of commercial projects are built using it.
As far as security, sessions, etc. are concerned: unlike ASP/PHP, you don't get most of these features out of the box. You'll need to either write them yourself or use open-source frameworks. Either way, you and only you have to ensure that your application has all its bases covered. With flexibility comes complexity.
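For example, sessions in Express come from a separate middleware package rather than from the core. A minimal sketch using express-session (the secret and cookie settings are illustrative placeholders):

    // Minimal sketch: session support via the express-session middleware.
    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(session({
      secret: 'replace-with-a-real-secret', // placeholder value
      resave: false,
      saveUninitialized: false,
      cookie: { httpOnly: true, secure: true }, // secure requires HTTPS
    }));

    app.get('/', (req, res) => {
      // Count views per session as a trivial demonstration.
      req.session.views = (req.session.views || 0) + 1;
      res.send(`You have visited this page ${req.session.views} times`);
    });

    app.listen(3000);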
It should be noted that Node.js shines at real-time I/O. If you think that is something you need, then I highly recommend going for it.
What you describe does sound like a big project.
If you have the time to spare, I would suggest picking a small portion of it that deals with managing secure sessions (e.g. the profile page). Implement that in Express to get a sense of how it compares to the existing PHP. If you like it, keep going.
Particularly when security is at stake, always try to use existing components when they are available. Node's minimalism makes it tempting to 'roll your own,' but it's very easy to make a security mistake with anything less than expert knowledge.
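As one concrete illustration of reusing existing components, here is a sketch using the helmet middleware, which sets a batch of sensible security-related HTTP headers in one line (illustrative, not a complete hardening checklist):

    // Sketch: leaning on an existing, widely reviewed middleware (helmet)
    // instead of hand-rolling security headers.
    const express = require('express');
    const helmet = require('helmet');

    const app = express();
    app.use(helmet()); // sets sensible security-related HTTP headers

    app.get('/', (req, res) => res.send('ok'));
    app.listen(3000);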
I was thinking about this the other day and wanted to see what the SO community had to say about the subject.
As it stands right now Common Lisp is getting some attention as a web development platform, and with good reason (of which I'm sure you are already convinced).
I was wondering how one would go about using a library in a shared environment in a similar fashion to PHP.
If I set up something like SBCL as an interpreter to interpret FASL files the way Python or PHP work, what would be the best way to use libraries (like clsql, for instance)?
Most come as ASDF-installable libraries, but it would be a stupid amount of overhead to require and install a library each and every time a request is made.
Keeping in mind that this is for shared hosting, would it be best to…
1) Install system-wide copies of the libraries for use in applications. This reduces space, but there may be problems with using the correct version of a library.
2) Allow users (through a control panel) to install local copies for themselves. More space, but no version problems.
3) Tell them to wrap it in a module and load it on demand, like Python does (I'm not sure if/how this can be done with Lisp). Just being able to load a library for use would be the best option, but I don't think a lot of them are designed to be used this way.
Anyways, looking to hear your opinions, thanks.
There are two ways I would look at it:
start a Lisp for each request
With this approach, it would be much better for the Lisp to be a saved image with all necessary libraries and data already loaded. But this approach does not look very promising to me.
run a Lisp and let a frontend (web browser, another web server, ...) connect to it
This way you can either start a saved image or a Lisp that loads a bunch of stuff once and serves the requests.
I like to use saved images/applications in a deployment scenario. They can be quickly started, contain all the necessary software and are independent of library changes.
So it might be useful to provide pre-configured Lisp images that contain the necessary software or let the user configure and save an image.