What's the best Node.js module for making cURL-style requests?

I'm looking for a Node module to make HTTP requests. I see multiple options that all seem pretty good; is there one that's considered the best? Thanks.

In general, the request module works very well. I've done a lot with it. It's simple, elegant, powerful, and has a strong history. You can see the docs and the module here: https://github.com/request/request
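For a sense of how simple it is, here's a minimal sketch of a GET using request's callback API (the URL is just a placeholder):

```javascript
const request = require('request');

// request(uri, callback) fires the request and hands back the full
// response object plus the body once it has been read.
request('https://example.com', (error, response, body) => {
  if (error) return console.error('request failed:', error);
  console.log('status:', response.statusCode);
  console.log(body); // the response body as a string
});
```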

Related

How do I know when to trigger a Server-Sent Event?

What I want to do
I'd like to use Server-Sent Events for notifications and for another feature – I don't need to go into detail on that feature, but it requires real-time updates from a database, just like notifications do.
Possible alternatives
WebSockets would be an option, but I think they might be a bit too much, as I only require a one-way channel for this. However, I'm totally aware that I might be wrong here and WebSockets could be the best option – feel free to school me on this. The current backend setup is Node.js (an Express.js web server) with MongoDB.
Approaches I've seen so far
I've already seen some approaches to this, but the main issue is that I'd like it to be as scalable as possible. Having a loop constantly poll my database is the worst approach I've seen so far. The best approach I've seen is tailing the MongoDB oplog, but that seemed more like a hack to me than a good, solid solution.
I posted a link to this question on reddit in the subreddit r/node where the user /u/PremJyotish221 told me to use Redis with PUB/SUB, and let me tell you... it works perfectly! :)
So to anyone stumbling over this with the same problem, I can absolutely recommend it. It's fast, easy, reliable, and scalable.
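For anyone wanting a starting point, here's a minimal sketch of an Express SSE endpoint fed by Redis pub/sub. It assumes the redis v4 npm client, and the channel name "notifications" is made up – publish to it from wherever your database writes happen:

```javascript
const express = require('express');
const { createClient } = require('redis');

const app = express();

app.get('/events', async (req, res) => {
  // Standard SSE headers: keep the connection open and unbuffered.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  // A Redis connection in subscribe mode can't issue other commands,
  // so each SSE client gets its own subscriber connection.
  const subscriber = createClient();
  await subscriber.connect();
  await subscriber.subscribe('notifications', (message) => {
    res.write(`data: ${message}\n\n`); // one SSE frame per message
  });

  req.on('close', () => subscriber.quit()); // clean up on disconnect
});

app.listen(3000);
```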

When would I use express-validator over @hapi/joi?

I've implemented my code base using @hapi/joi, but I've recently come across express-validator, which appears to do all the same stuff but is a lot simpler to use. I've reviewed some of the questions here on this topic, but they're mostly dated.
I realize this may be subjective, but when would you recommend I use express-validator over @hapi/joi? Maybe express-validator just works better because I'm using Express, which is what the module was built for.
I'm asking because all the tutorials I've used make use of @hapi/joi, and I can't understand why. Maybe I'm missing something obvious.
What I really like about express-validator is how easy it is to use sanitization in Express routes, and barring anyone having problems with it, I may switch permanently. @hapi/joi has this ability too, but it's harder to implement IMO.
@Gary,
@hapi/joi allows you to create a blueprint (schema) for your validation, while express-validator is a set of Express middlewares that wraps validator.js.
While working with Express projects, you'll find express-validator can be used out of the box without much configuration, and that is exactly where I would use it: when I need to set up validation for my requests and I'm not too bothered about the level of customization or the readability of my validation params.
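To make the difference concrete, here's a hypothetical sketch of the same rule – a username of at least three characters – expressed in both libraries (route paths and field names are made up):

```javascript
const express = require('express');
const Joi = require('@hapi/joi');
const { body, validationResult } = require('express-validator');

const app = express();
app.use(express.json());

// @hapi/joi: the rule lives in a standalone, reusable schema.
const userSchema = Joi.object({
  username: Joi.string().min(3).required(),
});

app.post('/users-joi', (req, res) => {
  const { error, value } = userSchema.validate(req.body);
  if (error) return res.status(400).json({ error: error.message });
  res.status(201).json(value);
});

// express-validator: the same rule declared inline as route middleware.
app.post('/users-ev',
  body('username').isString().isLength({ min: 3 }),
  (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    res.status(201).json({ username: req.body.username });
  });

app.listen(3000);
```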
If you want to see a critical comparison of the two, there's a great blog post by 101node.io: https://101node.io/blog/javascript-validators-comparison-using-joi-vs-express-validator/
Happy coding!

Best way to write a GitLab module

I would like to extend GitLab, mainly by writing a custom dashboard. I cannot find any documentation on the proper/best way to do this. Can someone point me in the right direction? I've already tried searching.
There is no way of doing that except by forking GitLab.
The only way of doing those things without forking (and thus merging afterwards...) is the API (since it is guaranteed to be stable), but it is probably overkill for your application.
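If the API route works for you, here's a minimal sketch of pulling project data for a custom dashboard over the GitLab REST API (v4) from Node, without touching GitLab's code. The hostname and token are placeholders; see https://docs.gitlab.com/ee/api/ for what the API exposes:

```javascript
const https = require('https');

https.get({
  hostname: 'gitlab.example.com',                         // your GitLab host
  path: '/api/v4/projects?membership=true',               // projects you belong to
  headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN }, // personal access token
}, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    // Each project object carries id, name, last_activity_at, etc.
    for (const project of JSON.parse(body)) {
      console.log(project.name);
    }
  });
});
```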

Meteor server inside browser: feasibility?

Is it technically feasible to run the Meteor server-side stuff inside a browser tab?
What technical limitations of the browser environment would absolutely eliminate this possibility?
To be clear, yes, I am asking what you think I'm asking -- NodeJS in a tab with Meteor on top! :)
Sure, I'll have a go.
Is it technically feasible?
What would you need to make this happen?
1. node.js in the browser. It exists, kind of.
2. A database backend. You'd need MongoDB, also in the browser. I bet you could implement something on top of HTML5 localStorage, but it'd be a slog. Add to that the fact that Meteor doesn't currently support anything but MongoDB, and you're in for a world of hurt.
3. The magical Meteoric "glue" that makes it all work together -- in other words, the reason you're using Meteor in the first place.
If what you're really asking is
Is it a good idea?
The answer is almost certainly no.
I know several of the people who work at Meteor. This, ah, isn't on their roadmap.
That said, if you could hack it together, give them a call -- especially if you happen to be looking for a job! :)
If you're asking about how to prototype and run a Meteor app without having to install anything on your machine, you certainly aren't the first person with this idea. It's already been done:
https://www.discovermeteor.com/2013/10/04/meteor-nitrous/
In short, a hosted development platform for Meteor is probably superior to trying to cram it all in a browser anyway.

Are there any building blocks for a search engine that will scrape other sites?

I want to build a search service for one particular thing. The data is freely available out there via free classified services and a host of other sites.
Are there any building blocks, e.g. open-source crawlers, that I could customize rather than build from scratch?
Any advice on building such a product? Not just technical advice, but also any privacy/legal issues I might need to take into consideration.
E.g. do I need to give credit for where the results come from and link to the original, if I get them from many places?
Edit: By the way, I am using GWT with JS for the front-end, and I haven't decided on the language for the back-end – either PHP or Python. Thoughts?
There are a few building blocks in Python you can use.
BeautifulSoup [http://www.crummy.com/software/BeautifulSoup/] for parsing HTML. It can handle bad markup too, and its API is very easy – way better than any DOM-like tool, for me. My friend used it to scrape his old phpBB forum with success. It has pretty good docs.
mechanize [http://wwwsearch.sourceforge.net/mechanize/] is a browser-simulating HTTP client library. It handles cookies, form filling, and so on. It's also easy to use, but it helps if you understand how HTTP works.
Scrapy [http://dev.scrapy.org/] – this is a relatively new thing: a whole scraping framework based on Twisted. I haven't played with it much.
I use the first two for my needs; for example, it took about 20 lines of code to build an automated testing tool for a three-stage poll, complete with simulated waits for the user entering data and so on.
I made a screen-scraper in Ruby that took like five minutes. Apparently this dude has it down to 60 seconds! I'm not sure if Ruby is as scalable or fast as what you're looking for, but I've never seen a faster route to a proof-of-concept or a prototype.
The secret is a library called "hpricot", which was built for exactly this purpose.
I don't know anything about PHP or Python or what's available for those development systems/languages.
Good luck!
