After parsing a web site I end up with a data set that needs to be read and updated by a human. I could save the data to almost any DB, but I prefer MongoDB. To read and update the data I could reinvent the wheel: develop a REST API in Node that uses the MongoDB driver, then build a front-end client with an HTML form so the user can read and update each document. But since it is 2017, is there a complete framework that facilitates all this work, or do I need to go Node.js + Express + Mongo + REST client + custom HTML?
Your use case seems too specific for there to be a complete off-the-shelf solution. I think you will have to build it yourself.
If you can specify more about what the data looks like and what needs to be done with it, that might help.
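That said, the Node/Express/Mongo route involves less boilerplate than it sounds. A rough sketch of the read/update endpoints (the database name, collection name, port, and the way fields are updated are placeholder assumptions, not anything from your actual data set):

```js
// Minimal read/update REST API sketch with Express and the official MongoDB driver.
// 'scraped', 'documents', and the port are placeholders.
const express = require('express');
const { MongoClient, ObjectId } = require('mongodb');

const app = express();
app.use(express.json());

MongoClient.connect('mongodb://localhost:27017')
  .then((client) => {
    const docs = client.db('scraped').collection('documents');

    // List documents so a human can review them.
    app.get('/documents', async (req, res) => {
      res.json(await docs.find().limit(50).toArray());
    });

    // Save a single document after the human has edited it in the form.
    app.put('/documents/:id', async (req, res) => {
      await docs.updateOne(
        { _id: new ObjectId(req.params.id) },
        { $set: req.body }
      );
      res.sendStatus(204);
    });

    app.listen(3000);
  })
  .catch(console.error);
```

The front end can then be as simple as a static page that fetches /documents and PUTs the edited fields back.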
I am prototyping a Shopware App right now, where I want to extend the search with our search API. We already have a working plugin in the store for that.
I found those two references for hooks:
https://developer.shopware.com/docs/resources/references/app-reference/webhook-events-reference
https://developer.shopware.com/docs/resources/references/app-reference/script-reference/script-hooks-reference
It seems there is no webhook for the search at all, just a script hook for a search that has already finished. In the plugin, we could simply extend the ProductSearchRoute and be completely flexible.
Are search extensions not planned right now?
Cheers,
Tobias
I assume you want to alter the criteria used to fetch the products. As of today this is not yet possible with non-self-hosted apps. You could use the app scripts to enrich or replace the contents of an already loaded page, as you mentioned, though that obviously comes with some performance drawbacks. The capabilities of apps are being enhanced continuously, so there's a chance search manipulation might become possible rather soon.
I was wondering if there are modules, or code snippets, to create a program that connects to the Microsoft Store in the background and downloads an app (without pyAutoGUI). Thanks in advance.
There is no official API for accessing the Microsoft Store. A possible solution is to use the requests library (or an equivalent) to build a bot that accesses the various fields available on the website and navigates through them. If you always want the same application to be downloaded, you could go directly to that application's page and issue a GET request for the download button's target; this should work in theory, but it will also keep breaking at short intervals, since Microsoft keeps making changes to its website.
P.S. You might want to fool the website by adding browser-like headers to your request.
I have a project I am working on which will be open source. The project records data from all over. People can look at the instructions on a website, build the prototype, and the prototype will contribute its data to a giant database.
data.sparkfun.com/ is perfect for this, except for a major problem. In order to push data, you need to put the Private Key in the code (docs). This will also allow anybody with the code (which will be everybody looking at my project because it is open source) to edit, modify, and delete data from the database because they have the Private Key.
Is there any alternative to data.sparkfun for free so that I can achieve this? I am using NodeJS as the main language for my project.
EDIT: I also do not have a server to host my own database on, so I would need a hosting service (which is why data.sparkfun is so close to what I need).
I think something like this project, http://docs.dat-data.com/, could work. It is built with Node. It's not necessarily perfect, because it's a bit more oriented towards files than a database, but you may be able to adapt it to your application. It uses cryptography and versioning for security.
I found something called Firebase, which is hosted by Google. It offers a database, file storage, and a way to identify users. You need an API key for posting and retrieving data, but it is not meant to be private. It also lets you set security rules that disallow editing and deleting data unless you own the database.
It supports iOS apps, Android apps, and web apps (easy to integrate with Node.js). The free tier has some limitations, but it is enough for most hobbyists.
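For reference, contributing a data point from Node takes only a few lines with the Firebase SDK. A minimal sketch, assuming the Realtime Database and placeholder config values (the security rules that stop other people from editing or deleting data are configured separately in the Firebase console):

```js
// Sketch: pushing a reading to a Firebase Realtime Database from Node.
// The apiKey and databaseURL are placeholders; the apiKey is not a secret --
// write restrictions are enforced by the database's security rules instead.
const firebase = require('firebase/app');
require('firebase/database');

firebase.initializeApp({
  apiKey: 'YOUR_PUBLIC_API_KEY',
  databaseURL: 'https://your-project.firebaseio.com',
});

firebase
  .database()
  .ref('readings') // placeholder path for contributed data points
  .push({ sensorId: 'prototype-42', value: 23.5, recordedAt: Date.now() })
  .then(
    () => process.exit(0),
    (err) => {
      console.error(err);
      process.exit(1);
    }
  );
```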
Recently I started playing with node.js.
Since I'm developing a web app and was interested in the benefits of Google's V8 JavaScript engine, I started reading about all of this, and about node.js.
An example of a webapp which uses node.js: http://bodesigns.com/simple-web-app-using-node-js/
As you can see, it uses node.js to connect to a MySQL database. Some questions about this:
First of all, is this safe? I mean: the username and password are stored in the file. I know, in PHP it is too, but a PHP file is server-side. Is node.js server-side too?
Second, with this (I'm also building part of the web app with Google Maps stuff), could I replace the PHP code I have now that collects some data from a MySQL database? What are the (dis)advantages of replacing the PHP code with node.js code?
Last: can I run a node.js server asynchronously? I mean: I have an HTML page with a link. When I click it, it must "run" the node.js script, which collects data from the MySQL database, like in the example. So when the page loads, only an empty map should be visible. If you click something, markers have to show or hide. Is this possible?
My apologies for my bad English.
First of all, is this safe?
node.js runs on the server; the code doesn't need to go to the client, so it's safe.
What are the (dis)advantages of replacing the PHP-code with node.js code?
Do you need to rewrite code that already works? If you're writing new code, then PHP vs. node.js is a completely different, application-specific discussion. For generic applications, they are both suitable.
Is this possible?
Yes, however it's also possible with PHP.
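To make that last point concrete, the usual pattern is a small HTTP endpoint on the Node side plus an asynchronous request from the page when the link is clicked. A rough sketch, with placeholder connection settings and table/column names:

```js
// Server side: an endpoint that returns marker data from MySQL as JSON.
// Connection settings, table name, and column names are placeholders.
const express = require('express');
const mysql = require('mysql');

const app = express();
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'maps',
});

app.get('/markers', (req, res) => {
  pool.query('SELECT id, lat, lng, label FROM markers', (err, rows) => {
    if (err) return res.status(500).json({ error: 'query failed' });
    res.json(rows); // the browser receives plain JSON it can turn into map markers
  });
});

app.listen(3000);
```

On the page itself, a click handler can then request /markers, parse the JSON, and add or hide the Google Maps markers without reloading, so the map stays empty until the user clicks.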
I am looking into the simplest way to integrate Wikipedia into a node.js app.
The requirements are to be able to search for entries and find entities in each entry.
Any known existing libs/methods for that?
Thanks
There's a newly available open-source parser for wiki text (http://sweble.org/) that might be useful if you roll your own solution. Of course, that would require you to download the Wikipedia data dump, parse it, and store the entities in a DB.
You could also look at DBpedia (http://dbpedia.org/About), though that would require integrating the RDF stack into your app (either running a local RDF repository or communicating with the often flaky online version via SPARQL).
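If you try DBpedia without a local RDF store, its public SPARQL endpoint can return JSON directly. A rough sketch of querying it from Node (the endpoint is real, but the specific resource and query here are only an illustration, and the service can indeed be flaky):

```js
// Sketch: querying the public DBpedia SPARQL endpoint and printing English labels.
const https = require('https');

const query = `
  SELECT ?label WHERE {
    <http://dbpedia.org/resource/Wikipedia> rdfs:label ?label .
    FILTER (lang(?label) = "en")
  }`;

const url =
  'https://dbpedia.org/sparql?format=' +
  encodeURIComponent('application/sparql-results+json') +
  '&query=' +
  encodeURIComponent(query);

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    // Standard SPARQL JSON results layout: results.bindings is an array of rows.
    JSON.parse(body).results.bindings.forEach((row) =>
      console.log(row.label.value)
    );
  });
});
```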
One easy approach is to use a search engine API and restrict it to site:wikipedia.org, e.g.:
http://www.google.com/search?q=node.js+site%3Awikipedia.org
I've found that can work really well.
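One concrete way to do that programmatically (my assumption, not something named above) is Google's Custom Search JSON API, which accepts a site restriction. A minimal sketch with placeholder credentials, on Node 18+ where fetch is built in:

```js
// Sketch: Google Custom Search JSON API from Node, limited to wikipedia.org.
// GOOGLE_API_KEY and SEARCH_ENGINE_ID are placeholders created in the Google API console.
const apiKey = process.env.GOOGLE_API_KEY;
const cx = process.env.SEARCH_ENGINE_ID;

const url =
  'https://www.googleapis.com/customsearch/v1' +
  `?key=${apiKey}&cx=${cx}&siteSearch=wikipedia.org&q=${encodeURIComponent('node.js')}`;

fetch(url)
  .then((res) => res.json())
  .then((data) => data.items.forEach((item) => console.log(item.title, item.link)))
  .catch(console.error);
```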
The spider module, which lets you scrape using jQuery, is fantastic:
https://github.com/mikeal/spider
Mikeal is the man
Presumably you'd be using this for a side (personal) project, though. Not sure how kosher it is to run wild on Wikipedia with a scraper.