Is it possible to use the Gatsby static site generator with Reactrb? - opalrb

So, I'm basically still pretty new to the whole npm/react.js (let alone react.rb) ecosystem, and I'm wondering if it would be possible to use reactrb with the gatsby static site generator.
I've been attempting to get Opal/Reactrb support through opal-webpack, but have been running into some issues (see this issue for some backstory: https://github.com/cj/opal-webpack/issues/36). Specifically, where I got stuck was getting it to play nicely with Bundler.
Is combining reactrb components with gatsby something that's even feasible? I'm hoping the answer is yes.

Sorry for the very late response. Reactrb has been renamed to ruby-hyperloop, and yes, you can certainly use it with Gatsby or any other static site generator. The Hyperloop website is built with Middleman, for example.
The best way to integrate Hyperloop into a static site generator is to use Hyperloop.JS https://github.com/ruby-hyperloop/hyperloop-js, which has no server footprint at all.
Please see the Hyperloop website for examples and tutorials: http://ruby-hyperloop.io/

You can fetch data into Gatsby from any kind of source. You need to create a source plugin. The answer from #BarrieH is accurate, but could be slightly misleading.
You cannot query directly from an external GraphQL API into a component. Gatsby works by loading all your data into its own nodes system, then you pull data from those nodes into your components. This is what allows Gatsby to compile your data to static JSON files on disk, pre-fetch data for other pages, and so on.
Here are the relevant docs:
https://www.gatsbyjs.org/docs/create-source-plugin/
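For reference, here is a minimal sketch of what a source plugin's gatsby-node.js can look like, assuming a recent Gatsby version (older versions used boundActionCreators instead of actions); the endpoint URL, the node-fetch dependency and the MyExternalPost node type are placeholders:

// gatsby-node.js of a hypothetical source plugin
const fetch = require('node-fetch');

exports.sourceNodes = async ({ actions, createNodeId, createContentDigest }) => {
  const { createNode } = actions;

  // Pull records from the external API (placeholder URL)
  const posts = await fetch('https://example.com/api/posts').then(res => res.json());

  // Turn each record into a Gatsby node so pages can query it via GraphQL
  posts.forEach(post => {
    createNode({
      ...post,
      id: createNodeId('my-external-post-' + post.id),
      parent: null,
      children: [],
      internal: {
        type: 'MyExternalPost',
        contentDigest: createContentDigest(post),
      },
    });
  });
};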

Related

Basic use of server side API and passing server side variable to client side

I've just started my IT degree and I'm a beginner to the use of APIs (and forums like this), so I am truly sorry if my question is too vaguely explained or if it is just plain stupid :). On top of that, I'm not a native English speaker :P. Okay, so I'm trying to use the Google Trends API, which I installed on my server with PuTTY by running sudo npm install google-trends-api (it can be found here: https://www.npmjs.com/package/google-trends-api#installation). As I understand it, this is a server-side API, so the scripts that I write with the methods it provides will not run in a browser the way normal JS files do. There is an example that makes use of the API that I found on that page, which is as follows:
var googleTrends = require('google-trends-api');

// Fetch the current hot trends for the US and log them
googleTrends.hotTrends('US')
  .then(function (results) {
    console.log(results);
  })
  .catch(function (err) {
    console.log(err);
  });
This outputs a list of 20 items to the console when I run it in Node.
I would like to know if there is a way to assign those results to a variable and then use that variable in a normal JavaScript script inside an HTML file. I don't know anything about Node.js and the like, and I would like to actually do some research instead of asking here, but I was going to use a different approach to get this information, and now I've had to change my plans and don't have enough time. Given that I consider this a fairly easy problem to solve (maybe?), I would really appreciate it if someone could walk me through the basics of each step. Thanks :) and have a nice day.
Your question is quite broad. Node.js is Chrome's V8 engine bundled with some libraries to do I/O and networking. This enables us to run JavaScript outside of the browser and to create backend services or servers in general (as in your case). I hope that you are aware of this difference :)
The first thing you have to do is have a look at Express.js and create a simple server. It will not be more than 20 lines of code. Then you can enrich it with more stuff, like a template engine (Handlebars.js, Jade, etc.). You have to enable the server to serve static files, which will eventually be your JS, CSS and image files. With this simple server in place you will be able to serve a plain HTML page in the first place. On top of that there is the client-side JavaScript that you have to write, and this is where you can use the module above. Unfortunately, you are not able to use this module directly in a JavaScript file that runs in the browser. To be able to use it there, you have to transcompile it into JavaScript that the browser understands. Remember that the browser does not understand the require statement, and some old browsers may have issues with the promises that this module uses. These are the things that need to be compiled. You have to use a tool like Browserify for this, and the compiled file it produces must be included in the scripts of your HTML page.
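As a rough illustration of that first step, here is a minimal sketch of such an Express server serving static files (Express is assumed to be installed; the public folder and port are placeholders):

// server.js - a bare-bones Express server that serves static files
var express = require('express');
var app = express();

// Serve your HTML, client-side JS, CSS and images from ./public
app.use(express.static('public'));

app.listen(3000, function () {
  console.log('Server listening on http://localhost:3000');
});

The browserified bundle mentioned above would then simply be one of the scripts referenced by the HTML page inside public.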
Maybe there are quite a lot of concepts here that you are not aware of or do not fully understand, but spend a bit of time getting to grips with them.
P.S.: I've replied under the assumption that the google-trends-api module does not use anything that is specific to Node.js, like the file system, for example.

Yesod - shared types between server and client

I'm used to working with Dart, where sharing types between server and client is as simple as importing the relevant packages into your project.
Can something similar be accomplished with Yesod/Haskell? Should I use GHCJS for the client? Maybe Elm? The goal is not having to worry about the data getting mangled in transit between server and client - and also not having to write a single line of JS. :o)
I haven't been able to find any good, beginner friendly docs on how to best tackle this challenge using Haskell. I suspect I just haven't looked in the right places. Any and all help is more than welcome.
To achieve this with GHCJS you can just build your project out of three core packages in this fashion:
frontend - something based on ghcjs-dom, I like Reflex-dom
backend - use your favorite framework, I like Snap, Yesod should work just the same
shared - code shared between frontend and backend
Where frontend and backend both depend on shared, of course. The frontend is compiled with GHCJS, the backend with GHC.
If you would like to see a complete example, I would highly recommend studying hsnippet. Take a look at WsApi.hs, where a set of upstream and downstream messages is defined. All the JSON instances are derived in one place and imported in both frontend and backend.
Hsnippet uses websockets. This is not a requirement of course. You could use regular XHR in your own app. The principle stays the same. You define your API and serialization instances (usually JSON) in the shared package and import the relevant modules in both frontend and backend.
Personally, I also share validation code, database entity definitions generated with persistent, etc. Once it is set up, sharing additional code is mostly a matter of copy-pasting it into one of the shared modules and then importing it wherever it is needed.

How to efficiently update the API when Swagger spec file is updated? (express, nodejs)

I'm trying to set up a Node.js/Express boilerplate for my new project, and this time I want to try a doc-driven flow. I've checked out a couple of packages like swagger-node, swaggerize-express, etc. They all provide great functionality.
However, I don't see anything that supports incremental scaffolding when the Swagger file is updated. That means when the spec changes, I have to manually check the diff and manually add or modify the corresponding code. That doesn't sound cool.
Could anyone share something that is more reasonable? Thanks!
Edit:
After trying some frameworks, I decided to use swagger-express-middleware. This framework offers a convenient way to automatically check routes/parameters for your service.
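For reference, here is a rough sketch of how swagger-express-middleware can be wired into an Express app, assuming its createMiddleware-style entry point (the spec filename and port are placeholders; check the package docs for the exact options):

// server.js
var express = require('express');
var createMiddleware = require('swagger-express-middleware');

var app = express();

// Load the Swagger spec and mount the request parsing/validation middleware
createMiddleware('swagger.yaml', app, function (err, middleware) {
  if (err) { throw err; }

  app.use(
    middleware.metadata(),
    middleware.parseRequest(),
    middleware.validateRequest()
  );

  app.listen(8000, function () {
    console.log('API listening on port 8000');
  });
});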
You can use tools like swagger-maven-plugin to incrementally rebuild your server code, which means reading from your API definition and updating/building code as necessary. There are SaaS products like SwaggerHub which enable this as well, by merging code and pushing to Git.

Securing the source code in a node-webkit desktop application

First things first: I have seen nwsnapshot, and it's not helping.
I am building an inventory management system as a desktop app using node-webkit. The project is built with CompoundJS (an MVC JavaScript library), which has a definite folder structure (you know, MVC) and multiple JavaScript files inside those folders.
The problem is that nwsnapshot allows the app to have only a single snapshot file, but the logic of the application is spread over all those folders in different JavaScript files.
So how do I secure my source code before shipping it to the client? Or is there any other workaround or smarter way (yes, I know about obfuscation)?
You can use the node-webkit command nwsnapshot to compile the JavaScript code into a binary, which will be loaded into the app without specifying any JS file:
nwsnapshot --extra-code application.js application.bin
In your package.json add this:
"snapshot": "application.bin"
It really depends on what you mean by "secure".
You can obfuscate your javascript code fairly well (as well as potentially improve performance) by using the Google Closure Compiler.
I'm not aware of any off-the-shelf solutions to encrypt/decrypt your javascript, and honestly I would question the need for that.
Some people think they need to make it impossible to view their source code, because they're used to dealing with compiled languages where you only ship binaries to users. The fact is, reverse-engineering that binary code was never as difficult as some people think it is, so if there's any financial incentive, there is practically no difference between shipping source code and the traditional shipping of binaries.
Some languages have offered genuine encryption of deployed assets, such as Microsoft's SLPS. It seems to me that the market for this was so small that Microsoft gave it to a partner (just my view). The truth is that most customers are not interested in taking your source code; they're far more interested in your ability to service and support that code in an efficient manner, while they get on with their job.
You may consider merging the JS files into one during the build process and compiling that.
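As a rough sketch of that approach using Browserify's Node API (app/index.js is a hypothetical entry point that requires the rest of the MVC code; any other bundler or plain concatenation would work just as well):

// build.js - bundle all application JS into one file for nwsnapshot
var fs = require('fs');
var browserify = require('browserify');

browserify('app/index.js')
  .bundle()
  .pipe(fs.createWriteStream('application.js'))
  .on('finish', function () {
    // application.js can now be compiled with:
    //   nwsnapshot --extra-code application.js application.bin
    console.log('Wrote application.js');
  });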

NoSQL/MongoDB Style Query Engine in Node.js

I've built a static website generator which more or less converts markdown documents to html pages. Documents can have tags, which are useful for discovering related documents - and thus is the requirement for a query engine.
Right now I'm using MongoDB, but given that the application is coded in Node.js, that MongoDB support on Node.js hosts is extremely scarce (so far no.de is the only host I know of that supports MongoDB), and that a static website generator has absolutely no need for data persistence, I'd like to remove MongoDB and just keep the query engine.
Are there any MongoDB/NoSQL like query engines coded natively in Node.js/javascript? Or is there a better solution I haven't thought of yet... :S
Thanks guys.
Edit: If there is no such thing, who would like to build it with me? Post a comment if so :)
I'd check out JSONSelect, which uses CSS selectors for querying JS objects.
I created my own in CoffeeScript for use on the server side with Node.js and on the client side in web browsers.
It supports all the same queries as MongoDB. Find it here:
https://github.com/bevry/query-engine
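To illustrate the general idea (this is a hypothetical plain-JavaScript sketch of a MongoDB-style matcher over in-memory documents, not query-engine's actual API):

// A tiny MongoDB-style matcher for in-memory documents
function matches(doc, query) {
  return Object.keys(query).every(function (key) {
    var condition = query[key];
    if (condition && condition.$in) {
      // e.g. { tags: { $in: ['node'] } } - any overlap counts as a match
      var value = [].concat(doc[key]);
      return condition.$in.some(function (v) { return value.indexOf(v) !== -1; });
    }
    return doc[key] === condition; // plain equality otherwise
  });
}

var docs = [
  { title: 'Intro to Node', tags: ['node', 'javascript'] },
  { title: 'Static sites', tags: ['markdown'] }
];

// Find documents sharing the 'node' tag
var related = docs.filter(function (doc) {
  return matches(doc, { tags: { $in: ['node'] } });
});
console.log(related); // [{ title: 'Intro to Node', ... }]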
RavenDB has a REST API and jQuery plugin available.
For example see http://andreasohlund.net/2011/02/19/accessing-ravendb-using-jsonp/
Take a look at libgit2, which is a C library for Git, and also the gitteh module for Node.js. This will give you a Node wrapper around a Git library; now you can have a local Git repo that saves a versioned copy of your static files and serves them up via Node.js. What more could you ask for? Plus, push/pull from GitHub is no problem: it knows the Git protocols.
I haven't built this myself, but I'm happy to help if you want to do this.
