How to change Netlify CMS for Strapi CMS?

I'm new to the frontend world. I have some knowledge of React and GraphQL, which is why I've decided to try implementing a test blog with Gatsby, as it seems pretty popular and easy to use.
I also wanted to get my hands on Material UI, so I'm using this Gatsby starter: https://www.gatsbyjs.org/starters/Vagr9K/gatsby-material-starter
This starter seems to include an integration with Netlify CMS, but I want to change that and start using Strapi CMS so I can manage the content there.
Any idea on how to do this?

There's a lot of stuff in your question; I'll try to answer it step by step. If you need more details (how to create pages, etc.), please let me know and I will update my answer.
If you want to change your source from Netlify CMS to Strapi, you need to set it up in your gatsby-config.js, replacing the gatsby-plugin-netlify-cms plugin with something like this:
{
  resolve: `gatsby-source-strapi`,
  options: {
    apiURL: `http://localhost:1337`,
    queryLimit: 1000, // Defaults to 100
    contentTypes: [`article`, `user`],
    // If using single types, place them in this array.
    singleTypes: [`home-page`, `contact`],
    // Possibility to log in with a Strapi user, when content types are not publicly available (optional).
    loginData: {
      identifier: "",
      password: "",
    },
  },
},
Note that when using starters you'll have to install the plugins you need and remove the unnecessary ones, in order to reduce the bundle size and improve performance.
The next step is to create pages from your source CMS (articles, posts, pages, etc.) using GraphQL. Maybe this blog post helps you. But as a short summary, you need to create queries in your gatsby-node.js to retrieve data from Strapi and create pages using Gatsby's API.
The idea is the same as in your starter; however, instead of using gatsby-source-filesystem and allMarkdownRemark in your page creation, you will use the objects provided by Strapi. You can inspect the queries and the available objects by running gatsby develop and visiting localhost:8000/___graphql.
Keep in mind that you always query static data (i.e. pre-downloaded data) from your sources: when you run the develop command, the data is downloaded and made accessible via GraphQL.
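As a rough sketch (not taken from the starter; the article content type, the slug field, and the src/templates/article.js template path are assumptions), the page creation in gatsby-node.js could look like this:

// gatsby-node.js -- minimal sketch, assuming an `article` content type in Strapi
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  // Query the nodes that gatsby-source-strapi downloaded at build time.
  const result = await graphql(`
    {
      allStrapiArticle {
        nodes {
          id
          slug
        }
      }
    }
  `);

  // Create one page per article, rendered by a hypothetical template component.
  result.data.allStrapiArticle.nodes.forEach((node) => {
    createPage({
      path: `/articles/${node.slug}`,
      component: require.resolve(`./src/templates/article.js`),
      context: { id: node.id },
    });
  });
};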
You can check for further information in its starter repository.

Related

Contentful Content Migration API try/catch on field creation?

We are trying to set up a workflow for delivering content model changes to our other environments (stage & prod).
Right now, our approach is this:
1. Create a new Contentful field as a migration script using the Contentful CLI.
2. Run the script in local dev to make sure the result is as desired, using contentful space migration migrations/2023-01-12-add-field.ts.
3. Add the script to Git in the folder migrations/[date]-[description].js.
4. When releasing to prod, run all scripts in the migrations folder, in order, as part of the build process (see the sketch after this list).
5. When the folder contains "too many" scripts, and we are certain all changes are applied to all envs, manually remove the scripts from Git and start over with an empty folder.
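A minimal sketch of step 4, assuming the migration scripts live in a migrations/ folder, filenames start with a date, and the Contentful CLI from step 2 is available on the build machine (the environment variable and the CLI flags should be checked against your CLI version):

// run-migrations.js -- run every migration script in the folder, in chronological order
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const dir = path.join(__dirname, 'migrations');

// Filenames begin with a date, so a lexicographic sort gives chronological order.
const files = fs.readdirSync(dir).sort();

for (const file of files) {
  console.log(`Running migration ${file}`);
  execSync(
    `contentful space migration --space-id ${process.env.CONTENTFUL_SPACE_ID} --yes ${path.join(dir, file)}`,
    { stdio: 'inherit' }
  );
}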
Where it fails
But between points 4 and 5 there will be cases where a script has already been run in an earlier release, and that throws an error.
I would like the scripts to continue more gracefully without throwing an error, but I can't find support for it in the space migration docs. I have tried wrapping the code in try/catch without any luck.
Contentful recommends using the Content Migration API over the Content Management API since it is faster. I guess we could use the Content Management API, but at the same time we want to follow best practice.
Are we missing something here?
I would like to do something like:
module.exports = function (migration) {
  // Create a new category field in the blog post content type.
  const blogPost = migration.editContentType('blogPost')
  if (blogPost.fieldExists('testField')) {
    console.log('Field already exists')
  } else {
    blogPost.createField('testField').name('Test field').type('Symbol')
  }
}
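(Not part of the question, just a hedged sketch of one possible direction: the migration function also receives a context object with a makeRequest helper that can query the Content Management API, so an existence check could be done against the content type before creating the field. The response shape assumed below should be verified against the CMA docs.)

// Sketch only: guard field creation by asking the CMA whether the field already exists.
module.exports = async function (migration, { makeRequest }) {
  // Assumption: the content type response has a `fields` array with `id` values.
  const contentType = await makeRequest({
    method: 'GET',
    url: '/content_types/blogPost',
  });

  const hasField = contentType.fields.some((field) => field.id === 'testField');

  if (!hasField) {
    const blogPost = migration.editContentType('blogPost');
    blogPost.createField('testField').name('Test field').type('Symbol');
  }
};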

How to handle Contentful content data in Gatsby

I'm interested in using Gatsby to build a Netlify static site using content from Contentful
Netlify has this nice getting started Gatsby guide:
https://www.netlify.com/blog/2016/02/24/a-step-by-step-guide-gatsby-on-netlify
But I'm a bit unsure of how to bring Contentful into the mix. Do I need to write scripts to convert my Contentful content into Gatsby 'markdown'?
Any ideas, links appreciated!
Since this question was posted, an official Contentful plugin's been added to Gatsby's collection (official as in created by Gatsby team, not Contentful): https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-source-contentful
An example site's src code is here: https://github.com/gatsbyjs/gatsby/tree/master/examples/using-contentful
The plugin processes markdown via gatsby-transformer-remark and produces the resulting HTML, which you can access via Gatsby's GraphQL server with a query like this one from the example project:
contentfulProduct(id: { eq: $id }) {
  productName {
    productName
  }
  productDescription {
    childMarkdownRemark {
      html
    }
  }
  price
}
You can use the plugin to connect to the Content API (for published content/assets) and/or the Preview API (for both published and draft content/assets).
We use NODE_ENV to pull from the Preview API in dev and the Content API in production.
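A minimal sketch of that switch, assuming the standard gatsby-source-contentful options (the environment variable names are placeholders):

// gatsby-config.js -- sketch: Preview API in development, Content API in production
const isDev = process.env.NODE_ENV === 'development';

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-contentful`,
      options: {
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        // Preview and delivery tokens are separate credentials.
        accessToken: isDev
          ? process.env.CONTENTFUL_PREVIEW_TOKEN
          : process.env.CONTENTFUL_DELIVERY_TOKEN,
        // Point at the Preview API host only in development.
        ...(isDev ? { host: `preview.contentful.com` } : {}),
      },
    },
  ],
};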
Here's the script I am using to pull down data from Contentful:
https://gist.github.com/ivanoats/e79ebbd711831be2536d1650890055c4
I run this via an npm run script before gatsby build.
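For illustration, the npm run script wiring could look something like this package.json excerpt (the script and file names are hypothetical):

{
  "scripts": {
    "sync-contentful": "node ./scripts/contentful-to-pages.js",
    "build": "npm run sync-contentful && gatsby build"
  }
}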
I would love to work on a plugin or get ideas on better architecture for this process.
I wrote a post on this architecture in more detail on the Aerobatic blog.
Right now the best option is to write a script which syncs content from Contentful to your Gatsby site's pages directory.
There are plans, however, for adding support within Gatsby to make this happen semi-automatically. Early days still! See this issue for more: https://github.com/gatsbyjs/gatsby/issues/324

How should I use Swagger with Hapi?

I have a working ordinary Hapi application that I'm planning to migrate to Swagger. I installed swagger-node using the official instructions, and chose Hapi when executing 'swagger project create'. However, I'm now confused because there seem to be several libraries for integrating swagger-node and hapi:
hapi-swagger: the most popular one
hapi-swaggered: somewhat popular
swagger-hapi: unpopular and not that active but used by the official Swagger Node.js library (i.e. swagger-node) as default for Hapi projects
I thought swagger-hapi was the "official" approach, until I tried to find information on how to do various configurations on Hapi routes (e.g. authorization, scoping, etc.). It also seems that the approaches are fundamentally different: swagger-hapi takes the Swagger definition as input and generates the routes automatically, whereas hapi-swagger and hapi-swaggered take a similar approach to each other, only generating Swagger API documentation from plain old Hapi route definitions.
Considering the number of contributors and the number of downloads, hapi-swagger seems to be the way to go, but I'm unsure how to proceed. Is there an "official" Swagger way to set up Hapi, and if there is, how do I set up authentication (preferably by using hapi-auth-jwt2, or another similar JWT solution) and authorization?
EDIT: I also found swaggerize-hapi, which seems to be maintained by PayPal's open source kraken.js team, which indicates that it might have some kind of corporate backing (always a good thing). swaggerize-hapi seems to be very similar to hapi-swagger, although the latter seems to provide more out-of-the-box functionality (mainly Swagger Editor).
Edit: Point 3 from your question, and understanding what swagger-hapi actually does, is very important. It does not directly serve the swagger-ui HTML. It is not intended to, but it enables the whole Swagger idea (which the projects in points 1 and 2 actually reverse a bit). Please see below.
It turns out that when you are using swagger-node and swagger-hapi you do not need any of the other packages you mentioned, except for using swagger-ui directly (which is used by all the others anyway; they wrap it in their dependencies).
I want to share my understanding so far of this hapi/swagger puzzle; hopefully the 8 hours I spent can help others as well.
Libraries like hapi-swaggered, hapi-swaggered-ui and hapi-swagger all follow the same approach, which might be described like this:
You document your API while you are defining your routes
They sit somewhat apart from the main idea of swagger-node and the boilerplate hello_world project created with swagger-cli, which you mentioned you use.
While swagger-node and swagger-hapi (note that it's different from hapi-swagger) say:
You define all your API documentation and routes in a single centralized place: swagger.yaml,
and then you just focus on writing controller logic. The boilerplate project provided with swagger-cli already exposes this centralized swagger.yaml as JSON through the /swagger endpoint.
Now, because the swagger-ui project, which all the above packages use for showing the UI, is just a bunch of static HTML, you have two options for using it:
1) Self-host this static HTML from within your app.
2) Host it in a separate web app, or even load the index.html directly from the file system.
In both cases you just need to feed swagger-ui your Swagger JSON, which as mentioned above is already exposed by the /swagger endpoint.
The only caveat if you choose option 2) is that you need to enable CORS for that endpoint, which happens to be very easy: just change your default.yaml to also make use of the cors bagpipe. Please see this thread for how to do this.
As @Kitanotori said above, I also don't see the point of documenting the code programmatically. The idea of describing everything in one place and making both the code and the documentation engine understand it is great.
We have used Inert, Vision, hapi-swagger.
server.ts
import * as Inert from '@hapi/inert';
import * as Vision from '@hapi/vision';
import Swagger from './plugins/swagger';
...
...
// hapi server setup
...
const plugins: any[] = [Inert, Vision, Swagger];
await server.register(plugins);
...
// other setup
./plugins/swagger
import * as HapiSwagger from 'hapi-swagger';
import * as Package from '../../package.json';

const swaggerOptions: HapiSwagger.RegisterOptions = {
  info: {
    title: 'Some title',
    version: Package.version
  }
};

export default {
  plugin: HapiSwagger,
  options: swaggerOptions
};
We are using Inert, Vision and hapi-swagger to build and host swagger documentation.
We load those plugins in exactly this order, do not configure Inert or Vision and only set basic properties like title in the hapi-swagger config.
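On the authentication part of the question, this is not from the answer above, just a hedged sketch following the usual hapi-auth-jwt2 README pattern (option names such as validate vs. validateFunc differ between versions, and the secret and lookup logic here are placeholders):

// Sketch: register hapi-auth-jwt2 next to the Swagger plugins (assumes the `server` from server.ts above).
import * as HapiAuthJwt2 from 'hapi-auth-jwt2';

// `decoded` is the verified JWT payload; decide here whether the user is still valid.
const validate = async (decoded: any, request: any, h: any) => {
  return { isValid: Boolean(decoded && decoded.id) };
};

await server.register(HapiAuthJwt2);

server.auth.strategy('jwt', 'jwt', {
  key: process.env.JWT_SECRET,             // secret used to sign the tokens
  validate,                                // called validateFunc in older versions
  verifyOptions: { algorithms: ['HS256'] }
});

server.auth.default('jwt');                // require a valid JWT on every route unless a route opts out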

What is a recommended way to get data into your meteor template from the front-end for a famous surface?

I've been following along with the book Discover Meteor from https://www.discovermeteor.com/ and I have built the tutorial project called 'Microscope'.
This uses iron-router and the Meteor templating system to render the front end. I want to redo this project using famo.us for the front end, but I am unclear on how to do so.
I am aware of a package called famono (mrt add famono). Using this package I can integrate famo.us and draw surfaces to the screen in a Meteor project. It also allows you to render templates to the screen.
But I am confused about how to redo the project so that the router renders a famous surface with the data.
Also I am wondering if the templates will still be reactive.
If someone could provide insight on how to redo the 'Microscope' project to use famo.us on the front-end I would greatly appreciate it!
Thanks
UPDATE (to be more specific)
I have been trying to figure out how to integrate famous with templates and routing, and I have no clue how to do it.
I use iron-router to handle my routing which chooses which template and data to render like so:
Router.map ->
  @route 'posts',
    path: '/',
    data: ->
      Posts.findOne()
So this will load up the posts template with Posts.findOne() data.
But I know with famous I can generate surfaces from templates on the front end like so:
background = new Surface
  template: Template.post
  data: ??? (Posts.findOne()) ???

mainContext.add(background)
Because JavaScript is what will load the final template into the view, what is the recommended way for me to get the data for that template? Should I query the database from the front end by setting up special subscriptions?
Typically I render the data into the page from the router on the server but...
with famous, I just have to load the main template and let famous load the rest of the templates. The only thing left is getting the data for the other templates. What is recommended?
I would start by looking at https://github.com/gadicc/meteor-famous-components/. That package will do all the work for you if you want.
I have never used the Surface template argument, but I believe that is a one-time load and will not update on data invalidation (data change).
Or you can take a look at working examples:
https://github.com/sayawan?tab=repositories

What's a recommended way to put CouchDB views under source control?

I'm writing a node CRUD app that requires a few CouchDB views (I'm using express and cradle).
I've got the node app itself controlled with git, but my DB views are currently uncontrolled.
What's the recommended way to put these under source control? I don't want to put the entire database (including data) under source control.
Take a look at couchapp, http://couchapp.org/. You can use that to push your version-controlled design docs to a database.
Possibly also useful: CouchApp can also push regular documents into the database, for example configuration or demo docs. To do that, put the files, in JSON format, in a '_docs' folder (at the same level as 'lists', 'shows', etc.).
File: 'any-configure.json'
{
  "_id": "any-configure",
  "fieldA": "...",
  "fieldB": "...",
  ...
}
As pointed out, using couchapp can make it easier to work with design documents.
I have implemented a similar approach in a Java project; there is an example here, along with the class that manages these documents.
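Since the question mentions cradle, here is a minimal, plugin-free sketch of another option (not part of the answers above): keep each design document as an ordinary JS module in the repo and push it on deploy. The database name, file layout, and view code are placeholders.

// designs/posts.js -- the design document lives in Git as plain JavaScript
module.exports = {
  views: {
    byDate: {
      // CouchDB stores view functions as strings, so keep the map function as a string.
      map: "function (doc) { if (doc.type === 'post') emit(doc.createdAt, doc); }"
    }
  }
};

// push-designs.js -- overwrite _design/posts with the version-controlled copy
var cradle = require('cradle');
var db = new (cradle.Connection)().database('myapp'); // database name is a placeholder

// Note: if _design/posts already exists, CouchDB needs its current _rev;
// cradle's db.merge can be used for updates instead of db.save in that case.
db.save('_design/posts', require('./designs/posts'), function (err) {
  if (err) throw err;
  console.log('Design document updated');
});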
