Gatsby has documentation on how to set up a preview server here.
The problem is that it requires a server running 24/7 listening for requests. I would like to achieve the same result in a serverless setup (AWS Lambda, to be specific), since we would need a preview only rarely.
The context here is using Gatsby with WordPress as a headless data backend, and I want to implement a custom preview link in WordPress for previewing posts before publishing them.
So far, there are two main setbacks:
Size: currently, node_modules for a Gatsby starter with WordPress weighs in at 570 MB.
Speed: being stateless means every preview request would run gatsby develop again.
I honestly don't know a solution for the size problem; I'm not sure how to strip down the packages.
As for speed, maybe there's a low-level Gatsby API function to render a page directly to HTML? For example, a Node.js Lambda handler could look like this (buildPageHTML is a hypothetical function I'm trying to find):
import buildPageHTML from "gatsby"

exports.handler = async function (event) {
  const postID = event.queryStringParameters.postID
  return buildPageHTML(`/preview_post_by_id/${postID}`)
}
Any ideas on how to go about this?
Running Gatsby in an AWS Lambda
Try this lambda (from this beautiful tutorial):
import { Context } from 'aws-lambda';
import { link } from 'linkfs';
import mock from 'mock-require';
import fs from 'fs';
import { tmpdir } from 'os';
import { runtimeRequire } from '@/utility/runtimeRequire.utility';
import { deployFiles } from '@/utility/deployFiles.utility';

/* -----------------------------------
 *
 * Variables
 *
 * -------------------------------- */

const tmpDir = tmpdir();

/* -----------------------------------
 *
 * Gatsby
 *
 * -------------------------------- */

function invokeGatsby(context: Context) {
  const gatsby = runtimeRequire('gatsby/dist/commands/build');

  gatsby({
    directory: __dirname,
    verbose: false,
    browserslist: ['>0.25%', 'not dead'],
    sitePackageJson: runtimeRequire('./package.json'),
  })
    .then(deployFiles)
    .then(context.succeed)
    .catch(context.fail);
}

/* -----------------------------------
 *
 * Output
 *
 * -------------------------------- */

function rewriteFs() {
  const linkedFs = link(fs, [
    [`${__dirname}/.cache`, `${tmpDir}/.cache`],
    [`${__dirname}/public`, `${tmpDir}/public`],
  ]);

  linkedFs.ReadStream = fs.ReadStream;
  linkedFs.WriteStream = fs.WriteStream;

  mock('fs', linkedFs);
}

/* -----------------------------------
 *
 * Handler
 *
 * -------------------------------- */

exports.handler = (event: any, context: Context) => {
  rewriteFs();
  invokeGatsby(context);
};
Find the source here.
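A note on the two utility imports: runtimeRequire and deployFiles come from the tutorial's repo and aren't shown above. My reading (an assumption, not the tutorial's verbatim code) is that runtimeRequire exists to defer module resolution to runtime so the bundler doesn't inline Gatsby into the Lambda artifact; roughly:

// utility/runtimeRequire.utility.ts — hypothetical reconstruction:
// require a module at runtime, bypassing webpack's static resolution.
// __non_webpack_require__ is webpack's alias for the real Node.js require.
declare const __non_webpack_require__: NodeRequire;

export function runtimeRequire<T = any>(id: string): T {
  return typeof __non_webpack_require__ === 'function'
    ? __non_webpack_require__(id)
    : require(id);
}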
Related
I'm using Google Cloud Trace inside GKE. Everything appears to work fine if I import the library as an argument when running Node in the Docker image; however, if I import the client library in my code itself, I see no traces generated.
As per the documentation, Google Cloud Trace must be the first thing imported in the application. To manage this in TypeScript with import statements, I have moved it to a dedicated file, which I then import as the first module in my index.ts file.
Here's what that looks like:
index.ts:
// Trace MUST be the first import before any external modules
import './app/services/trace';
import 'source-map-support/register';
import makeApp from './app/app';
// ...
makeApp().then(app => app.listen(port));
app/services/trace.ts:
import * as TraceAgent from '@google-cloud/trace-agent';

if (process.env.NODE_ENV === 'production') {
  TraceAgent.start({
    ignoreUrls: ['/livez', '/healthz', '/metrics'], // ignore internal-only endpoints
    ignoreMethods: ['OPTIONS'],
    contextHeaderBehavior: 'ignore', // Ignore any incoming context headers. Stop outsiders forcing requests to be traced.
    samplingRate: 10,
    serviceContext: {
      service: 'my-service-name',
      version: process.env.VERSION || 'v0.0.0'
    }
  });
}

export default TraceAgent;
app/app.ts:
import express from 'express';
// ...
export default async function makeApp() {
  const app = express();
  // ...
  return app
}
My TypeScript target is configured as es2022 with module set to commonjs, and the resulting compiled index.js looks like this:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
// Trace MUST be the first import before any external modules
require("./app/services/trace");
require("source-map-support/register");
const app_1 = __importDefault(require("./app/app"));
//...
From what I understand, that means @google-cloud/trace-agent is indeed being imported first, and while it does import fine and the start method is called, I still see no traces being generated when it is imported like this in my code.
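For completeness, the agent's README recommends starting it with a plain require() as the very first statement of the entry point, before anything else is loaded. A sketch of that pattern as a fallback I could try (untested here; TypeScript's CommonJS emit preserves statement order, so the call lands ahead of every other require):

// index.ts — sketch: start the agent via require() before any import runs
if (process.env.NODE_ENV === 'production') {
  // eslint-disable-next-line @typescript-eslint/no-var-requires
  require('@google-cloud/trace-agent').start({
    samplingRate: 10, // same options as app/services/trace.ts
  });
}

import 'source-map-support/register';
import makeApp from './app/app';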
Here's the simplified example I made for this question.
Let's say I want to keep some state on the server side.
components/dummy.ts
console.log('init array')
let num: number = 0
const increment: () => number = () => num++
export { increment }
I also have two endpoints, p1 and p2, that I want to share that state.
pages/api/x/p1.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { increment } from '../../../components/dummy'
export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<number>
) {
  res.status(200).json(increment())
}
pages/api/x/p2.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { increment } from '../../../components/dummy'
export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<number>
) {
  res.status(200).json(increment())
}
These are the two APIs, and then I have some pages fetching the same state using getServerSideProps.
pages/x/p3.tsx
import { GetServerSideProps } from 'next'
import React from 'react'
import { increment } from '../../components/dummy'
interface CompProps {
  num: number
}

const Comp: React.FC<CompProps> = ({ num }) => <>{num}</>

export default Comp

export const getServerSideProps: GetServerSideProps = async ({}) => ({
  props: {
    num: increment(),
  },
})
pages/x/p4.tsx
import { GetServerSideProps } from 'next'
import React from 'react'
import { increment } from '../../components/dummy'
interface CompProps {
  num: number
}

const Comp: React.FC<CompProps> = ({ num }) => <>{num}</>

export default Comp

export const getServerSideProps: GetServerSideProps = async ({}) => ({
  props: {
    num: increment(),
  },
})
So basically, there are two issues.
On Dev (yarn dev)
Now when I hit api/x/p1 I get 0, then 1, 2, 3.
But when I then hit api/x/p2 I get 0, and p1 is also reset to this new value; from this point on, p1 and p2 share the same state. I can alternate between p1 and p2 and get a steady increment.
What I want to understand here is the nature of import.
How can I prevent the code from running again with each import coming from a new endpoint?
On Prod (yarn build && yarn start)
On prod it's better, because api/x/p1 and api/x/p2 share the same state.
But the pages using getServerSideProps, p3 and p4, share a state of their own, different from the one shared by p1 and p2.
So the /api routes and getServerSideProps each keep their own state, and it isn't shared between them.
I couldn't reproduce your issue.
I've created a sample project to check the module behavior, and confirmed that modules are imported only once, cached, and keep their state.
Check it out:
/* index.js */
import './mod1.js'
import './mod2.js'
/* mod1.js */
import { increment } from "./shared.js";

setInterval(() => {
  increment()
}, 2000);

/* mod2.js */
import { increment } from "./shared.js";

setInterval(() => {
  increment()
}, 2000);

/* shared.js */
let num = 0

const increment = () => {
  num++
  console.log(num)
}

export { increment }
node index.js
outputs:
1
2
3
4
5
6
7
8
^C
My guess: when you're running the app in development mode, Next.js compiles modules on demand, as they are needed. So, when you go to the first route, you can see in stdout (the VS Code console) Next.js printing logs while compiling files. When it finishes, a development-mode feature called hot reload automatically loads the freshly compiled module into memory. Then, when you go to the second route, the other module starts compiling, and when it's ready, Next.js hot-reloads this fresh module into memory. Unloading modules and reloading them this way can sometimes reset the state of the app, and I guess that is what's happening. To confirm, you can run the built app (next build command); there is no hot reload when running the compiled/bundled application, so no state will be reset.
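If you need the counter to survive those development-mode reloads anyway, a common workaround (my own sketch, with a made-up __counter key, not code from the question) is to park the state on globalThis, which belongs to the process rather than to any one module instance:

// components/dummy.ts — sketch: state parked on globalThis survives dev-mode
// module re-evaluation, because the property lives on the process, not the module
const g = globalThis as typeof globalThis & { __counter?: { num: number } }

if (!g.__counter) {
  console.log('init counter') // now logs only once per process
  g.__counter = { num: 0 }
}

export const increment = (): number => g.__counter!.num++

Since globalThis is per-process, this also papers over the production split where the /api bundle and the getServerSideProps bundle each evaluate their own copy of the module.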
I have a separate Webpack configuration repository, to keep the application boilerplate apart from the Webpack configuration. The application depends on the repository where the Webpack configuration lives. I made a binary in the Webpack repository so that, from the application, I could compile and run this configuration in development. The configuration combines a common configuration with the passed environment.
Problem: the scenario is beautiful, but it is not working when it comes to compiling and serving the application. Apparently my configurations are OK; I isolated them and tested them separately. And I'm also following the v4 documentation for the Node API.
As I have nowhere else to turn, I'm sorry if I'm not on the right platform; I am studying how to compile different configurations of the same application (boilerplate) using Webpack.
Link to the code.
I'd appreciate an example repo...
I came across several problems pointing to the libs of the webpack and webpack-dev-server packages. However, today I got what I wanted. I will share it here for future reference.
My goal was to be able to trigger the development or production environment from a Node script, involving the construction of multiple front-end applications, which was abstracted in webpack.config.
Now I can run mycli development in a shell, and this will trigger the build of the configuration for that environment.
#!/usr/bin/env node
// mycli.js

const webpack = require('webpack')
const WebpackDevServer = require('webpack-dev-server')
const webpackConfiguration = require('./webpack/webpack.config')

const args = (process.argv && process.argv.length > 2) ? process.argv.slice(2) : []
const mode = args.length > 0 ? args[0] : 'development'
const config = webpackConfiguration(mode)

/**
 * Minimum webpack configuration for cli validation.
 * @see {@link webpackConfiguration} for further information
 */
const minConfig = {
  entry: __dirname + '/test.js',
  mode,
  output: {
    path: '/dist',
    filename: 'bundle.js'
  }
}

/** @type {WebpackCompilerInstance} compiler */
const compiler = webpack(minConfig)

switch (config.mode) {
  case 'development':
    /**
     * Recommended by the documentation:
     * "If you're using dev-server through the Node.js API,
     * the options in devServer will be ignored. Pass the options
     * as a second parameter instead."
     * @see {@link https://v4.webpack.js.org/configuration/dev-server/#devserver} for further information.
     * @see {@link https://github.com/webpack/webpack-dev-server/tree/master/examples/api/simple} for an example
     */
    const devServerConfig = config.devserver;
    if (devServerConfig) delete config.devserver

    const devServerOptions = Object.assign({}, devServerConfig, {
      open: true,
      stats: {
        colors: true,
      },
    })

    const devserverCallback = (err) => {
      if (err) throw err
      console.log('webpack-dev-server listening...')
    }

    new WebpackDevServer(compiler, devServerOptions).listen(devServerConfig.port, devServerConfig.host, devserverCallback)
    break;

  case 'production':
    const compilerCallback = (err, stats) => {
      console.log(stats, err)
      if (err) throw err
      process.stdout.write(`Stats: \n${stats} \n`)
      console.log('Compiler has finished execution.')
    }

    compiler.run(compilerCallback)
    break;

  default:
    console.error('No matching mode. Try "development" or "production".')
    break;
}
problem
Currently, my package is developed in a way where, to import it, you need to decide on your target. So:
import * as myLib from 'mylib/node' // if I want to use the node implementation
import * as myLib from 'mylib/web' // if I want to use the web implementation
The functionality is identical; the implementation differs, though, because they use different APIs. I want to move to a single import that works for both Node and web. To do that, I changed my code to detect whether it's running in Node or on the web. This allows me to import it like this:
import * as myLib from 'mylib'
Which works. However, when I go to bundle some code using mylib with webpack (with the web target), it goes bonkers as it tries to bundle the Node.js implementation (it fails on bundling packages like worker_threads).
webpack.config.ts:
import * as path from 'path'
import { Configuration } from 'webpack'

const config: Configuration = {
  entry: './dist/index.js',
  target: 'web',
  output: {
    filename: 'mylib.web.js',
    path: path.resolve(__dirname, 'dist'),
    library: 'mylib',
    libraryTarget: 'umd'
  },
  mode: 'production'
}

export default config
question
How can I bundle such a package, or write my package in a way that supports both Node and web and bundles correctly?
edits
To clarify, I merged the web and Node implementations in the following manner:
Before:
// mylib/node
export const func = () => {
  // using worker_threads here
}

// mylib/web
export const func = () => {
  // using web worker here
}
After:
// mylib
export const func = () => {
  if (/* am-I-in-node test */) {
    // execute the worker_threads implementation
  } else {
    // execute the web workers implementation
  }
}
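One idea for the bundling side (a sketch, assuming webpack 5; not something I've confirmed for this package): stub the Node built-in out of the web build with resolve.fallback, so the worker_threads branch becomes dead code in the browser bundle while the runtime check keeps working in Node:

// webpack.config.ts — sketch: keep webpack from trying to bundle worker_threads
import * as path from 'path'
import { Configuration } from 'webpack'

const config: Configuration = {
  entry: './dist/index.js',
  target: 'web',
  resolve: {
    // webpack 5: resolve worker_threads to an empty stub in the web bundle
    fallback: { worker_threads: false },
  },
  output: {
    filename: 'mylib.web.js',
    path: path.resolve(__dirname, 'dist'),
    library: 'mylib',
    libraryTarget: 'umd',
  },
  mode: 'production',
}

export default config

An alternative worth naming: keep the two entry points and let consumers pick automatically via the package.json browser field (or conditional exports), so the web build never ships the Node branch at all.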
Below is my code; currently it works fine, but I want to optimize it to not load/download some resources (fonts, images, CSS, JS). I've read the API docs but I'm not able to find the related configs. I'm using WebdriverIO with PhantomJS as the browser.
'use strict';

var _ = require('lodash');
var webdriverio = require('webdriverio');
var cheerio = require('cheerio');

/**
 * Base class for browser based crawler.
 * To run this crawler you need to first run phantomJS with webdriver on localhost
 * ```
 * ./phantomjs --webdriver 4444
 * ```
 */
class BaseWebdriverIO {
  /**
   * Constructor
   * @param opts - webdriverio config http://webdriver.io/guide/getstarted/configuration.html
   */
  constructor(opts) {
    this.opts = _.defaults(opts || {}, {
      desiredCapabilities: {
        browserName: 'phantomjs'
      }
    });
  }

  /**
   * webdriver and parse url func
   * @param parseUrl
   * @returns {Promise}
   */
  parse(parseUrl) {
    console.log("getting url", parseUrl);
    return webdriverio.remote(this.opts)
      .init()
      .url(parseUrl)
      .waitForVisible('body')
      .getHTML('body', false, function (err, html) {
        if (err) {
          throw new Error(err);
        }
        this.end();
        return cheerio.load(html);
      });
  }
}

module.exports = BaseWebdriverIO;
I'm not able to find any documentation related to this.
Can anyone tell me how I can do that?
Edit/Update: I've found a working example that stops images from loading by setting phantomjs.cli.args, from here: https://github.com/angular/protractor/issues/150#issuecomment-128109354. Some basic settings have been configured and work fine; this is the modified desiredCapabilities settings object:
desiredCapabilities: {
  'browserName': 'phantomjs',
  'phantomjs.binary.path': require('phantomjs').path,
  'phantomjs.cli.args': [
    '--ignore-ssl-errors=true',
    '--ssl-protocol=any', // tlsv1
    '--web-security=false',
    '--load-images=false',
    //'--debug=false',
    //'--webdriver-logfile=webdriver.log',
    //'--webdriver-loglevel=DEBUG',
  ],
  javascriptEnabled: false,
  logLevel: 'verbose'
}
For CSS/font optimization, I found a question raised on Stack Overflow, "How can I control PhantomJS to skip download some kind of resource?", and the solution discussed there is something like this:
page.onResourceRequested = function (requestData, request) {
  if ((/http:\/\/.+?\.css/gi).test(requestData['url']) || requestData['Content-Type'] == 'text/css') {
    console.log('The url of the request is matching. Aborting: ' + requestData['url']);
    // request.abort();
    request.cancel();
  }
};
But I'm not able to trigger this function through WebdriverIO's desiredCapabilities config, i.e., onResourceRequested().
Can anyone tell me how I can call/define this function in my WebdriverIO script capabilities, or any other way? Thanks.
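As far as I know, GhostDriver doesn't let you attach onResourceRequested through capabilities, so per-request filtering like the CSS check above stays out of reach over WebDriver. What it does expose are PhantomJS page settings via the phantomjs.page.settings.* capability prefix, which at least covers images and JavaScript. A sketch of that prefix in use (the prefix is documented GhostDriver behavior; the exact values and the require path are illustrative):

// sketch: GhostDriver maps phantomjs.page.settings.* capabilities onto
// PhantomJS page.settings, so they can go straight into the constructor opts
var BaseWebdriverIO = require('./BaseWebdriverIO'); // path assumed

var crawler = new BaseWebdriverIO({
  desiredCapabilities: {
    browserName: 'phantomjs',
    'phantomjs.page.settings.loadImages': false, // skip images
    'phantomjs.page.settings.javascriptEnabled': false, // skip JS execution
    'phantomjs.page.settings.resourceTimeout': 5000 // drop slow assets (ms)
  }
});

crawler.parse('http://example.com/');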