The use case I'm trying to fulfill:
Admin adds an SVG along with new content in the CMS, specifying in the CMS which SVG goes with which content
CMS commits change to git (Netlify CMS)
Static site builds again
SVG is added inline so that it can be styled and/or animated according to the component in which it occurs
Now I can't figure out a clean way to add the SVG inline. My logic tells me that everything is available at build time (the SVGs are in the repo), so I should be able to simply inline the SVGs. But I don't know how to generically tell React about an SVG based on variables coming from the CMS content. I can import the SVG directly using svgr/webpack, but then I need to know the file name while coding, which I don't, since it's coming from the CMS. I can load the SVG using fs.readFileSync, but then the SVG gets lost when React executes client-side.
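For reference, the fs.readFileSync attempt looks roughly like this (a sketch only; the component and prop names are made up):

import fs from "fs";
import DOMPurify from "isomorphic-dompurify";

const CmsSvg = ({ urlPath }) => {
  // Works during the server/static render, but `fs` does not exist in the
  // browser bundle, so the SVG disappears once React re-renders client-side.
  const svgCode = DOMPurify.sanitize(
    fs.readFileSync("./public" + urlPath, "utf8")
  );
  return <span dangerouslySetInnerHTML={{ __html: svgCode }} />;
};

export default CmsSvg;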
I added my current solution as an answer, but it's very hacky. Please tell me there's a better way to do this with react!
Here is my current solution, but it's randomly buggy in dev mode and doesn't seem to play well with next.js <Link /> prefetching (I still need to debug this):
I. Server-Side Rendering
Read SVG file path from CMS data (Markdown files)
Load SVG using fs.readFileSync()
Sanitize and add the SVG in React
II. Client-Side Rendering
Initial GET /url response contains the SVGs (SSR worked as intended)
Read the SVGs out of the DOM using HTMLElement.outerHTML
When React wants to render the SVG which it doesn't have, pass it the SVG from the DOM
Here is the code.
import reactParse from "html-react-parser";
import DOMPurify from "isomorphic-dompurify";
import * as fs from "fs";

const svgs = {}; // { urlPath: svgCode }

const createServerSide = (urlPath) => {
  const path = "./public" + urlPath;
  let svgCode = DOMPurify.sanitize(fs.readFileSync(path, "utf8"));
  // add an id so the SVG can be found again client-side;
  // the unique identifier is the file path of the SVG in the git repository
  svgCode = svgCode.replace("<svg", `<svg id="${urlPath}"`);
  svgs[urlPath] = svgCode;
};

const readClientSide = (urlPath) => {
  const svgElement = document.getElementById(urlPath);
  svgs[urlPath] = svgElement.outerHTML;
};

const registerSVG = (urlPath) => {
  if (typeof window === "undefined") {
    createServerSide(urlPath);
  } else {
    readClientSide(urlPath);
  }
  return true;
};

const inlineSVGFromCMS = (urlPath) => {
  if (!svgs[urlPath]) {
    registerSVG(urlPath);
  }
  return reactParse(svgs[urlPath]);
};

export default inlineSVGFromCMS;
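Usage in a component then looks like this (a sketch; the component and prop names here are made up):

const Feature = ({ title, svgPath }) => (
  <section>
    <h2>{title}</h2>
    {/* svgPath comes from the CMS content, e.g. "/img/feature.svg" */}
    {inlineSVGFromCMS(svgPath)}
  </section>
);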
Related
We are a team of 5 developers working on a video rendering implementation, which consists of two parts:
A live video preview in the browser using Angular + Konva.
A Node.js (Node 14) serverless implementation (AWS Lambda container) using konva-node that pipes frames to ffmpeg to render an mp4 video in higher quality for later download.
Both approaches work for us. We have now extracted the parts of the animation that are the same for the frontend and backend implementations into an internal library and imported it in both. That also works nicely for most parts.
We noticed that konva-node was recently deprecated. The documentation says to use canvas + konva on Node.js instead, but that just doesn't work for us. Without konva-node we cannot create a stage without a 'container' value, and we can no longer create a raw image buffer, because stage.toCanvas() actually returns an HTMLCanvas, which does not have this functionality.
So what does konva-node actually do to konva API?
Is node.js still supported after deprecation of konva-node?
How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
backend (konva-node)
import konvaNode = require('konva-node');

this.stage = new konvaNode.Stage({
  width: stageSize.width,
  height: stageSize.height
});

// [draw stuff on stage here]

// create raw frames to pipe to ffmpeg
const frame = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
frontend (konva)
import Konva from 'konva';

this.stage = new Konva.Stage({
  width: stageSize.width,
  height: stageSize.height,
  // connect stage to html element in browser
  container: 'container'
});

// [draw stuff on stage here]
Finally, in an ideal world (if we could just use Konva in the frontend and backend without konva-node), the following should be possible for shared code:
loading images
public static loadKonvaImage(element, canvas): Promise<any> {
  return new Promise(resolve => {
    let image;
    if (canvas) {
      // node.js canvas image
      image = new canvas.Image();
    } else {
      // html browser image
      image = new Image();
    }
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Many props to the developer for the good work. We would look forward to using the library for a long time, but how can we if core functionality that we rely on is deprecated shortly after we started the project?
Another Stack Overflow answer mentioned Konva.isBrowser = false;. Maybe this is used to differentiate between a browser and a Node canvas?
So what does konva-node actually do to konva API?
It slightly patches the Konva code to use the canvas Node.js library for the 2D canvas API, so Konva will not use browser DOM APIs.
Is node.js still supported after deprecation of konva-node?
Yes. https://github.com/konvajs/konva#4-nodejs-env
How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
You can try to use this:
const canvas = layer.getNativeCanvasElement();
const buffer = canvas.toBuffer();
We have solved our problems in the following way:
create stage (shared between BE+FE)
public static createStage(stageWidth: number, stageHeight: number, canvas?: any): Konva.Stage {
  const stage = new Konva.Stage({
    width: stageWidth,
    height: stageHeight,
    container: canvas ? canvas : 'container'
  });
  return stage;
}
create raw image buffer (BE)
const frame: any = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
loading images (shared between BE+FE)
public static loadKonvaImage(element, canvas?: any): Promise<Konva.Image> {
  return new Promise(resolve => {
    const image = canvas ? new canvas.Image() : new Image();
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Two things we had to do:
We rewrote our whole backend and library code to use ESM modules and got rid of konva-node and Konva 7 in general.
We typed the Node module canvas as any in all the places where it is used. Konva seems to accept more inputs than specified in the type interfaces of its classes. canvas is only installed in the backend and is passed into some library methods as shown above.
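For context, the backend wiring looks roughly like this (a sketch only; SharedKonvaHelpers is a made-up name for the class holding the createStage/loadKonvaImage methods shown above, and the image values are invented):

import Konva from 'konva';
import canvas from 'canvas'; // only installed in the backend

async function renderFrame() {
  // pass the node canvas module into the shared helpers
  const stage = SharedKonvaHelpers.createStage(1920, 1080, canvas);
  const layer = new Konva.Layer();
  stage.add(layer);

  const konvaImage = await SharedKonvaHelpers.loadKonvaImage(
    { url: 'https://example.com/logo.png', width: 300, height: 200 },
    canvas
  );
  layer.add(konvaImage);
  layer.draw();
}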
@lavrton nice to hear from you. Your answer might also work for getting the Buffer, but it doesn't cover how to create the stage. Luckily we found a solution for both issues.
I'm working on an app using react-rails / webpacker, with a Rails server and a React frontend. On top of this, we are also using styled-components and have overridden the existing ReactRailsUJS.serverRender method in the app/javascript/packs/server_rendering.js file to account for styled-components, as follows (see https://github.com/reactjs/react-rails/issues/864#issue-291728172 for more info):
// server_rendering.js
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import { ServerStyleSheet } from 'styled-components';

const componentRequireContext = require.context('components', true);
const ReactRailsUJS = require('react_ujs');
ReactRailsUJS.useContext(componentRequireContext);

ReactRailsUJS.serverRender = function(renderFunction, componentName, props) {
  const ComponentConstructor = this.getConstructor(componentName);
  const stylesheet = new ServerStyleSheet();
  const wrappedElement = stylesheet.collectStyles(
    <ComponentConstructor {...props} />
  );
  const text = ReactDOMServer[renderFunction](wrappedElement);
  // prepend the style tags to the component HTML
  return `${stylesheet.getStyleTags()}${text}`;
}
This all works well, since the renderFunction param passed as the first argument of the ReactRailsUJS.serverRender function is the name of react-dom/server's renderToString method. However, we would like to update the application to use the renderToNodeStream method for rendering our React components, which is what brings me to this discussion.
I was wondering if there is anyone out there with some more in depth knowledge of how these libraries work at a more core level to help me figure out how we might be able to use renderToNodeStream, given our application setup.
Any advice / direction is appreciated, and I can provide additional information if necessary. Thank you for the consideration and help!
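For reference, this is the kind of streaming API I am hoping to plug in, based on the styled-components SSR docs (a sketch only, not integrated with react-rails; ComponentConstructor and props are the same as in the code above):

const stylesheet = new ServerStyleSheet();
const wrappedElement = stylesheet.collectStyles(
  <ComponentConstructor {...props} />
);
// interleaveWithNodeStream returns a Node stream with the style tags
// interleaved into the markup, but serverRender currently has to return a string.
const stream = stylesheet.interleaveWithNodeStream(
  ReactDOMServer.renderToNodeStream(wrappedElement)
);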
I have a marko website where I have some dynamic components being called via a for loop:
/pages/note/index.marko
import layout from "../../layouts/base"
<${layout} title="test">
  <for|card| of=input.cards>
    <${card} />
  </for>
</>
This is given a set of "notes" (just other Marko files with the content) that I want to fill the page with dynamically based on the request (this is handled in the server just fine). It is loading these notes fine.
However, when the card Marko file uses a component, the component only half works.
note1/index.marko
<math>5x+1=11</math>
math/index.marko
class {
  onCreate() {
    console.log("CREATED") // runs
  }
  onMount() {
    console.log("MOUNTED") // doesn't run
    // eventually I plan to run some math rendering code here
  }
}
<span><${input.renderBody} /></span>
The issue is that the browser side of things never runs. Also, I am getting an inexplicable error in the browser.
Edit: I changed the rendering in the routing and somehow the error went away.
routes.js
...
app.get("/note.html", async (req, res, next) => {
let title = req.query.title || "" // get the requested card
let dependencies = request(`./notes/${title}/dependencies.json`) || [] // get all of the linked cards to the requested card
let cards = [title, ...dependencies].map(note => request(`./notes/${note}`)) // get the marko elements for each card
// by this point, "cards" is a list with marko templates from the /notes/ directory
// render
let page = request(`./pages/note`, next)
let out = page.render({"title": title, "cards": cards}, res)
}
...
My file structure is set up like this:
server.js
routes.js
pages/
  note/
    index.marko
notes/
  note1/
    index.marko
  note2...
components/
  math/
    index.marko
layouts/
  base/
    index.marko
Using: node, express, marko, & lasso.
Your custom tag of <math> is colliding with the native MathML <math> element, which is why you’re getting that error only in the browser.
Try naming it something else, like <Math> or <my-math>.
Strange problem
I'm dynamically including the path to an image in the source directory.
Putting the image directory in as a string works fine (the part I commented out), but as soon as I put it in a variable it gives me the error "Cannot find module '.'".
var imageDir = "assets/img/MyImage.png";

// Working:
// const imageData = require('assets/img/MyImage.png');

// Not working:
const imageData = require(imageDir);
Anyone know why?
Same-ish problem here, but no answer unfortunately.
Webpack needs to know at compile time which files to bundle, but the real path value behind an expression (variable) is only available at runtime, so you need require.context:
/* If the structure is like:
   src
    |
    -- index.js (where this code is deployed)
    |
    -- assets
        |
        -- img
*/
let assetsPath = require.context('./assets/img', false, /\.(png|jpe?g|svg)$/);

// See where the images are after bundling:
// console.log(assetsPath('./MyImage.png'));

// You can put all the images you want in './assets/img' and access them.
var newElement = {
  "id": doc.id,
  "background": assetsPath('./MyImage.png')
};
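For the original dynamic case, only the file name needs to stay in a variable; the directory and the extension filter are fixed at compile time (a sketch; fileName is hypothetical):

// fileName could come from props, state, an API response, etc.
const fileName = 'MyImage.png';
const imageData = assetsPath('./' + fileName);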
If you wish to use an image in your React web application, you can require it directly when it is used in an HTML tag; but if you need it in the JavaScript part, you must use const image = require('../Assets/image/03.jpg') and later reference this constant as {image} between your JSX tags.
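A minimal sketch of what that looks like in a component (the path is just the example from above):

import React from 'react';

const image = require('../Assets/image/03.jpg');

const Example = () => <img src={image} alt="example" />;

export default Example;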
I have an Electron app that loads a URL from a PHP server, and the page contains an iframe whose source is a PDF. The PDF page looks absolutely fine in a normal web browser but asks for download in Electron. Any help?
My code for the HTML page is:
<h1>Hello World!</h1>
Some html content here...
<iframe src="http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf" width="1200" height="800"></iframe>
And my js code is something like
mainWindow = new BrowserWindow({ width: 800, height: 600 })

mainWindow.loadURL(url.format({
  pathname: path.join(__dirname, 'index.html'),
  protocol: 'file:',
  slashes: true
}))

app.on('ready', createWindow)
Any help would be greatly appreciated...
Electron already ships with an integrated PDF viewer.
So you can load PDF files just like normal HTML files and the PDF viewer will automatically show up.
E.g. in a BrowserWindow with .loadURL(…), in <iframe>s, <object>, and also with the (at the moment discouraged) <webview>.
PS: Enabling the plugins property in the BrowserWindow or <webview> is no longer needed since Electron 9.
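So a minimal sketch (assuming Electron 9+ and using the PDF URL from the question) would simply be:

const { app, BrowserWindow } = require('electron')

app.on('ready', () => {
  const win = new BrowserWindow({ width: 800, height: 600 })
  // the integrated PDF viewer takes over for this URL
  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})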
You will need
https://github.com/gerhardberger/electron-pdf-window
Example:
const { app } = require('electron')
const PDFWindow = require('electron-pdf-window')

app.on('ready', () => {
  const win = new PDFWindow({
    width: 800,
    height: 600
  })

  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})
This answer will focus on implementation with Angular.
After a year of waiting for this to be solved by Electron, I finally decided to apply a workaround. For the people who need it done, here it goes. The workaround comes at the cost of about 500 KB of additional bundle size (for Angular).
The workaround is to use Mozilla's PDF.js library.
NPM
GitHub
Implementation 1 (Setting nodeIntegration: true)
This implementation has no issues; you can implement it following the documentation of the library mentioned above. But if you run into an additional problem, like a white window appearing when the route changes, it is due to setting the nodeIntegration property to true. If so, use the following implementation.
Implementation 2 (Setting nodeIntegration: false)
This is the Electron default. Using this configuration and viewing the PDF is a bit tricky. The solution is to use a Uint8Array instead of a blob or base64.
You can use the following function to convert base64 to Uint8Array.
base64ToArrayBuffer(data): Uint8Array {
  const input = data.substring(data.indexOf(',') + 1);
  const binaryString = window.atob(input ? input : data);
  const binaryLen = binaryString.length;
  const bytes = new Uint8Array(binaryLen);
  for (let i = 0; i < binaryLen; i++) {
    const ascii = binaryString.charCodeAt(i);
    bytes[i] = ascii;
  }
  return bytes;
}
Or convert blob to array buffer
const blob = response;
let arrayBuffer = null;
arrayBuffer = await new Response(blob).arrayBuffer();
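The ArrayBuffer still needs to be wrapped in a Uint8Array (a small sketch, following on from the snippet above):

// wrap the ArrayBuffer so it matches what the viewer expects
const pdfSource = new Uint8Array(arrayBuffer);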
Then pass the generated Uint8Array as the pdfSource to the ng2-pdfjs-viewer.
HTML
<ng2-pdfjs-viewer zoom="100" [pdfSrc]="pdfSource"></ng2-pdfjs-viewer>
Electron 9.0.0 has the PDF viewer enabled already.
npm install electron@9.0.0