Jest / Enzyme - How to test at different viewports?

I am trying to run a test on a component at a certain viewport width. I am doing the following, but this doesn't seem to change it:
test('Component should do something at a certain viewport width.', () => {
  global.innerWidth = 2000;
  const component = mount(<SomeComponent />);
  ...
});
I also found an article that explains how to do it using JSDom, but as Jest now ships with JSDom, I wondered if there was a native solution.
https://www.codementor.io/pkodmad/dom-testing-react-application-jest-k4ll4f8sd

Background Information:
jsdom does not implement window.resizeBy() or window.resizeTo().
jsdom defines window.innerWidth and window.innerHeight to be 1024 x 768.
It is possible to simulate a window resize using jsdom by manually setting window.innerWidth and window.innerHeight and firing the resize event.
Here is an example:
comp.js
import * as React from 'react';

export default class Comp extends React.Component {
  constructor(...args) {
    super(...args);
    this.state = { width: 0, height: 0 };
  }
  updateDimensions = () => {
    this.setState({ width: window.innerWidth, height: window.innerHeight });
  }
  componentDidMount() {
    this.updateDimensions();
    window.addEventListener("resize", this.updateDimensions);
  }
  componentWillUnmount() {
    window.removeEventListener("resize", this.updateDimensions);
  }
  render() {
    return <div>{this.state.width} x {this.state.height}</div>;
  }
}
comp.test.js
import * as React from 'react';
import { shallow } from 'enzyme';
import Comp from './comp';

const resizeWindow = (x, y) => {
  window.innerWidth = x;
  window.innerHeight = y;
  window.dispatchEvent(new Event('resize'));
};

describe('Comp', () => {
  it('should display the window size', () => {
    const component = shallow(<Comp />);
    expect(component.html()).toEqual('<div>1024 x 768</div>');
    resizeWindow(500, 300);
    expect(component.html()).toEqual('<div>500 x 300</div>');
    resizeWindow(2880, 1800);
    expect(component.html()).toEqual('<div>2880 x 1800</div>');
  });
});
Notes:
As of Enzyme v3, shallow calls React lifecycle methods like componentDidMount(), so it can be used in place of mount.
This answer borrows heavily from information in several related answers and from @JoeTidee's own answer.

If you're using TypeScript, it will complain that window.innerWidth/innerHeight are read-only.
You can get around this either by redeclaring the property:
Object.defineProperty(window, 'innerWidth', { writable: true, configurable: true, value: 105 });
or by using Object.assign:
window = Object.assign(window, { innerWidth: 105 });
Neither is an especially nice solution, but they work.
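For example, here's a minimal sketch of a reusable test helper built on the Object.defineProperty workaround above (the setViewport name and its usage are my own, not from the original answers):

const setViewport = (width, height) => {
  // Redeclare the read-only properties so jsdom (and TypeScript) accept the writes
  Object.defineProperty(window, 'innerWidth', { writable: true, configurable: true, value: width });
  Object.defineProperty(window, 'innerHeight', { writable: true, configurable: true, value: height });
  // Notify anything registered via window.addEventListener('resize', ...)
  window.dispatchEvent(new Event('resize'));
};

// Usage in a test:
// setViewport(500, 300);
// expect(component.html()).toEqual('<div>500 x 300</div>');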

Works for me. Code is no longer marked as uncovered.
it('resize event listener changes the state', () => {
  const wrapper = shallow(<Component />);
  const instance = wrapper.instance();
  instance.setState({
    mobileMode: true
  });
  global.innerWidth = 800;
  window.dispatchEvent(new Event('resize'));
  expect(instance.state.mobileMode).toBeFalsy();
  global.innerWidth = 600;
  window.dispatchEvent(new Event('resize'));
  expect(instance.state.mobileMode).toBeTruthy();
});
The resize listener inside my component:
...
resizeListener = () => {
  if (window.innerWidth < 768) {
    this.setState({
      mobileMode: true
    });
  } else {
    this.setState({
      mobileMode: false
    });
  }
};

window.addEventListener('resize', resizeListener);
...

Related

GSAP timeline needed on every page in Gatsby

My Gatsby site uses the same GSAP timeline on every page, so I want to stay DRY, and my idea is to include the timeline in my Layout component for that reason.
But I don't know how to pass the refs I need between children and layout using forwardRef.
In short, I don't know how to handle the sectionsRef part between pages and layout.
sectionsRef depends on the page content (children) but is needed by the timeline living in Layout.
How can I share sectionsRef between these two? (I tried many things, but they always led to errors.)
Here's a codesandbox without the refs in the Layout:
https://codesandbox.io/s/jolly-almeida-njt2e?file=/src/pages/index.js
And the sandbox with the refs in the layout:
https://codesandbox.io/s/pensive-varahamihira-tc45m?file=/src/pages/index.js
Here's a simplified version of my files:
Layout.js
export default function Layout({ children }) {
  const containerRef = useRef(null);
  const sectionsRef = useRef([]);
  sectionsRef.current = [];

  useEffect(() => {
    gsap.registerPlugin(ScrollTrigger);
    const scrollTimeline = gsap.timeline();
    scrollTimeline.to(sectionsRef.current, {
      x: () =>
        `${-(
          containerRef.current.scrollWidth -
          document.documentElement.clientWidth
        )}px`,
      ease: 'none',
      scrollTrigger: {
        trigger: containerRef.current,
        invalidateOnRefresh: true,
        scrub: 0.5,
        pin: true,
        start: () => `top top`,
        end: () =>
          `+=${
            containerRef.current.scrollWidth -
            document.documentElement.clientWidth
          }`,
      },
    });
  }, [containerRef, sectionsRef]);

  return (
    <div className="slides-container" ref={containerRef}>
      {children}
    </div>
  );
}
index.js (page)
import { graphql } from 'gatsby';
import React, { forwardRef } from 'react';
import SectionImage from '../components/sections/SectionImage';
import SectionIntro from '../components/sections/SectionIntro';
import SectionColumns from '../components/sections/SectionColumns';

const HomePage = ({ data: { home } }, sectionsRef) => {
  const { sections } = home;
  const addToRefs = (el) => {
    if (el && !sectionsRef.current.includes(el)) {
      sectionsRef.current.push(el);
    }
  };
  return (
    <>
      {sections.map((section) => {
        if (section.__typename === 'SanitySectionIntro') {
          return (
            <SectionIntro key={section.id} section={section} ref={addToRefs} />
          );
        }
        if (section.__typename === 'SanitySectionImage') {
          return (
            <SectionImage key={section.id} section={section} ref={addToRefs} />
          );
        }
        if (section.__typename === 'SanitySectionColumns') {
          return (
            <SectionColumns
              key={section.id}
              section={section}
              ref={addToRefs}
            />
          );
        }
        return '';
      })}
    </>
  );
};

export default forwardRef(HomePage);

export const query = graphql`
  query HomeQuery {
    # ...
  }
`;
Any clue greatly appreciated :)
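One possible direction (a sketch of my own, not an answer from the thread): rather than forwarding a ref through Gatsby's page tree, Layout can own both the ref array and the addToRefs callback, and expose the callback through a React context that the section components consume. The RefsContext / useAddToRefs names are hypothetical:

// RefsContext.js - hypothetical module
import React, { createContext, useContext } from 'react';
export const RefsContext = createContext(() => {});
export const useAddToRefs = () => useContext(RefsContext);

// Layout.js - same timeline effect as above, plus the provider
export default function Layout({ children }) {
  const containerRef = useRef(null);
  const sectionsRef = useRef([]);
  sectionsRef.current = [];
  const addToRefs = (el) => {
    if (el && !sectionsRef.current.includes(el)) {
      sectionsRef.current.push(el);
    }
  };
  // ... the useEffect building the GSAP timeline goes here, unchanged ...
  return (
    <RefsContext.Provider value={addToRefs}>
      <div className="slides-container" ref={containerRef}>
        {children}
      </div>
    </RefsContext.Provider>
  );
}

// Inside each section component, consume the callback as a plain callback ref:
// const addToRefs = useAddToRefs();
// return <section ref={addToRefs}>...</section>;

Since children mount (and attach their refs) before the parent Layout's useEffect runs, sectionsRef.current should be populated by the time the timeline is built, and forwardRef is no longer needed on the pages.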

How to apply an SVG texture to an OBJ file in Three.js

I'm using Three.js for a project of mine. I render an object file using the OBJ loader and it displays the object on the screen, but I don't know how to map an SVG image to that object, i.e. how to apply a texture to that object file.
Please help; this is my current code.
I'm new to this platform and don't know much about Three.js. I've seen some examples, but they didn't work out for me.
One person on my recent post told me how to apply a material to the object, but it didn't work for me.
When I apply the material I get this error:
ERROR TypeError: Cannot set property 'map' of undefined
Here is my complete code file.
import { Component, AfterViewInit, ViewChild, Input, ElementRef } from '@angular/core';
import * as THREE from 'three';
import { OrbitControls } from '@avatsaev/three-orbitcontrols-ts';
import { OBJLoader } from 'three-obj-mtl-loader';
import { TextureLoader } from 'three';

@Component({
  selector: 'app-scene',
  templateUrl: './scene.component.html',
  styleUrls: ['./scene.component.css']
})
export class SceneComponent implements AfterViewInit {
  @Input() name: string;
  @ViewChild('canvas', { static: true }) canvasRef: ElementRef;
  renderer = new THREE.WebGLRenderer;
  scene = null;
  camera = null;
  controls = null;
  mesh = null;
  light = null;
  loader;
  svgLoader;

  private calculateAspectRatio(): number {
    const height = this.canvas.clientHeight;
    if (height === 0) {
      return 0;
    }
    return this.canvas.clientWidth / this.canvas.clientHeight;
  }

  private get canvas(): HTMLCanvasElement {
    return this.canvasRef.nativeElement;
  }

  constructor() {
    // this.loader = new OBJLoader();
    this.scene = new THREE.Scene();
    this.loader = new OBJLoader();
    this.camera = new THREE.PerspectiveCamera(15, window.innerWidth / window.innerHeight, 0.1, 1000);
  }

  ngAfterViewInit() {
    this.configScene();
    this.configCamera();
    this.configRenderer();
    this.configControls();
    this.createLight();
    this.createMesh();
    this.animate();
  }

  configScene() {
    // this.scene.background = new THREE.Color( 0xdddddd );
  }

  configCamera() {
    this.camera.aspect = this.calculateAspectRatio();
    this.camera.updateProjectionMatrix();
    this.camera.position.set( 0, 0, 3 );
    this.camera.lookAt( this.scene.position );
  }

  configRenderer() {
    this.renderer = new THREE.WebGLRenderer({
      canvas: this.canvas,
      antialias: true,
      alpha: true
    });
    this.renderer.setPixelRatio(devicePixelRatio);
    // setClearColor for transparent background
    // i.e. scene or canvas background shows through
    this.renderer.setClearColor( 0x000000, 0 );
    this.renderer.setSize((window.innerWidth / 2), (window.innerHeight / 2));
    window.addEventListener('resize', () => {
      this.renderer.setSize((window.innerWidth / 2), (window.innerHeight) / 2);
      this.camera.aspect = window.innerWidth / window.innerHeight;
      this.camera.updateProjectionMatrix();
    });
    console.log('clientWidth', this.canvas.clientWidth);
    console.log('clientHeight', this.canvas.clientHeight);
  }

  configControls() {
    this.controls = new OrbitControls(this.camera);
    this.controls.autoRotate = false;
    this.controls.enableZoom = false;
    // this.controls.maxDistance = 5;
    // this.controls.minDistance = 10;
    this.controls.enablePan = false;
    this.controls.update();
  }

  createLight() {
    this.light = new THREE.PointLight( 0xffffff );
    this.light.position.set( -10, 10, 10 );
    this.scene.add( this.light );
  }

  createMesh() {
    const url = '../../../../assets/abc.svg';
    this.loader.load('../../../../assets/nonunified.obj', (object) => {
      object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
          child.geometry.center();
        }
      } );
      object.material.map = new TextureLoader().load(url);
      this.scene.add(object);
    },
    // called when loading is in progress
    function (xhr) {
      console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
    },
    // called when loading has errors
    function ( error ) {
      console.log( 'An error happened' );
    });
  }

  animate() {
    window.requestAnimationFrame(() => this.animate());
    this.controls.update();
    this.renderer.render(this.scene, this.camera);
  }
}
You have not created a material. If you do console.log(object.material); it will show undefined. You first need to create a material; check the three.js docs for the different materials that can be used. For this example I am using MeshPhongMaterial, so your createMesh function will look like this (note that OBJLoader returns a Group, so the material has to be applied to each child mesh):
createMesh() {
  const url = '../../../../assets/abc.svg';
  this.loader.load('../../../../assets/nonunified.obj', (object) => {
    const material = new THREE.MeshPhongMaterial({
      map: new TextureLoader().load(url)
    });
    object.traverse(function (child) {
      if (child instanceof THREE.Mesh) {
        child.geometry.center();
        child.material = material;
      }
    });
    this.scene.add(object);
  },
  // called when loading is in progress
  function (xhr) {
    console.log((xhr.loaded / xhr.total * 100) + '% loaded');
  },
  // called when loading has errors
  function (error) {
    console.log('An error happened');
  });
}
This should work.
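One caveat worth adding (my note, not from the original answer): TextureLoader loads the SVG through an image element, and an SVG without explicit width/height attributes may rasterize at a tiny default size, leaving the texture blank or blurry. If that happens, a sketch like the following rasterizes the SVG to a canvas first and uses THREE.CanvasTexture; the svgToTexture name and the 512x512 size are illustrative assumptions:

// Hypothetical helper: rasterize an SVG to a canvas-backed texture.
function svgToTexture(url, size = 512) {
  const canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;
  const texture = new THREE.CanvasTexture(canvas);
  const img = new Image();
  img.onload = () => {
    canvas.getContext('2d').drawImage(img, 0, 0, size, size);
    texture.needsUpdate = true; // tell three.js the canvas contents changed
  };
  img.src = url;
  return texture;
}
// Usage: new THREE.MeshPhongMaterial({ map: svgToTexture(url) })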

Lottie Animation in fabricjs canvas

Is it possible to load a Lottie animation in a fabric.js canvas?
I have tried the following samples
bodymovin.loadAnimation({
  wrapper: animateElement, // div element
  loop: true,
  animType: 'canvas', // fabricjs canvas
  animationData: dataValue, // AE json
  rendererSettings: {
    scaleMode: 'noScale',
    clearCanvas: true,
    progressiveLoad: false,
    hideOnTransparent: true,
  }
});
canvas.add(bodymovin);
canvas.renderAll();
I can't manage to add the animation to the fabric.js canvas. If anyone has overcome this issue, kindly comment on it.
I might be late to answer this, but for anyone else looking, this pen could give you some pointers: https://codepen.io/shkaper/pen/oEKEgG
The idea here, first of all, is to extend the fabric.Image class, overriding its internal render method to render the contents of an arbitrary canvas that you yourself provide:
fabric.AEAnimation = fabric.util.createClass(fabric.Image, {
  drawCacheOnCanvas: function (ctx) {
    ctx.drawImage(this._AECanvas, -this.width / 2, -this.height / 2);
  },
});
You can make this canvas a constructor argument, e.g.
initialize: function (AECanvas, options) {
  options = options || {};
  this.callSuper('initialize', AECanvas, options);
  this._AECanvas = AECanvas;
},
Then you'll just need to use lottie's canvas renderer to draw animation on a canvas and pass it to your new fabric.AEAnimation object.
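A minimal sketch of that wiring, reusing the AEAnimation class above (dataValue is the AE JSON from the question; the sizes are arbitrary assumptions):

// Draw the Lottie animation into an offscreen canvas via lottie's canvas
// renderer, then hand that canvas to the fabric.AEAnimation object.
const aeCanvas = document.createElement('canvas');
aeCanvas.width = 480;
aeCanvas.height = 480;
const anim = bodymovin.loadAnimation({
  renderer: 'canvas',
  loop: true,
  autoplay: true,
  animationData: dataValue,
  rendererSettings: { context: aeCanvas.getContext('2d'), clearCanvas: true },
});
const aeImage = new fabric.AEAnimation(aeCanvas, { width: 480, height: 480 });
canvas.add(aeImage);
// Re-render fabric on every Lottie frame so the animation stays live
anim.addEventListener('enterFrame', () => canvas.requestRenderAll());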
I would assume so, by combining your code with something similar to https://itnext.io/video-element-serialization-and-deserialization-of-canvas-fc5dbf47666d. Depending on your scenario you might be able to get away with using something like http://fabricjs.com/interaction-with-objects-outside-canvas
If it's of any help, I've created this Lottie class with support for exporting toObject/JSON:
import { fabric } from 'fabric'
import lottie from 'lottie-web'

const Lottie = fabric.util.createClass(fabric.Image, {
  type: 'lottie',
  lockRotation: true,
  lockSkewingX: true,
  lockSkewingY: true,
  srcFromAttribute: false,

  initialize: function (path, options) {
    if (!options.width) options.width = 480
    if (!options.height) options.height = 480
    this.path = path
    this.tmpCanvasEl = fabric.util.createCanvasElement()
    this.tmpCanvasEl.width = options.width
    this.tmpCanvasEl.height = options.height
    this.lottieItem = lottie.loadAnimation({
      renderer: 'canvas',
      loop: true,
      autoplay: true,
      path,
      rendererSettings: {
        context: this.tmpCanvasEl.getContext('2d'),
        preserveAspectRatio: 'xMidYMid meet',
      },
    })
    // this.lottieItem.addEventListener('DOMLoaded', () => {
    //   console.log('DOMLoaded')
    // })
    this.lottieItem.addEventListener('enterFrame', (e) => {
      this.canvas?.requestRenderAll()
    })
    this.callSuper('initialize', this.tmpCanvasEl, options)
  },
  play: function () {
    this.lottieItem.play()
  },
  stop: function () {
    this.lottieItem.stop()
  },
  getSrc: function () {
    return this.path
  },
})

Lottie.fromObject = function (_object, callback) {
  const object = fabric.util.object.clone(_object)
  fabric.Image.prototype._initFilters.call(object, object.filters, function (filters) {
    object.filters = filters || []
    fabric.Image.prototype._initFilters.call(object, [object.resizeFilter], function (resizeFilters) {
      object.resizeFilter = resizeFilters[0]
      fabric.util.enlivenObjects([object.clipPath], function (enlivedProps) {
        object.clipPath = enlivedProps[0]
        const fabricLottie = new fabric.Lottie(object.src, object)
        callback(fabricLottie, false)
      })
    })
  })
}
Lottie.async = true

export default Lottie
To create a Lottie element, just pass the JSON URL:
const fabricImage = new fabric.Lottie('https://assets5.lottiefiles.com/private_files/lf30_rttpmsbc.json', {
  scaleX: 0.5,
})
canvas.add(fabricImage)

How can I use Esri Arcgis Map in ReactJs Project?

I'm trying to use an Esri map. To include the map in my project, here is what I found:
require([
  "esri/map",
  "esri/dijit/Search",
  "esri/dijit/LocateButton",
  "esri/geometry/Point",
  "esri/symbols/SimpleFillSymbol",
  "esri/symbols/SimpleMarkerSymbol",
  "esri/symbols/SimpleLineSymbol",
But there isn't any esri folder or npm package, so I'm confused here. How is esri imported into the project?
Use esri-loader to load the required esri modules. This is a component that renders a basemap.
import React, { Component } from 'react';
import { loadModules } from 'esri-loader';

const options = {
  url: 'https://js.arcgis.com/4.6/'
};

const styles = {
  container: {
    height: '100vh',
    width: '100vw'
  },
  mapDiv: {
    padding: 0,
    margin: 0,
    height: '100%',
    width: '100%'
  },
}

class BaseMap extends Component {
  constructor(props) {
    super(props);
    this.state = {
      status: 'loading'
    }
  }
  componentDidMount() {
    loadModules(['esri/Map', 'esri/views/MapView'], options)
      .then(([Map, MapView]) => {
        const map = new Map({ basemap: "streets" });
        const view = new MapView({
          container: "viewDiv",
          map,
          zoom: 15,
          center: [78.4867, 17.3850]
        });
        view.then(() => {
          this.setState({
            map,
            view,
            status: 'loaded'
          });
        });
      })
  }
  renderMap() {
    if (this.state.status === 'loading') {
      return <div>loading</div>;
    }
  }
  render() {
    return (
      <div style={styles.container}>
        <div id='viewDiv' style={styles.mapDiv}>
          {this.renderMap()}
        </div>
      </div>
    )
  }
}

export default BaseMap;
This renders a base map, but it is not responsive. If I remove the div around viewDiv, or if I give the outer div (surrounding viewDiv) a relative height and width ({ height: '100%', width: '100%' }), the map does not render, and I have no idea why. Any suggestions to make it responsive would be appreciated.
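A hedged guess about the non-rendering (my note, not part of the original answer): a percentage height only resolves if every ancestor in the chain has a height, so giving the outer div { height: '100%' } collapses it to 0 unless html, body, and the React mount point also have heights. Something along these lines usually fixes it (the #root id is an assumption about the mount point):

html, body, #root {
  height: 100%;
  width: 100%;
  margin: 0;
}

With that in place, the container and mapDiv styles can use height: '100%' instead of 100vh, and the map should track its parent's size.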
An alternative method to the above is the one demonstrated in esri-react-router-example. That application uses a library called esri-loader to lazy load the ArcGIS API only in components/routes where it is needed. Example:
First, install the esri-loader library:
npm install esri-loader --save
Then import the esri-loader functions in any react module:
import * as esriLoader from 'esri-loader'
Then lazy load the ArcGIS API:
componentDidMount () {
  if (!esriLoader.isLoaded()) {
    // lazy load the arcgis api
    const options = {
      // use a specific version instead of latest 4.x
      url: '//js.arcgis.com/3.18compact/'
    }
    esriLoader.bootstrap((err) => {
      if (err) {
        console.error(err)
      }
      // now that the arcgis api has loaded, we can create the map
      this._createMap()
    }, options)
  } else {
    // arcgis api is already loaded, just create the map
    this._createMap()
  }
},
Then load the ArcGIS API's (Dojo) modules that are needed to create a map:
_createMap () {
  // get item id from route params or use default
  const itemId = this.props.params.itemId || '8e42e164d4174da09f61fe0d3f206641'
  // require the map class
  esriLoader.dojoRequire(['esri/arcgis/utils'], (arcgisUtils) => {
    // create a map at a DOM node in this component
    arcgisUtils.createMap(itemId, this.refs.map)
      .then((response) => {
        // hide the loading indicator
        // and show the map title
        // NOTE: this will trigger a rerender
        this.setState({
          mapLoaded: true,
          item: response.itemInfo.item
        })
      })
  })
}
The benefit of using esri-loader over the approach shown above is that you don't have to use the Dojo loader and toolchain to load and build your entire application. You can use the React toolchain of your choice (webpack, etc).
This blog post explains how this approach works and compares it to other (similar) approaches used in applications like esri-redux.
You don't need to import the esri API the way you import other ReactJS modules. Since the React file ultimately compiles into a JS file, you write the esri parts as-is and mix in the ReactJS parts for handling the DOM node, which is the main purpose of ReactJS.
A sample from the links below is here:
define([
  'react',
  'esri/toolbars/draw',
  'esri/geometry/geometryEngine',
  'dojo/topic',
  'dojo/on',
  'helpers/NumFormatter'
], function(
  React,
  Draw, geomEngine,
  topic, on,
  format
) {
  var fixed = format(3);

  var DrawToolWidget = React.createClass({
    getInitialState: function() {
      return {
        startPoint: null,
        btnText: 'Draw Line',
        distance: 0,
        x: 0,
        y: 0
      };
    },
    componentDidMount: function() {
      this.draw = new Draw(this.props.map);
      this.handler = this.draw.on('draw-end', this.onDrawEnd);
      this.subscriber = topic.subscribe(
        'map-mouse-move', this.mapCoordsUpdate
      );
    },
    componentWillUnmount: function() {
      this.handler.remove();
      this.subscriber.remove();
    },
    onDrawEnd: function(e) {
      this.draw.deactivate();
      this.setState({
        startPoint: null,
        btnText: 'Draw Line'
      });
    },
    mapCoordsUpdate: function(data) {
      this.setState(data);
      // not sure I like this conditional check
      if (this.state.startPoint) {
        this.updateDistance(data);
      }
    },
    updateDistance: function(endPoint) {
      var distance = geomEngine.distance(this.state.startPoint, endPoint);
      this.setState({ distance: distance });
    },
    drawLine: function() {
      this.setState({ btnText: 'Drawing...' });
      this.draw.activate(Draw.POLYLINE);
      on.once(this.props.map, 'click', function(e) {
        this.setState({ startPoint: e.mapPoint });
        // soo hacky, but Draw.LINE interaction is odd to use
        on.once(this.props.map, 'click', function() {
          this.onDrawEnd();
        }.bind(this));
      }.bind(this))
    },
    render: function() {
      return (
        <div className='well'>
          <button className='btn btn-primary' onClick={this.drawLine}>
            {this.state.btnText}
          </button>
          <hr />
          <p>
            <label>Distance: {fixed(this.state.distance)}</label>
          </p>
        </div>
      );
    }
  });

  return DrawToolWidget;
});
Below are the links where you can find information in detail.
http://odoe.net/blog/esrijs-reactjs/
https://geonet.esri.com/people/odoe/blog/2015/04/01/esrijs-with-reactjs-updated

How to efficiently render a React component with realtime dynamic data from socket.io

My front-end page is built with React + Flux, and it sends script data to a back-end Node.js server.
The script data is an array containing Linux shell arguments (more than 100000 of them). When the back-end receives it, it executes the Linux shell commands.
Just an example:
cat ~/testfile1
cat ~/testfile2
.
.
.
(100000 times ...etc)
When the back-end finishes one of the shell commands, it saves the output as result data, and socket.io emits the result data to the front-end.
I want to get the result data on my webpage in real time, so I have done the following in my project.
My React component code:
import React from 'react';
import AppActions from '../../../actions/app-actions';
import SocketStore from '../../../stores/socket-store';
import ResultStore from '../../../stores/result-store';

function getSocket () {
  return SocketStore.getSocket();
}

function getResult () {
  return ResultStore.getResultItem();
}

class ListResultItem extends React.Component {
  constructor () {
    super();
  }
  render () {
    return <li>
      {this.props.result.get('name')} {this.props.result.get('txt')}
    </li>;
  }
}

class ShowResult extends React.Component {
  constructor () {
    super();
    this.state = {
      socket: getSocket(),
      result: getResult()
    };
  }
  componentWillMount () {
    ResultStore.addChangeListener(this._onChange.bind(this));
  }
  _onChange () {
    this.setState({
      result: getResult()
    });
  }
  render () {
    return <div>
      <ol>
        {this.state.result.map(function(item, index) {
          return <ListResultItem key={index} result={item} />;
        })}
      </ol>
    </div>;
  }
  componentDidMount () {
    this.state.socket.on('result', function (data) {
      AppActions.addResult(data);
    });
  }
}
My Flux store (ResultStore) code:
import AppConstants from '../constants/app-constants.js';
import { dispatch, register } from '../dispatchers/app-dispatcher.js';
import { EventEmitter } from 'events';
import Immutable from 'immutable';

const CHANGE_EVENT = 'changeResult';

let _resultItem = Immutable.List();

const _addResult = (result) => {
  let immObj = Immutable.fromJS(result);
  _resultItem = _resultItem.push(immObj);
}

const _clearResult = () => {
  _resultItem = _resultItem.clear();
}

// Assign onto a fresh object so EventEmitter.prototype itself is not mutated
const ResultStore = Object.assign({}, EventEmitter.prototype, {
  emitChange () {
    this.emit(CHANGE_EVENT);
  },
  addChangeListener (callback) {
    this.on(CHANGE_EVENT, callback);
  },
  removeChangeListener (callback) {
    this.removeListener(CHANGE_EVENT, callback);
  },
  getResultItem () {
    return _resultItem;
  },
  dispatcherIndex: register(function (action) {
    switch (action.actionType) {
      case AppConstants.ADD_RESULT:
        _addResult(action.result);
        break;
      case AppConstants.CLEAR_RESULT:
        _clearResult();
        break;
    }
    ResultStore.emitChange();
  })
});
However, the page becomes very slow after rendering more than 1000 items. How can I improve the rendering performance? I need to keep executing the shell script for more than 3 days. Any solutions? Thanks~
Is there any need to render all the data on screen? If not, there are a few ways to cut down the amount of visible data.
Filter / Search
You can provide a search/filter component that complements the list and creates a predicate function that can be used to determine whether each item should or should not be rendered.
<PredicateList>
  <Search />
  <Filter />
  {this.state.result
    .filter(predicate)
    .map(function(item, index) {
      return <ListResultItem key={index} result={item} />;
    })
  }
</PredicateList>
Lazy Load
Load the items only when they are asked for. You can work out whether an item is needed by keeping track of whether it would be on screen, or whether the mouse is over it.
var Lazy = React.createClass({
  getInitialState: function() {
    return { loaded: false };
  },
  load: function() {
    this.setState({ loaded: true });
  },
  render: function() {
    var loaded = this.state.loaded,
        component = this.props.children,
        lazyContainer = <div onMouseEnter={this.load} />;
    return loaded ? component : lazyContainer;
  }
});
Then simply wrap your data items inside these Lazy wrappers to have them render when they are requested.
<Lazy>
  <ListResultItem key={index} result={item} />
</Lazy>
This ensures that only data needed by the user is seen. You could also improve the load trigger to work for more complex scenarios, such as when the component has been on screen for more than 2 seconds.
Pagination
Finally, the last and most tried-and-tested approach is pagination. Choose a limit for the number of data items that can be shown in one go, then allow users to navigate through the data set in chunks.
var Paginate = React.createClass({
  getDefaultProps: function() {
    return { items: [], perPage: 100 };
  },
  getInitialState: function() {
    return { page: 0 };
  },
  next: function() {
    this.setState({ page: this.state.page + 1 });
  },
  prev: function() {
    this.setState({ page: this.state.page - 1 });
  },
  render: function() {
    var perPage = this.props.perPage,
        currentPage = this.state.page,
        itemCount = this.props.items.length;
    var start = currentPage * perPage,
        end = Math.min(itemCount, start + perPage);
    var selectedItems = this.props.items.slice(start, end);
    return (
      <div className='pagination'>
        {selectedItems.map(function(item, index) {
          return <ListResultItem key={index} result={item} />;
        })}
        <a onClick={this.prev}>Previous {perPage} items</a>
        <a onClick={this.next}>Next {perPage} items</a>
      </div>
    );
  }
});
These are just very rough examples of implementations for managing the rendering of large amounts of data in efficient ways, but hopefully they will make enough sense for you to implement your own solution.
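One further option worth mentioning (my addition, not part of the original answer) is windowing/virtualization: render only the rows that fit in the scroll viewport, slicing the list from scrollTop. A rough sketch in the same createClass style as above, assuming fixed-height rows:

var VirtualList = React.createClass({
  getDefaultProps: function() {
    return { items: [], rowHeight: 24, height: 400 };
  },
  getInitialState: function() {
    return { scrollTop: 0 };
  },
  onScroll: function(e) {
    this.setState({ scrollTop: e.target.scrollTop });
  },
  render: function() {
    var rowHeight = this.props.rowHeight,
        start = Math.floor(this.state.scrollTop / rowHeight),
        visibleCount = Math.ceil(this.props.height / rowHeight) + 1,
        visible = this.props.items.slice(start, start + visibleCount);
    // The outer div scrolls; the inner div reserves the full list height so
    // the scrollbar stays proportional to the whole data set.
    return (
      <div style={{ height: this.props.height, overflowY: 'auto' }} onScroll={this.onScroll}>
        <div style={{ height: this.props.items.length * rowHeight, position: 'relative' }}>
          <ol style={{ position: 'absolute', top: start * rowHeight, margin: 0 }}>
            {visible.map(function(item, index) {
              return <ListResultItem key={start + index} result={item} />;
            })}
          </ol>
        </div>
      </div>
    );
  }
});

Libraries such as react-virtualized implement this pattern with more care (variable row heights, overscan), so reaching for one of those may be easier than hand-rolling it.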
