All,
I am new to React and Node, and I am trying to create a React app to manage Azure Blob Storage. I found the Microsoft documentation really helpful:
https://learn.microsoft.com/en-us/azure/storage/blobs/quickstart-blobs-javascript-browser
This documentation helps me build what I want in Node, but I can't figure out how to bring these functionalities into React so I can include them in my UI. For example, I want to create a button in React that lists blobs; this functionality can be achieved with the following code:
// Declare the field for the UI element
const listButton = document.getElementById("list-button");

const listFiles = async () => {
  fileList.size = 0;
  fileList.innerHTML = "";
  try {
    reportStatus("Retrieving file list...");
    let iter = containerClient.listBlobsFlat();
    let blobItem = await iter.next();
    while (!blobItem.done) {
      fileList.size += 1;
      fileList.innerHTML += `<option>${blobItem.value.name}</option>`;
      blobItem = await iter.next();
    }
    if (fileList.size > 0) {
      reportStatus("Done.");
    } else {
      reportStatus("The container does not contain any files.");
    }
  } catch (error) {
    reportStatus(error.message);
  }
};

listButton.addEventListener("click", listFiles);
Now, how could I link this functionality to a button in React? I tried, in vain, something like:
<button id="list-button">List files</button>
Could you help me solve this, or point me to a source where I can learn more about how to solve similar issues and about best practices for connecting frontend and backend?
A beginner-friendly starting point, if you are new to React, is the official React tutorial. There you will learn the basics of building a React application. You can then proceed to the React docs, especially "Lists and Keys" for your case. After that you can learn how to call APIs ("AJAX and APIs"). There are tons of resources out there, which can confuse you when you are getting started; sticking to one source until you are comfortable checking other resources can work wonders.
Related
We are a team of 5 developers working on a video rendering implementation. This implementation consists of two parts:
A live video preview in the browser using Angular + Konva.
A Node.js (Node 14) serverless (AWS Lambda container) implementation using konva-node that pipes frames to ffmpeg to render an mp4 video in higher quality for later download.
Both ways work for us. We then extracted the parts of the animation that are the same for the frontend and backend implementations into an internal library and imported it in both the BE and FE. That also works nicely for the most part.
We noticed that konva-node was recently deprecated. The documentation says to use canvas + konva instead on Node.js, but this just doesn't work. Without konva-node we cannot create a stage without a 'container' value, and we can no longer create a raw image buffer, because stage.toCanvas() actually returns an HTMLCanvas, which does not have this functionality.
So what does konva-node actually do to the Konva API?
Is Node.js still supported after the deprecation of konva-node?
How can we get toBuffer() and new Stage() functionality without konva-node in Node.js?
backend (konva-node)
import konvaNode = require('konva-node');

this.stage = new konvaNode.Stage({
  width: stageSize.width,
  height: stageSize.height
});

// [draw stuff on stage here]

// create raw frames to pipe to ffmpeg
const frame = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
frontend (konva)
import Konva from 'konva';

this.stage = new Konva.Stage({
  width: stageSize.width,
  height: stageSize.height,
  // connect stage to html element in browser
  container: 'container'
});

// [draw stuff on stage here]
// [draw stuff on stage here]
Finally, in an ideal world (if we could just use Konva in frontend and backend without konva-node), the following should be possible for shared code.
loading images
public static loadKonvaImage(element, canvas): Promise<any> {
  return new Promise(resolve => {
    let image;
    if (canvas) {
      // node.js canvas image
      image = new canvas.Image();
    } else {
      // html browser image
      image = new Image();
    }
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Many props to the developer for the good work. We would look forward to using the library for a long time, but how can we if core functionality that we rely on is deprecated shortly after we started the project?
Another Stack Overflow answer mentioned Konva.isBrowser = false;. Maybe this is used to differentiate between a browser and a Node canvas?
So what does konva-node actually do to the Konva API?
It slightly patches the Konva code to use the canvas Node.js library for the 2D canvas API, so Konva does not rely on the browser DOM API.
Is Node.js still supported after the deprecation of konva-node?
Yes. https://github.com/konvajs/konva#4-nodejs-env
How can we get toBuffer() and new Stage() functionality without konva-node in Node.js?
You can try to use this:
const canvas = layer.getNativeCanvasElement();
const buffer = canvas.toBuffer();
We solved the problems we had in the following way:
create stage (shared between BE+FE)
public static createStage(stageWidth: number, stageHeight: number, canvas?: any): Konva.Stage {
  const stage = new Konva.Stage({
    width: stageWidth,
    height: stageHeight,
    container: canvas ? canvas : 'container'
  });
  return stage;
}
create raw image buffer (BE)
const frame: any = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
loading images (shared between BE+FE)
public static loadKonvaImage(element, canvas?: any): Promise<Konva.Image> {
  return new Promise(resolve => {
    const image = canvas ? new canvas.Image() : new Image();
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Two things we had to do:
We rewrote our whole backend and library code to use ESM modules, and we got rid of konva-node and Konva 7 in general.
We typed the Node module canvas as any everywhere. It seems that Konva accepts more inputs than specified in the type interfaces of its classes. canvas is only installed in the backend and passed into some library methods, as shown above.
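For illustration, the backend then uses these helpers roughly like this (an untested sketch: SharedLib and element are hypothetical names, the exact container value may differ, and the ffmpeg piping is omitted):

const canvas = require('canvas'); // node-canvas, installed in the BE only
const Konva = require('konva');

async function renderFrame() {
  // pass a node-canvas element as the stage container,
  // and the node-canvas module to the image loader
  const stage = SharedLib.createStage(1280, 720, canvas.createCanvas(1280, 720));
  const layer = new Konva.Layer();
  stage.add(layer);

  const element = { url: 'image.png', width: 200, height: 100 }; // hypothetical
  const konvaImage = await SharedLib.loadKonvaImage(element, canvas);
  layer.add(konvaImage);
  layer.draw();

  // create a raw frame to pipe to ffmpeg, as in the snippet above
  const frame = await stage.toCanvas();
  return frame.toBuffer('raw');
}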
@lavrton, nice to hear from you. Your answer might also work for getting the Buffer, but it doesn't cover how to create the stage. Luckily we found a solution for both issues.
I'm using Next.js for my side project. I have a PostgreSQL database hosted on ElephantSQL. Inside the Next.js project, I have a GraphQL API set up, using the apollo-server-micro package.
Inside the file where the GraphQL API is set up (/api/graphql), I import a database helper module. Inside that, I set up a pool connection and export a function which uses a client from the pool to execute a query and return the result. It looks something like this:
// import node-postgres module
import { Pool } from 'pg'

// set up pool connection using environment variables with a maximum of three active clients at a time
const pool = new Pool({ max: 3 })

// query function which uses the next available client to execute a single query and return results on success
export async function queryPool(query) {
  let payload
  try {
    // pool.query checks out a client, runs the query, and releases the client
    const res = await pool.query(query)
    payload = res.rows
  } catch (e) {
    console.error(e)
  }
  return payload
}
The problem I'm running into is that it appears as though the Next.js API doesn't (always) keep the connection alive, but rather opens up a new one (either for every connected user or maybe even for every API query), which results in the database quickly running out of connections.
I believe that what I'm trying to achieve is possible for example in AWS Lambda (by setting context.callbackWaitsForEmptyEventLoop to false).
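Roughly like this, in a hypothetical handler (not my actual code):

// Hypothetical AWS Lambda handler illustrating the flag
const { Pool } = require('pg');
const pool = new Pool({ max: 3 });

exports.handler = async (event, context) => {
  // return without waiting for the pool's idle connections to close,
  // so the pool can be reused by the next invocation of this container
  context.callbackWaitsForEmptyEventLoop = false;
  const res = await pool.query('SELECT NOW()');
  return { statusCode: 200, body: JSON.stringify(res.rows[0]) };
};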
It is very possible that I don't have a proper understanding of how serverless functions work, and this might not be possible at all, but maybe someone can suggest a solution.
I have found a package called serverless-postgres, and I wonder if that might solve it, but I'd prefer to use the node-postgres package instead, as it has much better documentation. Another option would probably be to move away from the integrated API functionality entirely and build a dedicated backend server which maintains the database connection, but obviously this would be a last resort.
I haven't stress-tested this yet, but it appears that the MongoDB Next.js example solves this problem by attaching the database connection to global in a helper function. The important bit in their example is here.
Since the pg connection is a bit more abstract than mongodb, it appears this approach just takes a few lines for us pg enthusiasts:
// eg, lib/db.js
const { Pool } = require("pg");

if (!global.db) {
  global.db = { pool: null };
}

export function connectToDatabase() {
  if (!global.db.pool) {
    console.log("No pool available, creating new pool.");
    global.db.pool = new Pool();
  }
  return global.db;
}
then in, eg, our API route, we can just:
// eg, pages/api/now
export default async (req, res) => {
  const { pool } = connectToDatabase();
  try {
    const time = (await pool.query("SELECT NOW()")).rows[0].now;
    res.end(`time: ${time}`);
  } catch (e) {
    console.error(e);
    res.status(500).end("Error");
  }
};
I have a simple Node.js app that parses a YouTube URL, stores a user's watch history, and lets them bookmark videos. I store the watch history and bookmarks in localStorage.
I need to provide two routes in Node.js: one to list the current history, and one to save a URL into a history table.
I have no back-end experience, but have managed to create the app using Node.js. Any suggestions are welcome!
Relevant code:
// parse youtube url and change iframe src
function loadVideo(videoURL) {
  // split youtube url and get ID
  var videoID = videoURL.split("v=")[1];
  // change iframe src
  videoSource = `https://www.youtube.com/embed/${videoID}`;
  videoPlayer.src = videoSource;
  logHistory(videoSource);
}

// create list item element
function logHistory(videoSource) {
  var newHistory = document.createElement("li");
  newHistory.classList.add("history-item");
  newHistory.innerText = videoSource;
  historyContainer.appendChild(newHistory);
  // store element in localStorage
  var historyCount = historyContainer.children.length;
  localStorage.setItem("History item " + historyCount, videoInput.value);
}
LocalStorage is a frontend concept: it is accessible to JavaScript running in the browser. Node.js resides on the backend and cannot access localStorage directly.
You'd need to access it through the scripts running in the frontend, which can communicate with the backend using Ajax requests.
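For example, a minimal sketch of the two routes the question asks for, using Express (the route paths and the in-memory history array are assumptions for illustration; a real app would persist to a database):

// server.js: hypothetical Express app exposing the two routes
const express = require("express");
const app = express();
app.use(express.json());

const history = []; // stand-in for a real history table

// list the current history
app.get("/history", (req, res) => {
  res.json(history);
});

// save a URL into the history
app.post("/history", (req, res) => {
  history.push(req.body.url);
  res.status(201).json({ saved: req.body.url });
});

app.listen(3000);

The frontend script would then call these routes with Ajax, e.g. fetch('/history', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ url: videoSource }) }).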
Note: this answer makes use of socket.io, which was not in the question asked, but it allows the Node server to communicate with client-side scripts.
I'm going to share an example from an app I've created where I used Node.js [and socket.io] and also local storage. Much like the others said, this cannot be done directly from your Node.js server.
This code is from my server.js (Node.js server) file.
function setStartingPlayer(startingPlayer, privateUsers) {
  // send starting player name and current players to the clients
  io.emit('set-starting-player', startingPlayer, privateUsers);
}
It emits to the sockets two pieces of data: startingPlayer & privateUsers. In this example startingPlayer is a player's user name and privateUsers is an array of names of current players.
In the client script there is a function like this:
socket.on('set-starting-player', function(name, privateUsers) {
  var username = getStorage('username');
  setStorage('turnIndicator', username);
  if (name === username) {
    console.log('I am the starting player. My name is ' + username);
  }
  let playerInGame = checkPrivateUsers(privateUsers);
  if (playerInGame === true) {
    updateTurnIndicator(name);
  }
});
This function accesses local storage in two ways: it stores data to local storage, and it gets data from local storage. This is an example of how you can pass data from Node.js to a client script and access local storage.
Worth noting:
I have two JavaScript helper functions; this is how I access local storage.
function setStorage(key, info) {
  localStorage.setItem(key, JSON.stringify(info));
}

function getStorage(key) {
  var item = localStorage.getItem(key);
  return JSON.parse(item);
}
If you have questions let me know. Hope this helps.
I'm pretty new to web dev and AngularJS. I'm trying to figure out how to use services and I'm following this tutorial: http://scotch.io/bar-talk/setting-up-a-mean-stack-single-page-application
How does the service connect with the controller? Is this done implicitly? I understand that you can inject the service into the controller, but how is it being done in the tutorial?
You inject your service into your controller, like this (see below). The reasoning behind services is that you want to keep your controller as 'skinny' as possible; all heavy logic/requests should be outsourced to the service.
app.service('myService', function() {
  this.name = 'Tyler';
});

app.controller('myCtrl', function($scope, myService) {
  $scope.name = myService.name;
});
Another benefit of using a service is that you can inject that service into more than one controller. A good example is a service that makes an HTTP request: instead of recreating the same code in every controller to make the request, you can simply create a service that does the request and inject it into every controller where you need that functionality, as sketched below.
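For example, a minimal sketch (the service name, URL, and controllers below are made up for illustration):

// Hypothetical service wrapping an HTTP request, shared by two controllers
app.service('userService', function($http) {
  this.getUsers = function() {
    return $http.get('/api/users').then(function(res) {
      return res.data;
    });
  };
});

app.controller('listCtrl', function($scope, userService) {
  userService.getUsers().then(function(users) {
    $scope.users = users;
  });
});

app.controller('countCtrl', function($scope, userService) {
  userService.getUsers().then(function(users) {
    $scope.count = users.length;
  });
});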
Edit: to answer your question, you need to be sure to place the service and the controller on the same 'module'. Meaning, in your HTML you have something like this:
<body ng-app="myApp">
That's telling the whole BODY that whatever is nested inside it belongs to the 'myApp' app. Then you usually have an app.js file that has something like this:
var app = angular.module('myApp', []);
Notice that angular.module takes two parameters: you're telling Angular to create a new app called 'myApp' (which coincides with your HTML).
Then in your controller, service, and directive files you'll have something like this at the top:
var app = angular.module('myApp');
Notice this one only takes one parameter, the name of the app. You're telling Angular that instead of creating a new app, you're going and 'getting' the one you already built. You'll then stick your controllers, directives, and services on this app, and as long as things are on the same app, you'll be able to inject them.
Another edit, to answer your comment:
In the tutorial they're doing it a little differently: they're creating new modules for every controller, service, etc. It's not bad, just different. Doing it this way confuses me, so I just prefer to stick everything under one module. In the tutorial, this is the line that's gluing it all together:
// public/js/app.js
angular.module('sampleApp', ['ngRoute', 'appRoutes', 'MainCtrl', 'NerdCtrl', 'NerdService', 'GeekCtrl', 'GeekService']);
They have a sampleApp, and all the other modules they build are injected into the main sample app.
A service is meant to be accessible from all controllers. A service is a constructor function, and every controller can read from or write to it. To use a service, you call it this way:
var app = angular.module('myApp', []);

app.service('sharedProperties', function() {
  var stringValue = 'test string value';
  var objectValue = {
    data: 'test object value'
  };

  return {
    getString: function() {
      return stringValue;
    },
    setString: function(value) {
      stringValue = value;
    },
    getObject: function() {
      return objectValue;
    }
  };
});
app.controller('myController1', function($scope, sharedProperties) {
  $scope.setOnController1 = function() {
    $scope.stringValue = sharedProperties.getString();
    $scope.objectValue = sharedProperties.getObject().data;
  };
});

app.controller('myController2', function($scope, sharedProperties) {
  $scope.stringValue = sharedProperties.getString;
  $scope.objectValue = sharedProperties.getObject();
  $scope.setString = function(newValue) {
    $scope.objectValue.data = newValue;
    sharedProperties.setString(newValue);
    // some code to set values on screen at controller1
  };
});
Here is the JSFiddle: http://jsfiddle.net/b2fCE/228/
I've written a RESTful Node.js service as a backend for http://www.cross-copy.net and would like to track not only usage of the web client but also of other clients (like command-line tools or apps) which use the service for inter-device copy/paste. Is it possible to embed the Google Analytics JavaScript API into a Node.js application and do server-side tracking?
Since all of the answers are really old, I will mention a new npm package:
https://www.npmjs.com/package/universal-analytics
It's really great and incredibly easy to use.
Install universal-analytics:
npm install universal-analytics --save
In your routes file, require the module. (Replace process.env.GA_ACCOUNT with a string like 'UA-12345678-1'.)
// Init GA client
var ua = require('universal-analytics');
var visitor = ua(process.env.GA_ACCOUNT);
Now inside your endpoint functions, you can track a pageview. (Replace request.url with the current URL string, like '/api/users/1'.)
// Track pageview
visitor.pageview(request.url).send();
Read the documentation on UA for more info on this module.
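For instance, event tracking follows the same pattern (the category/action/label strings below are arbitrary examples):

// Track a custom event
visitor.event('copy', 'inter-device', 'cli-client').send();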
As Brad rightfully said, there was nothing for Node... so I wrote a Node.js module tailored for this over the last few days and just published it on npm: node-ga
The module is still really new (I'm barely trying it in production on a pet project), so don't hesitate to give your input :)
You won't be able to just drop ga.js into your Node project. It has to be loaded in a browser to function correctly.
I don't believe there is anything out there for Node yet (correct me if I'm wrong!), but you should be able to easily adapt the existing PHP classes for doing logging server-side:
https://developers.google.com/analytics/devguides/collection/other/mobileWebsites
You can see how the URL to request the tracking GIF is constructed within ga.php. Translate ga.php to JS and you're set; a rough sketch of such a translation follows the PHP snippet below.
$utmGifLocation = "http://www.google-analytics.com/__utm.gif";

// Construct the gif hit url.
$utmUrl = $utmGifLocation . "?" .
    "utmwv=" . VERSION .
    "&utmn=" . getRandomNumber() .
    "&utmhn=" . urlencode($domainName) .
    "&utmr=" . urlencode($documentReferer) .
    "&utmp=" . urlencode($documentPath) .
    "&utmac=" . $account .
    "&utmcc=__utma%3D999.999.999.999.999.1%3B" .
    "&utmvid=" . $visitorId .
    "&utmip=" . getIP($_SERVER["REMOTE_ADDR"]);
I tried out node-ga, but didn't get event tracking to work. nodealytics did the job.
See Core Reporting API Client Libraries & Sample Code (v3).
There is also the following version: Google APIs Client Library for Node.js (alpha).
I wrote a script to query data with Node.js from Google's Analytics Core Reporting API (v3). The script and a detailed setup description are available here.
Here is the script part:
'use strict';

var googleapi = require('googleapis');
// service account key file downloaded from the Google API console
var ApiKeyFile = require('./mywebsiteGAapi-6116b1dg49a1.json');
var viewID = 'ga:123456700';

function getdefaultObj(obj) { return obj && obj.__esModule ? obj : { default: obj }; }

var google = getdefaultObj(googleapi);
var Key = getdefaultObj(ApiKeyFile);

var jwtClient = new google.default.auth.JWT(Key.default.client_email, null, Key.default.private_key, ['https://www.googleapis.com/auth/analytics.readonly'], null);

jwtClient.authorize(function (err, tokens) {
  if (err) {
    console.log(err);
    return;
  }
  var analytics = google.default.analytics('v3');
  queryData(analytics);
});

function queryData(analytics) {
  analytics.data.ga.get({
    'auth': jwtClient,
    'ids': viewID,
    'metrics': 'ga:users,ga:pageviews',
    'start-date': 'yesterday',
    'end-date': 'today',
  }, function (err, response) {
    if (err) {
      console.log(err);
      return;
    }
    console.log(JSON.stringify(response, null, 4));
  });
}