How to play video from m3u8 (HLS) - pixi.js

I downloaded https://pixijs.io/examples/#/sprite/video.js and of course it worked fine. I'd like to play an m3u8 though, such as https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8. I tried modifying the texture creation in a few different ways:
const texture = PIXI.Texture.from('https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8');
const texture = PIXI.VideoBaseTexture.fromUrl('https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8');
const texture = PIXI.VideoBaseTexture.fromUrl({ src: 'https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8', mime: 'application/vnd.apple.mpegurl' });
None of these worked though (using Chrome on Windows), so how exactly should I modify the sample program to get it to play this m3u8?

I was having the same problem: some m3u8 URLs would not play in a plain HTML video tag.
I found the library below (hls.js), which worked fine with my code.
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<!-- Or if you want a more recent canary version -->
<!-- <script src="https://cdn.jsdelivr.net/npm/hls.js@canary"></script> -->
<video id="video"></video>
<script>
var video = document.getElementById('video');
if (Hls.isSupported()) {
  var hls = new Hls();
  hls.loadSource('https://video-dev.github.io/streams/x36xhzz/x36xhzz.m3u8');
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, function () {
    video.play();
  });
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // fallback for browsers with native HLS support (e.g. Safari)
  video.src = 'https://video-dev.github.io/streams/x36xhzz/x36xhzz.m3u8';
  video.addEventListener('loadedmetadata', function () {
    video.play();
  });
}
</script>
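To tie this back to the original pixi.js question: once hls.js is feeding a <video> element, that element can be handed to Pixi directly. A minimal sketch, assuming Pixi v5+ (where PIXI.Texture.from accepts an HTMLVideoElement) and an already-created PIXI.Application named app:
// Hedged sketch: wrap the hls.js-backed <video> element in a Pixi texture.
const video = document.getElementById('video');
const hls = new Hls();
hls.loadSource('https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function () {
  video.play();
  const texture = PIXI.Texture.from(video); // accepts a video element in Pixi v5+
  const sprite = new PIXI.Sprite(texture);
  app.stage.addChild(sprite); // "app" is an existing PIXI.Application
});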
Full documentation is available in the hls.js repository.
Also, a brief note about m3u8 files worth sharing: an M3U8 file is a plain-text playlist (the UTF-8 variant of M3U) that HLS uses to list the variant streams and media segments the player should load.

Related

Why does a youtube-dl m3u8 URL in React give a CORS error?

I have a site that shows a live broadcast from YouTube.
The website is built with React and Node.js.
In the frontend I use video.js and hls.js to view a live broadcast from M3U8 links, and it works for me with a sample link:
https://multiplatformf.akamaihd.net/i/multi/will/bunny/big_buck_bunny_,640x360_400,640x360_700,640x360_1000,950x540_1500,.f4v.csmil/master.m3u8
In Node.js (backend) I use the youtube-dl module to create/get an M3U8 link, which I send to React and put in the src value of the video HTML tag.
react component code:
let videoSrc = data.url;
// assumes a <video id="video"> element exists in the component's markup
const video = document.getElementById('video');
if (Hls.isSupported()) {
  var hls = new Hls();
  hls.loadSource(videoSrc);
  hls.attachMedia(video);
}
node js code:
let video = youtubedl(youtubeUrl, []); // youtubeUrl: the YouTube broadcast page URL
let stream = this;
video.on('info', function (info) {
  console.log(info.url);
  stream.updateYoutubeStream(info.url); // personal function to update the m3u8 url
}).on('error', err => {
  console.log(err);
});
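For reference, the Node-to-React handoff described above ("I send it to react and put it in the src value") is not shown in the question; a rough sketch of that step, assuming an Express server (the /stream-url route and the in-memory variable are hypothetical, not from the question):
// Hedged sketch: expose the youtube-dl m3u8 URL to the React frontend.
const express = require('express');
const app = express();

let currentM3u8Url = null;

// stands in for the personal updateYoutubeStream function mentioned above
function updateYoutubeStream(url) {
  currentM3u8Url = url;
}

// React fetches this endpoint and passes data.url to hls.loadSource(...)
app.get('/stream-url', (req, res) => {
  res.json({ url: currentM3u8Url });
});

app.listen(3001);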
But this setup gives me a CORS error.
The stream is unable to connect.
What is the problem? And how can it be fixed?
Thank you very much.

React: add HTML from generic file path at server-side build time

The use case I'm trying to fulfill:
Admin adds SVG along with new content in CMS, specifying in the CMS which svg goes with which content
CMS commits change to git (Netlify CMS)
Static site builds again
SVG is added inline so that it can be styled and/or animated according to the component in which it occurs
Now I can't figure out a clean way to add the SVG inline. Everything is available at build time (the SVGs are in the repo), so I should be able to simply inline them. But I don't know how to generically tell React about an SVG based on variables coming from the CMS content. I can import an SVG directly using svgr/webpack, but then I need to know the file name while coding, which I don't, since it comes from the CMS. I can load the SVG using fs.readFileSync, but then it gets lost when React renders client-side.
I added my current solution as an answer, but it's very hacky. Please tell me there's a better way to do this with react!
Here is my current solution, but it's randomly buggy in dev mode and doesn't seem to play well with next.js <Link /> prefetching (I still need to debug this):
I. Server-Side Rendering
Read SVG file path from CMS data (Markdown files)
Load SVG using fs.readFileSync()
Sanitize and add the SVG in React
II. Client-Side Rendering
Initial GET /url response contains the SVGs (SSR worked as intended)
Read the SVGs out of the DOM using HTMLElement.outerHTML
When React wants to render the SVG which it doesn't have, pass it the SVG from the DOM
Here is the code.
import reactParse from "html-react-parser";
import DOMPurify from "isomorphic-dompurify";
import * as fs from "fs";
const svgs = {}; // { urlPath: svgCode }

const createServerSide = (urlPath) => {
  let path = "./public" + urlPath;
  let svgCode = DOMPurify.sanitize(fs.readFileSync(path, "utf8"));
  // add id to find the SVG client-side;
  // the unique identifier is the file path of the svg in the git repository
  svgCode = svgCode.replace("<svg", `<svg id="${urlPath}"`);
  svgs[urlPath] = svgCode;
};

const readClientSide = (urlPath) => {
  let svgElement = document.getElementById(urlPath);
  let svgCode = svgElement.outerHTML;
  svgs[urlPath] = svgCode;
};

const registerSVG = (urlPath) => {
  if (typeof window === "undefined") {
    createServerSide(urlPath);
  } else {
    readClientSide(urlPath);
  }
  return true;
};

const inlineSVGFromCMS = (urlPath) => {
  if (!svgs[urlPath]) {
    registerSVG(urlPath);
  }
  return reactParse(svgs[urlPath]);
};

export default inlineSVGFromCMS;
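For completeness, a hypothetical usage from a page component (the component and prop names below are mine, not part of the original setup):
import inlineSVGFromCMS from "./inlineSVGFromCMS";

// urlPath comes from the CMS content, e.g. "/svg/hero-illustration.svg"
const CmsIllustration = ({ urlPath }) => (
  <div className="illustration">{inlineSVGFromCMS(urlPath)}</div>
);

export default CmsIllustration;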

Using konva on a nodejs backend without konva-node

We are a team of 5 developers working on a video rendering implementation. This implementation consists out of two parts.
A live video preview in the browser using angular + konva.
A node.js (node 14) serverless (AWS lambda container) implementation using konva-node that pipes frames to ffmpeg for rendering a mp4 video in higher quality for later download.
Both approaches are working for us. We extracted the parts of the animation that are shared between the frontend and backend implementations into an internal library and imported it in both. That also works nicely for the most part.
We noticed that konva-node was deprecated a short time ago. The documentation says to use canvas + konva on Node.js instead, but this just doesn't work for us: without konva-node we cannot create a stage without a 'container' value, and we can no longer create a raw image buffer, because stage.toCanvas() actually returns an HTMLCanvas, which does not have that functionality.
So what does konva-node actually do to konva API?
Is node.js still supported after deprecation of konva-node?
How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
backend (konva-node)
import konvaNode = require('konva-node');
this.stage = new konvaNode.Stage({
  width: stageSize.width,
  height: stageSize.height
});
// [draw stuff on stage here]
// create raw frames to pipe to ffmpeg
const frame = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
frontend (konva)
import Konva from 'konva';
this.stage = new Konva.Stage({
  width: stageSize.width,
  height: stageSize.height,
  // connect stage to html element in browser
  container: 'container'
});
// [draw stuff on stage here]
Finally, in an ideal world (if we could just use Konva in frontend and backend without konva-node), the following should be possible as shared code.
loading images
public static loadKonvaImage(element, canvas): Promise<any> {
  return new Promise(resolve => {
    let image;
    if (canvas) {
      // node.js canvas image
      image = new canvas.Image();
    } else {
      // html browser image
      image = new Image();
    }
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Many props to the developer for the good work. We would love to use the library for a long time, but how can we if core functionality that we rely on becomes outdated shortly after we start the project?
Another Stack Overflow answer mentioned Konva.isBrowser = false;. Maybe this is used to differentiate between a browser and a Node canvas?
So what does konva-node actually do to konva API?
It slightly patches the Konva code to use the canvas Node.js library for the 2D canvas API, so Konva will not use browser DOM APIs.
Is node.js still supported after deprecation of konva-node?
Yes. https://github.com/konvajs/konva#4-nodejs-env
How can we get toBuffer() and new Stage() functionality without konva-node in node.js?
You can try to use this:
const canvas = layer.getNativeCanvasElement();
const buffer = canvas.toBuffer();
We solved the problems we had in the following way:
create stage (shared between Be+FE)
public static createStage(stageWidth: number, stageHeight: number, canvas?: any): Konva.Stage {
  const stage = new Konva.Stage({
    width: stageWidth,
    height: stageHeight,
    container: canvas ? canvas : 'container'
  });
  return stage;
}
create raw image buffer (BE)
const frame: any = await this.stage.toCanvas();
const buffer: Buffer = frame.toBuffer('raw');
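The question mentions piping these raw frames to ffmpeg but does not show that step; a rough sketch, under the assumptions that ffmpeg is on the PATH and that node-canvas' 'raw' output is premultiplied BGRA (which matches -pix_fmt bgra on little-endian machines):
// Hedged sketch: spawn ffmpeg and write each frame buffer to its stdin.
import { spawn } from 'child_process';

const ffmpeg = spawn('ffmpeg', [
  '-y',
  '-f', 'rawvideo',
  '-pix_fmt', 'bgra',
  '-s', `${stageSize.width}x${stageSize.height}`,
  '-r', '30',                 // frames per second of the animation
  '-i', 'pipe:0',             // read raw frames from stdin
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  'out.mp4',
]);

// for every rendered frame:
ffmpeg.stdin.write(buffer);

// after the last frame has been written:
ffmpeg.stdin.end();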
loading images (shared between Be+FE)
public static loadKonvaImage(element, canvas?: any): Promise<Konva.Image> {
  return new Promise(resolve => {
    const image = canvas ? new canvas.Image() : new Image();
    image.src = element.url;
    image.onload = function () {
      const konvaImage = new Konva.Image(
        { image, width: element.width, height: element.height });
      konvaImage.cache();
      resolve(konvaImage);
    };
  });
}
Two things we had to do:
We rewrote our whole backend and library code to use ESM modules and got rid of konva-node and Konva 7 in general.
We typed the Node canvas module as any in all the places where it is used. Konva seems to accept more inputs than its type interfaces specify; canvas is only installed in the backend and injected into some library methods as shown above.
@lavrton nice to hear from you. Your answer might also work for getting the Buffer, but it didn't cover how to create the stage. Luckily we found a solution for both issues.

Electron PDF viewer

I have an Electron app that loads a URL from a PHP server, and the page contains an iframe whose source is a PDF. The PDF page looks absolutely fine in a normal web browser but prompts a download in Electron. Any help?
My code for the HTML page is
<h1>Hello World!</h1>
Some html content here...
<iframe src="http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf" width="1200" height="800"></iframe>
And my js code is something like
const { app, BrowserWindow } = require('electron')
const path = require('path')
const url = require('url')

function createWindow () {
  mainWindow = new BrowserWindow({width: 800, height: 600})
  mainWindow.loadURL(url.format({
    pathname: path.join(__dirname, 'index.html'),
    protocol: 'file:',
    slashes: true
  }))
}

app.on('ready', createWindow)
Any help would be really appreciated...
Electron already ships with an integrated PDF viewer.
So you can load PDF files just like normal HTML files and the PDF viewer will automatically show up:
e.g. in a BrowserWindow with .loadURL(…), in <iframe>s, in <object>, and also in the (currently discouraged) <webview>.
PS: Enabling the plugins property on the BrowserWindow or <webview> is no longer needed since Electron 9.
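A minimal sketch of the built-in viewer in action (Electron 9+; URL taken from the question):
const { app, BrowserWindow } = require('electron')

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1200, height: 800 })
  // the integrated PDF viewer renders this URL instead of downloading it
  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})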
You will need
https://github.com/gerhardberger/electron-pdf-window
Example:
const { app } = require('electron')
const PDFWindow = require('electron-pdf-window')
app.on('ready', () => {
  const win = new PDFWindow({
    width: 800,
    height: 600
  })
  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})
This answer will focus on implementation with Angular.
After a year of waiting (for this to be solved in Electron), I finally decided to apply a workaround. For the people who need it done, here it goes. The workaround comes at the cost of increasing the bundle size by roughly 500 KB (for Angular).
The workaround is to use Mozilla's PDF.js library (available on npm and GitHub).
Implementation 1 (Setting nodeIntegration: true)
This implementation has no issues; you can follow the library's documentation. But if you run into an additional problem, like a white window appearing when the route changes, it is due to setting the nodeIntegration property to true. If so, use the following implementation.
Implementation 2 (Setting nodeIntegration: false)
This is the Electron default. With this configuration, viewing the PDF is a bit tricky. The solution is to use a Uint8Array instead of a blob or base64 string.
You can use the following function to convert base64 to Uint8Array.
base64ToArrayBuffer(data): Uint8Array {
  const input = data.substring(data.indexOf(',') + 1);
  const binaryString = window.atob(input ? input : data);
  const binaryLen = binaryString.length;
  const bytes = new Uint8Array(binaryLen);
  for (let i = 0; i < binaryLen; i++) {
    const ascii = binaryString.charCodeAt(i);
    bytes[i] = ascii;
  }
  return bytes;
}
Or convert blob to array buffer
const blob = response;
let arrayBuffer = null;
arrayBuffer = await new Response(blob).arrayBuffer();
Then pass the generated Uint8Array as the pdfSource to ng2-pdfjs-viewer.
HTML
<ng2-pdfjs-viewer zoom="100" [pdfSrc]="pdfSource"></ng2-pdfjs-viewer>
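For context, a hypothetical sketch of the blob path wired into an Angular component (assumes HttpClient is injected as this.http and pdfSource is the field bound to [pdfSrc] above; these names are mine, not from the original answer):
// Hedged sketch: fetch the PDF as a blob and hand the bytes to ng2-pdfjs-viewer.
loadPdf(url: string): void {
  this.http.get(url, { responseType: 'blob' }).subscribe(async (blob: Blob) => {
    const arrayBuffer = await new Response(blob).arrayBuffer();
    this.pdfSource = new Uint8Array(arrayBuffer); // consumed by [pdfSrc] in the template
  });
}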
Electron 9.0.0 has the PDF viewer enabled already.
npm install electron@9.0.0

Playing Flash audio with JavaScript

I just need some direction on how to go about playing an audio clip on an on click from Javascript.
In order to talk back and forth between Flash and JavaScript, you should use the flash.external.ExternalInterface class and callbacks.
Flash as3
import flash.external.ExternalInterface;
import flash.media.Sound;
import flash.net.URLRequest;

//Expose the JavaScript-callable "playSound" function on the swf object
ExternalInterface.addCallback("playSound", playSound);

//Create our sound object
var sound:Sound = new Sound();

//Load my.mp3 into our sound object
sound.load(new URLRequest("my.mp3"));

function playSound():void {
    sound.play();
}
HTML Page
<script language="javascript">
var swf;
//Wait for page load to complete
window.onload = init;
//initialize our swf variable where mySWF is the id of the swf object on the page
function init() {
    swf = document.getElementById("mySWF");
}
//call our external function on the swf
function playSound() {
    swf.playSound();
}
</script>
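Note that document.getElementById("mySWF") assumes the SWF is embedded on the page with that id; a minimal embed sketch (player.swf is a placeholder name for your compiled SWF):
<!-- Hedged sketch: embed the SWF so it is reachable from JavaScript as "mySWF" -->
<object id="mySWF" type="application/x-shockwave-flash" data="player.swf" width="1" height="1">
    <param name="movie" value="player.swf" />
</object>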
Forgive me for any errors, the code is untested but should give you the right idea.
