I don't know how to play video with the Phaser 3 framework. I tried a few approaches, but none of them worked. Can anyone help me, please?
this.load.video('myVideo1', 'assets/video/video1.mp4');
For Phaser 3, video support requires v3.20 or later: https://phaser.io/download/release/3.20.0
function preload () {
  this.load.video('video', '../assets/videos/video.webm');
}

function create () {
  this.video = this.add.video(0, 0, 'video');
  this.video.play();
}
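Since the version requirement is the usual stumbling block here, a small runtime guard can fail fast on older builds. This is just a sketch; `Phaser.VERSION` is the framework's own version string (e.g. "3.55.2"), and `supportsVideo` is a name I made up:

```javascript
// Minimal version check: video support needs Phaser 3.20 or later.
function supportsVideo(versionString) {
  const [major, minor] = versionString.split('.').map(Number);
  return major > 3 || (major === 3 && minor >= 20);
}

// e.g. in your boot code:
// if (!supportsVideo(Phaser.VERSION)) console.warn('Upgrade Phaser to >= 3.20');
```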
Problem: I made a simple Electron project on a Raspberry Pi. It is supposed to play an audio file, but somehow it does not play.
My Efforts
I already tried lots of other audio files.
I removed node_modules, Node.js, and npm from my Raspberry Pi 3B+ and reinstalled them.
I also tried several Electron versions: 9.0.0, 9.3.1, 10.0.0.
I updated and upgraded my Raspberry Pi.
I wrote a simple Node.js script using the play-sound npm module, and it played the audio file, but the same code did not play anything in my Electron application.
I also tried the code snippet below, triggered by a button click.
I also used AudioContext and AudioBuffer to play the sound, but that failed as well.
Electron versions: ^9.0.0, ^9.3.1
Node.js version: 10.19.0
Electron version: 5.8.0
Device: Raspberry Pi 3B+
My Audio Code
var audio = new Audio('beep.mp3');
audio.volume = 1.0;

setInterval(() => {
  audio.play();
}, 1000);
Simple Node.js script using the play-sound npm module
var player = require('play-sound')({});

setInterval(() => {
  player.play('beep.mp3', function (err) {
    console.log('Err :', err);
  });
}, 1000);
Description: The code above works perfectly in plain Node.js, but it does not work in Electron.
Electron index.js (main process)
const { app, BrowserWindow } = require('electron');

function createWindow () {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: true
    }
  });

  win.loadFile('index.html');
  win.webContents.openDevTools();
}

app.whenReady().then(createWindow);

app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit();
  }
});

app.on('activate', () => {
  if (BrowserWindow.getAllWindows().length === 0) {
    createWindow();
  }
});
Electron index.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Audio Play</title>
  </head>
  <body>
    <script>
      var audio = new Audio('beep.mp3');
      audio.volume = 1.0;
      setInterval(() => {
        audio.play();
      }, 1000);
    </script>
  </body>
</html>
Conclusion:
My guess is that Electron does not have permission to play the audio file.
I would start by visiting https://chinmay.audio/decodethis/ to see what kinds of audio files your app can play. Or you can try uploading the audio file to http://codepen.io/janmonschke/pen/OJyMQXd to see if WebAudio can decode and play it.
When I try the "playing audio streaming audio data" example from the naudiodon library, I only get noise from the speaker. I'm interested in how to get real sound out of an app (for example when playing music from YouTube). I also wonder whether, in my case, the sound is then saved in stream4800.wav?
I wonder which dependencies I need for this project.
When I just record from a microphone with inOptions: {} I get a successfully saved stream (sound). But when I want to get the sound out of the speakers with outOptions: {}, the story becomes unclear to me.
Here is an example of my code:
const portAudio = require('naudiodon');
const wav = require('wav');
const fs = require('fs'); // fs was used below but never required

const ao = new portAudio.AudioIO({
  outOptions: {
    channelCount: 2,
    sampleFormat: portAudio.SampleFormat64Bit,
    sampleRate: 44100,
  }
});

const name = 'stream4800.wav';
const file = fs.createReadStream(`./${name}`);
const reader = new wav.Reader();

ao.start();
reader.on('data', chunk => ao.write(chunk));
file.pipe(reader);
Thanks for any help.
Hi, please check whether the audio file (stream4800.wav) is mono or stereo. I would recommend using a stereo file with a matching sampleRate; that should help you out.
I've been working for a few weeks now on a Discord bot that basically compiles stats on the server and deduces patterns. In order to improve it, I wanted to make it generate graphs as PNGs to send back to the user; in short, no DOM.
To achieve this, I'm currently using vega (version 5.10.1, latest) and node-canvas (version 2.6.1, latest), with Node.js v12.16.1.
I've been scouring the web for help on vega usage and found a couple of contradicting sources. I've been using the example code provided here:
https://vega.github.io/vega/usage/
The thing is that I keep getting this error:
TypeError: Cannot read property 'getContext' of null
message:"Cannot read property 'getContext' of null"
stack:"TypeError: Cannot read property 'getContext' of null
at resize (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-scenegraph\build\vega-scenegraph.js:3665:28)
at CanvasRenderer.prototype$6.resize (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-scenegraph\build\vega-scenegraph.js:3714:5)
at CanvasRenderer.prototype$4.initialize (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-scenegraph\build\vega-scenegraph.js:3294:17)
at CanvasRenderer.prototype$6.initialize (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-scenegraph\build\vega-scenegraph.js:3709:28)
at initializeRenderer (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-view\build\vega-view.js:657:8)
at renderHeadless (e:\DEV\GIT REPOS\GITHUB\PERSO\JS\test-StatBot\node_modules\vega-view\build\vega-view.js:780:12)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async View.renderToCanvas [as toCanvas] (e:\DEV\GIT REPOS\GITHUB\P...
Here is the code that is giving me trouble:
// Imports
const vega = require('vega');

// Render image from given graph spec (statsObject)
async function graphToImage (statsObject) {
  graphObject = new vega.View(vega.parse(statsObject), { renderer: 'none' });
  const pngName = generateFileName(10);
  removeExistingFile(pngName);
  graphObject.toCanvas().then(canvas => {
    console.log('Writing PNG to file...');
    writeFile(`../../../../generated/${pngName}.png`, canvas.toBuffer());
  }).catch(err => {
    console.log('Error writing PNG to file:');
    console.error(err);
  });
  return pngName;
}
I don't really know how canvas or vega work, so I have no idea what could be causing this issue or how to fix it. However, the problem seems to be located inside the toCanvas() method. Any help is much appreciated!
Thanks in advance!
// Using view.toSVG() along with the npm package sharp worked well for me
const vega = require('vega');
const sharp = require('sharp');

const view = new vega.View(vega.parse(templateObject), { renderer: 'none' });
view.toSVG().then(async function (svg) {
  await sharp(Buffer.from(svg))
    .toFormat('png')
    .toFile('fileName.png');
}).catch(function (err) {
  console.error(err);
});
Edit: I managed to fix my issue, and I am posting the answer here for future reference:
I succeeded in generating a graph image by rendering the View object straight to an SVG string, using view.toSVG() instead of the buggy view.toCanvas(), which worked great.
Then all that was left to do was convert the obtained SVG string into a PNG file, and that was it.
Here is the updated, working code :
// Imports
const vega = require('vega');

// Render image from given graph object
async function graphToImage (statsObject) {
  // Generate a new 10-char hex string
  const pngName = generateHexStringName(10);
  // Remove any existing file with the same name in order to prevent confusion
  removeExistingFile(pngName);

  var view = new vega.View(vega.parse(statsObject), { renderer: 'none' });

  // Generate an SVG string
  view.toSVG().then(async function (svg) {
    // Working SVG string
    console.log(svg);
    // Process the obtained SVG string, e.g. write it to a PNG file
  }).catch(function (err) {
    console.error(err);
  });

  // Return the name of the generated PNG file
  return pngName;
}
I've already made an app, but now I want it to play background audio in an endless loop when it launches. I know it's a dumb question, but I would be happy if anybody could help me. Thanks in advance. :)
This is a broad question, but here is one example of what you can do.
Make sure that, above your class declaration, you import the AVFoundation framework, like so:
import AVFoundation
If you have a .mp3 file and you want to play it and loop it endlessly, you can use the following code to do so:
var audioPlayer = AVAudioPlayer()

func startAudio() {
    guard let filePath = Bundle.main.path(forResource: "fileName", ofType: "mp3") else { return }
    let fileURL = URL(fileURLWithPath: filePath)
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: fileURL)
        audioPlayer.numberOfLoops = -1 // -1 means loop indefinitely
        audioPlayer.volume = 1.0
        audioPlayer.play()
    } catch {
        present(UIAlertController(title: "Error", message: "Error Message", preferredStyle: .alert), animated: true, completion: nil)
    }
}
Call this function wherever you want in your controller and the audio should start playing. Hope this helps.
Sorry for the possibly stupid question. I'm starting to get my head around Fabric.js, but it's hard for me because of the documentation.
Please look at the code below:
var canvas = new fabric.Canvas('c');
var iminst = new fabric.Image.fromURL('./images/1stback.jpg', function (myimage) {
  myimage.left = 0;
  myimage.top = 0;
  canvas.add(myimage);
});
iminst.set('angle', 45);
The image is loaded and shown, but how do I address it afterwards? I just get an error:
"TypeError: 'undefined' is not a function (evaluating 'iminst.set('angle', 45)')"
You're missing the basics. It seems like you haven't gone through the great tutorials available on the Fabric site.
The simple code to solve your issue would be:
var canvas = new fabric.Canvas('c');
var iminst;

fabric.Image.fromURL('./images/1stback.jpg', function (myimage) {
  iminst = myimage;
  myimage.left = 0;
  myimage.top = 0;
  canvas.add(myimage);
  canvas.renderAll();
  test();
});

function test () {
  iminst.set('angle', 45); // you can refer to it, but not before the callback has finished
}
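If you prefer async/await over a separate test() function, the same pattern can be expressed by wrapping the callback in a Promise. This is a sketch of the idea; `loadImage` and `setup` are names I made up:

```javascript
// Wrap the callback-style fabric.Image.fromURL in a Promise.
function loadImage(url) {
  return new Promise(function (resolve) {
    fabric.Image.fromURL(url, resolve);
  });
}

// Now the image can only be used after it has actually loaded.
async function setup() {
  const canvas = new fabric.Canvas('c');
  const iminst = await loadImage('./images/1stback.jpg');
  iminst.set({ left: 0, top: 0, angle: 45 });
  canvas.add(iminst);
  canvas.renderAll();
}

// call setup() once the page (and the canvas element) is ready
```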
Hope this helps if you haven't figured out the answer by yourself already... gl