NativeScript TNS Player giving an error while playing audio

I am using NativeScript to build the app, and I want to use a music player to play audio. I am using TNS Player (https://github.com/nstudio/nativescript-audio). It works fine for some .mp3 files, but when the file size is more than 70-80 MB it gives the error below.
TNSPlayer errorCallback,1
`{"player":{},"error":1,"extra":-1004}`
The function is:
public async playAudio(filepath: string) {
  const playerOptions: AudioPlayerOptions = {
    audioFile: filepath,
    loop: false,
    completeCallback: async () => {
      alert("Audio file complete.");
      await this._player.dispose();
      console.log("player disposed");
    },
    errorCallback: errorObject => {
      console.log(JSON.stringify(errorObject));
    },
    infoCallback: args => {
      console.log(JSON.stringify(args));
    }
  };

  this._player.playFromUrl(playerOptions);
}
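For context, the extra value -1004 typically corresponds to Android's MEDIA_ERROR_IO, a generic I/O failure. If the large file is already stored on the device, a minimal sketch like the one below, assuming the same _player instance and nativescript-audio's playFromFile (which accepts the same AudioPlayerOptions), keeps playback off the URL/streaming code path; the method name playLocalAudio is just illustrative:

// Sketch: play a file that already exists on the device instead of
// streaming it through playFromUrl (playLocalAudio is a hypothetical name).
public async playLocalAudio(filepath: string) {
  const playerOptions: AudioPlayerOptions = {
    audioFile: filepath, // local path, e.g. a file in the app's documents folder
    loop: false,
    completeCallback: async () => {
      await this._player.dispose();
    },
    errorCallback: errorObject => {
      console.log(JSON.stringify(errorObject));
    }
  };
  await this._player.playFromFile(playerOptions);
}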

Related

Bi-directional Websocket with RTK Query

I'm building a web-based remote control application for the music program Ableton Live. The idea is to be able to use a tablet on the same local network as a custom controller.
Ableton Live runs Python scripts, and I use this library that exposes the Ableton Python API to Node. In Node, I'm building an HTTP/Websocket server to serve my React frontend and to handle communication between the Ableton Python API and the frontend running Redux/RTK Query.
Since I both want to send commands from the frontend to Ableton Live, and be able to change something in Ableton Live on my laptop and have the frontend reflect it, I need to keep a bi-directional Websocket communication going. The frontend recreates parts of the Ableton Live UI, so different components will care about/subscribe to different small parts of the whole Ableton Live "state", and will need to be able to update just those parts.
I tried to follow the official RTK Query documentation, but there are a few things I really don't know how best to solve.
RTK Query code:
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { LiveProject } from '../models/liveModels';

export const remoteScriptsApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:9001' }),
  endpoints: (builder) => ({
    getLiveState: builder.query<LiveProject, void>({
      query: () => '/completeLiveState',
      async onCacheEntryAdded(arg, { updateCachedData, cacheDataLoaded, cacheEntryRemoved }) {
        const ws = new WebSocket('ws://localhost:9001/ws');
        try {
          await cacheDataLoaded;
          const listener = (event: MessageEvent) => {
            const message = JSON.parse(event.data);
            switch (message.type) {
              case 'trackName':
                updateCachedData(draft => {
                  const track = draft.tracks.find(t => t.trackIndex === message.id);
                  if (track) {
                    track.trackName = message.value;
                    // Components then use selectFromResult to only
                    // rerender on exactly their data being updated
                  }
                });
                break;
              default:
                break;
            }
          };
          ws.addEventListener('message', listener);
        } catch (error) { }
        await cacheEntryRemoved;
        ws.close();
      },
    }),
  }),
});
Server code:
import { Ableton } from 'ableton-js';
import { Track } from 'ableton-js/ns/track';
import path from 'path';
import { serveDir } from 'uwebsocket-serve';
import { App, WebSocket } from 'uWebSockets.js';

const ableton = new Ableton();
const decoder = new TextDecoder();
const initialTracks: Track[] = [];

async function buildTrackList(trackArray: Track[]) {
  const tracks = await Promise.all(trackArray.map(async (track) => {
    initialTracks.push(track);
    // A lot more async Ableton data fetching will be going on here
    return {
      trackIndex: track.raw.id,
      trackName: track.raw.name,
    };
  }));
  return tracks;
}

const app = App()
  .get('/completeLiveState', async (res, req) => {
    res.onAborted(() => console.log('TODO: Handle onAborted error.'));
    const trackArray = await ableton.song.get('tracks');
    const tracks = await buildTrackList(trackArray);
    const liveProject = {
      tracks // Will send a lot more than tracks eventually
    };
    res.writeHeader('Content-Type', 'application/json').end(JSON.stringify(liveProject));
  })
  .ws('/ws', {
    open: (ws) => {
      initialTracks.forEach(track => {
        track.addListener('name', (result) => {
          ws.send(JSON.stringify({
            type: 'trackName',
            id: track.raw.id,
            value: result
          }));
        });
      });
    },
    message: async (ws, msg) => {
      const payload = JSON.parse(decoder.decode(msg));
      if (payload.type === 'trackName') {
        // Update track name in Ableton Live and respond
      }
    }
  })
  .get('/*', serveDir(path.resolve(__dirname, '../myCoolProject/build')))
  .listen(9001, (listenSocket) => {
    if (listenSocket) {
      console.log('Listening to port 9001');
    }
  });
I have a timing issue where the server's ".ws open" method runs before the buildTrackList function has finished fetching all the tracks from Ableton Live. These "listeners" I'm adding in the ws open method are callbacks you can attach to things in Ableton Live; the one in this example fires whenever the name of a track changes. The first question is whether it's best to try to solve this timing issue on the server side or on the RTK Query side.
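For reference, one server-side option is sketched below; it assumes a one-off fetch at startup is acceptable, reuses ableton, buildTrackList and App from the code above, and the start() wrapper is illustrative:

// Sketch: make sure initialTracks is populated before any WebSocket client
// can connect, so the open handler never runs against an empty array.
async function start() {
  const trackArray = await ableton.song.get('tracks');
  await buildTrackList(trackArray); // fills initialTracks as a side effect

  App()
    .ws('/ws', {
      open: (ws) => {
        // initialTracks is guaranteed to be ready here
      },
    })
    .listen(9001, (listenSocket) => {
      if (listenSocket) console.log('Listening to port 9001');
    });
}

start();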
All examples I've seen on working with WebSockets in RTK Query are about "streaming updates". But from the beginning I've thought of my scenario as needing bi-directional communication over the same WebSocket connection throughout the whole application. Is this possible with RTK Query, and if so, how do I implement it? Or should I use regular query endpoints for all commands from the frontend to the server?
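One pattern that keeps commands on the same socket is sketched below, under the assumption that the WebSocket is created in a small shared module that both onCacheEntryAdded and a mutation can import; getSocket and remoteCommandsApi are hypothetical names, and the message shape mirrors the server's trackName handler above:

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Hypothetical module-scoped socket shared by queries and mutations.
let socket: WebSocket | undefined;
export function getSocket(): WebSocket {
  if (!socket) socket = new WebSocket('ws://localhost:9001/ws');
  return socket;
}

export const remoteCommandsApi = createApi({
  reducerPath: 'remoteCommandsApi',
  baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:9001' }),
  endpoints: (builder) => ({
    renameTrack: builder.mutation<void, { id: string; name: string }>({
      // queryFn bypasses fetchBaseQuery, so the command goes over the socket
      // instead of an HTTP request.
      queryFn: ({ id, name }) => {
        getSocket().send(JSON.stringify({ type: 'trackName', id, value: name }));
        return { data: undefined };
      },
    }),
  }),
});

export const { useRenameTrackMutation } = remoteCommandsApi;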

getUserMedia NotAllowedError: permission denied

Um...
When I call getUserMedia on my phone, I get an error (from alert(e)): NotAllowedError: permission denied.
What should I do?
This is the part of the index.js code where the video and chatting happen:
async function getMedia(deviceId) {
  const initialConstraints = {
    audio: true,
    video: { facingMode: 'user' },
  };
  const cameraConstraints = {
    audio: true,
    video: { deviceId: { exact: deviceId } },
  };
  try {
    myStream = await navigator.mediaDevices.getUserMedia(
      deviceId ? cameraConstraints : initialConstraints
    );
    myFace.srcObject = myStream;
    if (!deviceId) {
      await getCameras();
    }
  } catch (e) {
    console.log(e);
    alert(e);
  }
}
I'm using a Rails app (web), React Native (WebView app), and Node.js (for realtime chatting and Zoom-like video calls via socket.io and WebRTC).
I ran it in the WebView (hybrid) app and it doesn't work there (it works well in a browser, though).
So I googled and added these options to the video tag:
<video autoplay="" webkit-playsinline="webkit-playsinline" playsinline="playsinline" muted="true" id="myFace" width="350" height="400"></video>
The video tag's parent is an iframe, and it has attributes like allow="camera;microphone;autoplay" and so on.
I also added the Expo WebView options, like:
return (
  <WebView
    useWebkit
    allowInlineMediaPlayback={true}
    mediaPlaybackRequiresUserAction={false}
    javaScriptEnabled={true}
    javaScriptEnabledAndroid
    geolocationEnabled={true}
  />
);
I'm a beginner... can you help me out? Thanks!
If you get this error on localhost or any other site, go to the browser/site settings, search for the microphone or webcam permission, and enable it manually.
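If it helps to tell a user denial apart from a WebView/host block, a small check with the Permissions API could run before getUserMedia; this is only a sketch and assumes a Chromium-based browser or WebView (the 'camera'/'microphone' permission names are Chromium-specific, hence the cast):

// Sketch: report the current camera/microphone permission state before
// calling getUserMedia, to tell "denied by user" apart from "blocked by host".
async function reportMediaPermissions() {
  for (const name of ['camera', 'microphone']) {
    try {
      const status = await navigator.permissions.query({
        name: name as PermissionName, // Chromium-specific permission names
      });
      console.log(`${name}: ${status.state}`); // "granted" | "denied" | "prompt"
    } catch {
      console.log(`${name}: Permissions API not available here`);
    }
  }
}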

How to set up node socket.io

I'm attempting to complete this tutorial and I'm getting this error in the browser console.
index.js:16 Uncaught TypeError: this.io.on is not a function
at index.js:16
This is the full content of my index.js:
navigator.getUserMedia(
  { video: true, audio: true },
  stream => {
    const localVideo = document.getElementById("local-video");
    if (localVideo) {
      localVideo.srcObject = stream;
    }
  },
  error => {
    console.warn(error.message);
  }
);

this.io.on("connection", socket => {
  const existingSocket = this.activeSockets.find(
    existingSocket => existingSocket === socket.id
  );
  if (!existingSocket) {
    this.activeSockets.push(socket.id);
    socket.emit("update-user-list", {
      users: this.activeSockets.filter(
        existingSocket => existingSocket !== socket.id
      )
    });
    socket.broadcast.emit("update-user-list", {
      users: [socket.id]
    });
  }
});
What am I missing? I know that 'this' should refer to some enclosing object but what?
Check out the tutorial's file list.
The part you are referring to is in the socket-connection.ts file, not in index.js as the author implies.
Or it may be some TypeScript abracadabra :)
P.S. You should also remember that "this" only gets its own binding inside a regular function (not in an arrow function).
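For reference, the connection handler only works in server-side code where this.io is an actual Socket.IO server instance; below is a minimal standalone sketch using Socket.IO's standard Server API (the port and CORS settings are placeholders, not taken from the tutorial):

import { createServer } from 'http';
import { Server, Socket } from 'socket.io';

// Minimal server-side counterpart: here io.on("connection", ...) works
// because io really is a Socket.IO Server, not browser-side code.
const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });
const activeSockets: string[] = [];

io.on('connection', (socket: Socket) => {
  if (!activeSockets.includes(socket.id)) {
    activeSockets.push(socket.id);
    socket.emit('update-user-list', {
      users: activeSockets.filter((id) => id !== socket.id),
    });
    socket.broadcast.emit('update-user-list', { users: [socket.id] });
  }
});

httpServer.listen(5000, () => console.log('socket.io listening on 5000'));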

Expo Audio is not working on iOS. It's stuck on the recording button itself

github.com/expo/audio-recording-example
You can check out the code there.
I am using Audio from expo-av.
It works fine on Android devices, even on the emulator. On an Android device, it first asks for the audio permission, then starts recording audio on click; on stop, it provides the playback audio.
But when testing on iOS, it doesn't ask for permission at all; it goes directly to the audio recording page, and when clicking the record button, recording doesn't start.
I can't tell whether it's a problem with the iOS audio permission or with the syntax of Audio.Recording.
I've tried to set the permission to true manually.
this.recordingSettings = JSON.parse(JSON.stringify(Audio.RECORDING_OPTIONS_PRESET_LOW_QUALITY));
// // UNCOMMENT THIS TO TEST maxFileSize:
// this.recordingSettings.android['maxFileSize'] = 12000;
}
_askForPermissions = async () => {
  const response = await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  this.setState({
    haveRecordingPermissions: response.status === 'granted',
  });
};

async _stopPlaybackAndBeginRecording() {
  this.setState({
    isLoading: true,
  });
  if (this.sound !== null) {
    await this.sound.unloadAsync();
    this.sound.setOnPlaybackStatusUpdate(null);
    this.sound = null;
  }
  await Audio.setAudioModeAsync({
    allowsRecordingIOS: true,
    interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
    playsInSilentModeIOS: true,
    shouldDuckAndroid: true,
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    playThroughEarpieceAndroid: false,
    staysActiveInBackground: true,
  });
  if (this.recording !== null) {
    this.recording.setOnRecordingStatusUpdate(null);
    this.recording = null;
  }
  const recording = new Audio.Recording();
  await recording.prepareToRecordAsync(this.recordingSettings);
  recording.setOnRecordingStatusUpdate(this._updateScreenForRecordingStatus);
  this.recording = recording;
  await this.recording.startAsync(); // Will call this._updateScreenForRecordingStatus to update the screen.
  this.setState({
    isLoading: false,
  });
}

_onRecordPressed = () => {
  if (this.state.isRecording) {
    this._stopRecordingAndEnablePlayback();
  } else {
    this._stopPlaybackAndBeginRecording();
  }
};
I expect audio recording on iOS, but it gets stuck on isRecording.
Update: I changed my recording settings and now everything is good on both Android and iOS devices.
My updated settings:
// Based on the shape of Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY (RecordingOptions)
this.recordingSettings = {
  android: {
    extension: '.m4a',
    outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
    audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
  ios: {
    extension: '.m4a',
    outputFormat: Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC,
    audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MIN,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
};
I used these settings for my requirements; you can use other options too, but keep them based on Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.
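Since the question also suspects the iOS permission, a small guard is worth sketching as well; it reuses Permissions.askAsync from the question's own _askForPermissions and simply refuses to start recording until the status is granted (the early return is the only addition):

// Sketch: bail out unless recording permission is granted, instead of
// calling prepareToRecordAsync with a denied/undetermined permission.
// (Assumes the same Permissions import used by _askForPermissions above.)
_onRecordPressed = async () => {
  const response = await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  if (response.status !== 'granted') {
    return; // a missing iOS permission may be why the UI looks stuck
  }
  if (this.state.isRecording) {
    this._stopRecordingAndEnablePlayback();
  } else {
    this._stopPlaybackAndBeginRecording();
  }
};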

Get camera stream on embedded system

I have an embedded system with a camera and GStreamer, and I'm trying to get the stream from my camera. I have a web application built with Aurelia and Electron.
I tried mediaDevices.getUserMedia but I get a NotFoundError; using enumerateDevices, however, I do get the devices I need.
Could the problem be that getUserMedia doesn't work properly with GStreamer? If I run the same project on my PC it works perfectly.
Here is my HTML:
<video ref="videoPlayer" hide.bind="screenSharing" id="videoPlayer" autoplay muted></video>
And this is my js:
let j = 0;
navigator.mediaDevices.enumerateDevices()
  .then((deviceInfos) => {
    for (var i = 0; i !== deviceInfos.length; ++i) {
      console.log(deviceInfos[i]);
      if (deviceInfos[i].kind === 'videoinput') {
        this.deviceInfo[j] = deviceInfos[i];
        j++;
      }
    }
    if (this.deviceInfo.length > 1) {
      console.log(this.deviceInfo.length);
      this.constraints = {
        audio: true,
        video: {
          deviceId: { exact: this.deviceInfo[1].deviceId }
        }
      };
    } else {
      console.log("Only one camera");
      this.constraints = {
        video: {
          deviceId: { exact: this.deviceInfo[0].deviceId }
        },
        audio: true
      };
      console.log(this.constraints);
    }
  })
  .then(() => {
    navigator.mediaDevices.getUserMedia(this.constraints)
      .then((stream) => {
        console.log('Got mic+video stream', stream);
        this.localStream = stream;
        this.videoPlayer.srcObject = this.localStream;
      })
      .catch((err) => {
        console.error(err);
      });
  });
I've seen on the internet that there are some packages like livecam, but I have no idea how to use them.
I attach the output of mediaDevices.enumerateDevices:
console.log(navigator.mediaDevices.enumerateDevices())
Promise {[[PromiseStatus]]: "resolved", [[PromiseValue]]: Array(5)}
  0: MediaDeviceInfo {deviceId: "default", groupId: "6dbae3b74e14f5e239133b5feea86e5ae7a9741a3e3fd21a86eab9273fe135aa", kind: "audioinput", label: "Default"}
  1: MediaDeviceInfo {deviceId: "d415346fe3db142f8daa611ad3dedb298b5d94b70f4221c38e7e6582f45c3008", groupId: "8d82cc2495eebb4c40bb77a5e0287d4b365ac1de8205684eae39cb605a703f11", kind: "audioinput", label: "Built-in Audio Stereo"}
  2: MediaDeviceInfo {deviceId: "82378e03eff67ac471305e50ac95e629ebf441c1ab1819d6a36aca137e37e89d", groupId: "", kind: "videoinput", label: ""}
  3: MediaDeviceInfo {deviceId: "default", groupId: "default", kind: "audiooutput", label: "Default"}
  4: MediaDeviceInfo {deviceId: "31a7efff94b610d3fce02b21a319cc43e2541d56d98b4138b6e3fe854b0df38c", groupId: "391b1de381c11ab437d507abc0543f288dd29d999717dbb0e949c006ef120935", kind: "audiooutput", label: "Built-in Audio Stereo"}
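Note that the only videoinput above has an empty label, which usually means camera permission has not been granted yet. As a defensive sketch (constraint shapes as in the code above; the helper name is hypothetical), falling back to an unconstrained request can separate an exact-deviceId mismatch from a genuinely missing device:

// Sketch: retry without the exact deviceId constraint if the embedded
// build cannot satisfy it, to separate "wrong id" from "no camera at all".
async function getStreamWithFallback(constraints: MediaStreamConstraints) {
  try {
    return await navigator.mediaDevices.getUserMedia(constraints);
  } catch (err) {
    console.warn('Exact-device request failed, retrying unconstrained:', err);
    return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  }
}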
