I'm developing an app using the Spotify iOS SDK. I have successfully connected my app to Spotify and audio is playing, but here is the problem: when I activate silent mode, no sound comes from my app even though the Spotify music is still playing. I have checked other apps built on the Spotify iOS SDK (the demo project, Musixmatch) and all of them still produce sound in silent mode.
Here's my code:
self.spotifyPlayer = SPTAudioStreamingController.sharedInstance()
self.spotifyPlayer!.playbackDelegate = self
self.spotifyPlayer!.delegate = self
try! self.spotifyPlayer!.start(withClientId: auth.clientID)
self.spotifyPlayer!.login(withAccessToken: authSession.accessToken)
Then this delegate method gets called:
func audioStreamingDidLogin(_ audioStreaming: SPTAudioStreamingController!) {
    let uri = "spotify:track:" + trackId
    self.spotifyPlayer?.playSpotifyURI(uri, startingWith: 0, startingWithPosition: 0, callback: { (error) in
        if error != nil {
            print("error: \(error)")
        }
    })
}
I have figured out what I was missing: the app's AVAudioSession has to be activated with the playback category, which keeps audio audible while the ring/silent switch is on.
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didChangePlaybackStatus isPlaying: Bool) {
    print("isPlaying: \(isPlaying)")
    do {
        if isPlaying {
            // The playback category keeps audio audible even when the silent switch is on.
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
            try AVAudioSession.sharedInstance().setActive(true)
        } else {
            try AVAudioSession.sharedInstance().setActive(false)
        }
    } catch {
        print("AVAudioSession error: \(error)")
    }
}
Please consider the code below:
navigator.mediaDevices.getUserMedia({ audio: true }).then(function() {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    devices.forEach(function(device1, k1) {
      if (device1.kind == 'audiooutput' && device1.deviceId == 'default') {
        const speakersGroupId = device1.groupId;
        devices.forEach(function(device2, k2) {
          if (device2.groupId == speakersGroupId && ['default', 'communications'].includes(device2.deviceId) === false) {
            const speakersId = device2.deviceId;
            const constraints = {
              audio: {
                deviceId: {
                  exact: speakersId
                }
              }
            };
            console.log('Requesting stream for deviceId ' + speakersId);
            navigator.mediaDevices.getUserMedia(constraints).then((stream) => { // **this always fails**
              console.log(stream);
            });
          }
        });
      }
    });
  });
});
The code asks for permissions via the first getUserMedia, then enumerates all devices, picks the default audio output then tries to get a stream for that output.
But it will always throw the error: OverconstrainedError { constraint: "deviceId", message: "", name: "OverconstrainedError" } when getting the audio stream.
There is nothing I have found in Chrome (I don't care about other browsers; tested Chrome 108 and 109 beta) that gets this to work.
I see a report here that it works, but not for me.
Please tell me that I'm doing something wrong, or if there's another way to get the speaker stream that doesn't involve chrome.tabCapture or chrome.desktopCapture.
Chrome MV3 extension approaches are welcome, not only HTML5 ones.
.getUserMedia() is used to get input streams. So, when you tell it to use a speaker device, it can't comply. gUM's error reporting is, umm, confusing (to put it politely).
To use an output device, use element.setSinkId(deviceId). Make an audio or video element, then set its sink id. Here's the MDN example; it creates an audio element. You can also use a preexisting audio or video element.
const devices = await navigator.mediaDevices.enumerateDevices()
const audioDevice = devices.find((device) => device.kind === 'audiooutput')
const audio = document.createElement('audio')
await audio.setSinkId(audioDevice.deviceId)
console.log(`Audio is being played on ${audio.sinkId}`)
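Note that setSinkId only chooses where the audio goes; the element still needs a source before you hear anything. A tiny follow-up sketch, where stream is a placeholder for whatever MediaStream you already have:

// Give the element something to play; `stream` here is a placeholder for an
// existing MediaStream (or set audio.src to a file URL instead)
audio.srcObject = stream
await audio.play()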
What are we trying to implement?
We deployed an AI model to stream audio from the microphone and display the text of the speech to the user, something like this.
What technologies are used?
Python for back-end and the AI model
React for front-end
The Web MediaRecorder API to record and configure the audio
WebSocket to connect to the AI API
What's the problem though?
In the front-end, I try to send audio chunks every second as an Int16Array to the back-end. To make sure everything related to the mic and audio works fine: after stopping the recording I can download the first chunk of the audio (only that one), with a duration of 1 s, and it sounds perfectly clear. However, when the audio is sent to the back-end it turns into a bunch of noise!
Here's the part of the React code where the recording is handled:
useEffect(() => {
  if (recorder === null) {
    if (isRecording) {
      requestRecorder().then(setRecorder, console.error);
    } else {
      return;
    }
  }

  // Manage recorder state.
  if (isRecording && recorder) {
    recorder.start();
  } else if (!isRecording && recorder) {
    recorder.stop();
  }

  // Flush the recorded data every second.
  const interval = setInterval(() => {
    if (recorder) {
      recorder.requestData();
    }
  }, 1000);

  // Obtain the audio when ready.
  const handleData = e => {
    setAudioURL(URL.createObjectURL(e.data));
    const audioData = [];
    audioData.push(e.data);
    const audioBlob = new Blob(audioData, { 'type': 'audio/wav; codecs=0' });
    const instanceOfFileReader = new FileReader();
    instanceOfFileReader.readAsArrayBuffer(audioBlob);
    instanceOfFileReader.addEventListener("loadend", (event) => {
      console.log(event.target.result.byteLength);
      const arrayBuf = event.target.result;
      // Reinterpret the blob's bytes as 16-bit samples.
      const int16ArrNew = new Int16Array(arrayBuf, 0, Math.floor(arrayBuf.byteLength / 2));
      setJsonData(prevstate => ({ ...prevstate, matrix: int16ArrNew }));
    });
  };

  if (recorder) {
    recorder.addEventListener("dataavailable", handleData);
  }

  return () => {
    if (recorder) {
      recorder.removeEventListener("dataavailable", handleData);
      clearInterval(interval);
    }
  };
}, [recorder, isRecording])
Has anyone faced this issue before? I did a lot of research on it but found nothing that fixes it.
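A likely culprit, for anyone hitting the same thing: MediaRecorder emits a compressed container stream (typically WebM/Opus in Chrome), so only the first chunk carries the container header, and none of the bytes are raw PCM; reinterpreting them as an Int16Array yields noise. Below is a minimal sketch of grabbing raw PCM instead, via a (deprecated but still widely supported) ScriptProcessorNode; the WebSocket URL is a hypothetical stand-in for the AI API endpoint:

// Sketch: stream raw PCM to the back-end as Int16 frames, bypassing MediaRecorder.
// ws://localhost:8000/transcribe is a hypothetical endpoint, not from the question.
const socket = new WebSocket('ws://localhost:8000/transcribe');
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const context = new AudioContext();
const source = context.createMediaStreamSource(stream);
// ScriptProcessorNode is deprecated in favour of AudioWorklet but keeps the sketch short.
const processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (e) => {
  const float32 = e.inputBuffer.getChannelData(0);
  const int16 = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    // Clamp to [-1, 1] and scale to the 16-bit signed integer range.
    const s = Math.max(-1, Math.min(1, float32[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  if (socket.readyState === WebSocket.OPEN) socket.send(int16.buffer);
};
source.connect(processor);
processor.connect(context.destination); // required for onaudioprocess to fire in some browsers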
I need to launch the Electron app, or focus it if it is already launched, from a browser link. I have searched and tried many solutions but am not getting it to work, so if someone has any experience with this, can you please help?
Here is the code:
// Single instance app ==========
const gotTheLock = app.requestSingleInstanceLock();

if (!gotTheLock) {
  app.quit();
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    // Someone tried to run a second instance, we should focus our window.
    if (mainWindow) {
      if (mainWindow.isMinimized()) mainWindow.restore();
      mainWindow.focus();
    }
  });
}

// Register private URI scheme for the current user when running for the first time
app.setAsDefaultProtocolClient('x-protocol');
When I try to launch using this code, I get the gotTheLock value as false, but the second-instance event is not fired, and I'm not sure why.
Version Details:
platform: Windows 10
electron: 8.5.3
electron-builder: 21.2.0
Update:
I added a delay of 5 seconds before quitting the app inside the !gotTheLock branch, and in that case I do get the event.
const gotTheLock = app.requestSingleInstanceLock();

if (!gotTheLock) {
  delay(5000); // 5 seconds delay
  app.quit();
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    // Someone tried to run a second instance, we should focus our window.
    if (mainWindow) {
      if (mainWindow.isMinimized()) mainWindow.restore();
      mainWindow.focus();
    }
  });
}
I don't understand. If you want to launch the app from a browser link, then why are you implementing second-instance? second-instance fires when you open the application a second time.
Like this:
const gotTheLock = app.requestSingleInstanceLock();

if (!gotTheLock) {
  if (win) {
    app.quit();
  }
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    if (win) {
      win.show();
      win.focus();
    }
  });
}
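Beyond focusing the window, the browser link itself can be recovered in the same handler. On Windows the protocol URL shows up in the second instance's command line, so something along these lines should work (a sketch, assuming the x-protocol scheme registered above); macOS delivers the URL via open-url instead:

app.on('second-instance', (event, commandLine, workingDirectory) => {
  // On Windows the x-protocol URL arrives as a command-line argument.
  const url = commandLine.find((arg) => arg.startsWith('x-protocol://'));
  if (url) {
    // Handle the deep link, e.g. navigate based on the URL.
  }
  if (mainWindow) {
    if (mainWindow.isMinimized()) mainWindow.restore();
    mainWindow.focus();
  }
});

// macOS delivers the URL through open-url instead of a second instance.
app.on('open-url', (event, url) => {
  event.preventDefault();
  // Handle the deep link.
});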
I have successfully used the Apple documentation to connect two players via Game Center and start the game. However, I have been struggling for days to get the app to send data between the two players.
I just need to send an integer between the two players, but I can't even get the documentation code to run, even after creating the structs etc. The examples I have looked at are either dated or I can't get them to build.
func sendPosition() {
    let messageToSend = 123
    // What do I need to do to messageToSend to send it?
    do {
        try match.sendData(toAllPlayers: packet, with: .unreliable)
    } catch {
        // Handle the error.
    }
}
func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
    // What do I need to do to receive the data?
}
If anyone can help with some working code I can experiment with in Swift 5+ I would be grateful.
After some reading and experimenting, my original code seemed to work! If it helps anyone else:
To send:
@IBAction func sendDataBtn(_ sender: Any) {
    print("sending data")
    let dataString = "Hello, World!"
    let dataToSend = dataString.data(using: .utf8)
    do {
        try myMatch.sendData(toAllPlayers: dataToSend!, with: .reliable)
    } catch {
        print(error.localizedDescription)
    }
}
To receive:
func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
    print("Data Received")
    let receivedData = String(data: data, encoding: .utf8)
    messageLbl.text = receivedData
}
I create a 'container' to send the data in; this way I can pass an instruction and its payload in one go. For example:
var type: String = "jump"
var data: CGPoint = CGPoint(x: 10, y: 10)
let container: [Any] = [type, data]

do {
    let dataToSend = try NSKeyedArchiver.archivedData(withRootObject: container, requiringSecureCoding: true)
    try match.sendData(toAllPlayers: dataToSend, with: .unreliable)
} catch {
    // Handle the error.
}
How can I add a track to the current play queue in a Spotify app?
You need to create an unnamed playlist to serve as your own play queue.
function playTracks(tracks, index) {
  var pl = new models.Playlist();
  for (var i = 0; i < tracks.length; ++i) {
    pl.add(tracks[i]);
  }
  models.player.play(pl.uri, pl.uri, index);
}
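playTracks above expects track objects, so you would resolve URIs first. A hypothetical call, from memory of the legacy Apps API models module (the track URI below is a placeholder):

// Hypothetical usage; the URI is a placeholder, not a real track.
var models = sp.require('sp://import/scripts/api/models');
models.Track.fromURI('spotify:track:xxxxxxxxxxxxxxxxxxxxxx', function (track) {
  playTracks([track], 0);
});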
The current play queue seems to be unavailable. But this snippet may be useful if your purpose is to build a queue...
// Create a name for a temporary playlist.
function temporaryName() {
  return (Date.now() * Math.random()).toFixed();
}

function getTemporaryPlaylist() {
  var temporaryPlaylist = sp.core.getTemporaryPlaylist(temporaryName());
  sp.trackPlayer.setContextCanSkipPrev(temporaryPlaylist.uri, false);
  sp.trackPlayer.setContextCanRepeat(temporaryPlaylist.uri, false);
  sp.trackPlayer.setContextCanShuffle(temporaryPlaylist.uri, false);
  return temporaryPlaylist;
}

var tpl = getTemporaryPlaylist();
tpl.add(trackUri);
tpl.add(track2Uri);
//...

sp.trackPlayer.playTrackFromContext(tpl.uri, 0, "", {
  onSuccess: //...
  onError: //...
  onComplete: //...
});
Nothing in the Apps API reference suggests that it is possible. There is no mention of how to do this in any of the apps in the preview build either. The conclusion has to be that doing this is not currently supported.