So I'm thinking about creating a node application where users can add songs to a "queue" and have the songs broadcast to all users in real time, but after looking around I'm not quite sure how to accomplish this.
The primary article I read was this one: http://pedromtavares.wordpress.com/2012/12/28/streaming-audio-on-the-web-with-nodejs/
It seems like an icecast server could work very well for this, but is there a way for node to push songs onto a queue to be played by the icecast server? From what I have read so far, the only ways to manage which songs are played are to specify a playlist or add songs manually, and telling the server not to play anything when there are no songs in the queue also seems like a potential issue.
I've been working on a similar project recently. My solution was to use nodeshout (node binding for libshout) to send audio data from Node to Icecast.
Check out the streaming example. You can use it like so:
// `shout` is an already-configured, connected nodeshout instance;
// FileReadStream and ShoutStream come from nodeshout's streaming example.
function playSong() {
    // Choose the next song
    const nextSong = "./song.mp3";
    // Read the file in 64 KiB chunks and pipe it to Icecast via libshout
    const fileStream = new FileReadStream(nextSong, 65536);
    const shoutStream = fileStream.pipe(new ShoutStream(shout));
    // When this song has been fully sent, start the next one
    shoutStream.on('finish', playSong);
}

playSong();
This will create a loop and play song after song.
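To cover the queue part of your question: nextSong doesn't have to be a hard-coded path. A minimal sketch, assuming an in-memory queue and a silence fallback file (both are my own additions, not part of nodeshout):

// Hypothetical in-memory queue that other parts of the app push file paths into
const queue = [];

function getNextSong() {
    // Fall back to a silence file so Icecast always receives data when the queue is empty
    return queue.shift() || "./silence.mp3";
}

Inside playSong() you would then use const nextSong = getNextSong(); instead of the hard-coded path.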
Tip: Increase the source timeout in your icecast.xml to ~30 seconds. With the default value, the stream can end prematurely in some cases because songs often have a "quick start": the beginning of the file is encoded at a lower bitrate so playback starts faster.
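For reference, that timeout lives in the limits section of icecast.xml (the rest of your file will differ; only the source-timeout line matters here):

<limits>
    <!-- ... other limits ... -->
    <source-timeout>30</source-timeout>
</limits>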
I've made a Gist with a further example: https://gist.github.com/Cretezy/3623fecb1418e21b5d1f77db50fc7e07
I have this Nuxt SPA. It has two img elements, each taking a URL to a streaming endpoint (using GET). Each img element has a counter attached to it whose number is updated on every "onload" event. The endpoint is located on a Flask server and works by streaming frames (PNG images) extracted from a video file (using OpenCV) using generators, yield, and the MIME type multipart/x-mixed-replace. It is basically the same as the method described in: https://blog.miguelgrinberg.com/post/video-streaming-with-flask
It works with no issues; my problem now is performance. If there is only one stream working (one GET request, one connection), everything is fine performance-wise. But when there are two streams working in parallel (two GET requests, two incoming connections at the same time), the app struggles and lags very hard: it freezes on one frame and stops updating the counter. Then, after a while, the counter jumps by 100 or so and it displays a frame roughly 100 frames ahead of the previous one. This happens in an alternating fashion between the two img elements: element 1 will load 100 or so frames then freeze, then element 2 will do the same and freeze, rinse and repeat.
Does anyone have an idea what could cause this? I need to improve the performance so both "streams" can run at the same time without having such insane lags.
I think that it might be the two connections competing against each other so I have been thinking of sending multiple GET requests instead. The response would be a batch (array) of let's say 100-200 frames. Then a function in the app would play the frames at 30 FPS or so. Once the array is almost empty, then it will make a new GET request for the next batch of frames. Rinse and repeat, and do it in an alternating fashion between the 2 img elements.
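Roughly what I have in mind, as a sketch (the batch endpoint, the JSON-array-of-base64-frames format, and the element handling are all just assumptions at this point):

// Fetch a batch of base64-encoded PNG frames and display them at ~30 FPS,
// requesting the next batch once the current one is drained.
async function playBatches(imgElement, batchUrl) {
    while (true) {
        const response = await fetch(batchUrl);
        const frames = await response.json(); // e.g. an array of 100-200 base64 strings
        for (const frame of frames) {
            imgElement.src = "data:image/png;base64," + frame;
            await new Promise(resolve => setTimeout(resolve, 1000 / 30)); // ~30 FPS
        }
    }
}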
Do you think this will alleviate the issue? Or am I solving the wrong problem?
It's hard to answer in detail without looking at the exact hardware and software being used on the client, as there are many factors (bandwidth, the ability to run multiple threads in parallel, fetching optimisations in browsers, hardware vs software codecs, etc.) that might affect performance.
However, one high level thing that may help, if you are able to use it for your use case, would be to try to leverage the existing optimisations that many servers and clients have for video streaming and playback.
In other words, given that your source is essentially a stream of video frames, if you can combine them into an 'actual' video stream then the browser can simply request that video stream and the server can simply serve a video.
This allows you to leverage all the built-in mechanisms for downloading the video in 'chunks', using range requests and/or adaptive bitrate (ABR) streaming.
The client will also be able to leverage existing buffering and playback mechanisms for video.
Most common laptop/desktop machines should be able to handle two parallel videos being played back at the same time so this could take away the playback pain you are seeing, at the cost of more work on the server side to package the frames into video streams.
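On the client side, that could be as simple as swapping each img for a video element and letting the browser handle requesting and buffering (the URLs here are just placeholders):

<!-- The browser fetches these with range requests and buffers them itself -->
<video src="/streams/camera1.mp4" autoplay muted playsinline></video>
<video src="/streams/camera2.mp4" autoplay muted playsinline></video>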
I'm trying to create a cool visualization for music using the Spotify Web API (https://developer.spotify.com/documentation/web-api/reference/).
What I'm trying to do is first fetch what the user is playing and their progress, and then fetch the track analysis as well.
Fetching the currently played song is possible on this endpoint: https://developer.spotify.com/documentation/web-api/reference/player/get-the-users-currently-playing-track/
The important stuff for me from the response is basically this:
{
    "id": "H7sbas7Gda98sh...",   // ID of the song
    "timestamp": 1490252122574,  // Unix timestamp (ms) at which the data was fetched
    "progress_ms": 42567         // The progress of the track in milliseconds
}
Obviously some time elapses between the request and the time I parse the response in my application. So the plan is that I synchronize the music this way:
const auto current_time = get_current_unix_timestamp();
const auto difference = current_time - timestamp; //the timestamp that is in the response
const auto offset = progress_ms + difference; //the progress_ms that is in the response
The theory is cool, but it seems like the clocks on Spotify's servers and on my system are not synchronized, because I usually get values like -1638 for difference, which is obviously not good: it would mean that I parsed the data sooner than it was fetched.
So my question is: what options do I have to synchronize my clock with Spotify's servers? If that's not possible, what options do I have to synchronize the music properly? I couldn't find anything in the Spotify documentation, although it should be possible, because there are already existing applications that do the same thing I'm trying to do (e.g. https://www.kaleidosync.com/).
It seems that synchronization is not currently practical, because while the docs say that the "timestamp" field signifies when the API gave you the data, it actually does not do what it says because of some issue on their side: https://github.com/spotify/web-api/issues/1073
Instead, the timestamp seems to change only when a new song starts or when playback is paused or seeked. This means that, as things stand, we cannot know when the API response was generated.
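In the meantime, a rough workaround (my own suggestion, not something from the Spotify docs) is to ignore the API's timestamp entirely and estimate the position from your own clock and the request round trip:

// Sketch: estimate the current playback position without trusting `timestamp`.
// Assumes the response was generated roughly halfway through the round trip.
async function getEstimatedProgressMs(accessToken) {
    const sentAt = Date.now();
    const response = await fetch("https://api.spotify.com/v1/me/player/currently-playing", {
        headers: { Authorization: "Bearer " + accessToken },
    });
    const receivedAt = Date.now();
    const data = await response.json();
    return data.progress_ms + (receivedAt - sentAt) / 2;
}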
The end goal: I'm creating a discord bot built for Final Fantasy XIV raiding. You simply tell the bot when you pull the boss and it follows a pre-defined timeline that warns everyone of each mechanic shortly before it happens. In short: It plays a pre-defined list of audio files at pre-defined times after receiving a !start command.
The good news is that the audio is playing.
The bad news is...complicated.
There are two issues and I have a feeling they're related. The first issue is that, no matter what audio file I play, the last bit (about 0.5s) gets cut off. I gave it an audio file that says "Tank buster soon" and it played "Tank buster s-" then cut out. Up until now, I've been working around this by simply adding one second of silence on the end of every sound file. It's been working. It's still getting truncated, of course, it's just that it's truncating silence.
The second issue is that, after playing one audio file, the next audio file has a short delay between when the bot tries to start playing it, and when the audio actually comes out. (In discord, I can see this as the bot cueing up their "mic" a short time before it starts playing audio.) This delay gets progressively worse with every file played, to the point where it's literally several seconds delayed. (When the delay is severe enough, I see the bot cue up for about a second, un-cue, and then re-cue when the delay finally finishes)
The code doing most of the work is as follows:
//timeline() is called once per second by a setInterval() object, while the fight is active.
//tick, callouts, dispatcher, dispatcherIndex, voiceConnection, and activeChannel are defined elsewhere in the bot.
function timeline()
{
    tick++;
    const len = callouts.length;
    for (let i = 0; i < len; i++)
    {
        if (tick == callouts[i]["time"])
        {
            // Play the callout's audio file and keep a reference to its dispatcher
            dispatcher[dispatcherIndex] = voiceConnection.playFile(callouts[i]["file"]);
            dispatcherIndex++;
            // Also post the text warning to the active channel
            activeChannel.send(callouts[i]["message"]);
        }
    }
} //end timeline()
Discord.js documentation mentions that it prefers node-opus to be used for voice functionality, but that opusscript will also work. I cannot seem to install the former, so I am using the latter.
EDIT: I've found a workaround for now. The progressive delay is "reset" when the bot leaves and rejoins voice, so I've simply made it do so when it finishes playing an audio file. Strictly speaking, it works, but I'd still like a "proper" solution if possible.
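Roughly, the workaround looks like this inside the loop in timeline() (just a sketch using the v11-style API, where the dispatcher fires an 'end' event when a file finishes; it replaces the plain playFile line above):

// Play the callout, then drop and rejoin voice once the file ends to reset the growing delay
const d = voiceConnection.playFile(callouts[i]["file"]);
d.on('end', () => {
    const channel = voiceConnection.channel;
    voiceConnection.disconnect();
    channel.join().then(connection => {
        voiceConnection = connection; // reuse the fresh connection for the next callout
    });
});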
Installing git (why the heck didn't I install git long ago?) ultimately turned out to be the correct catalyst. That, as mentioned, let me install the discord.js master branch, which handles voice better. After a couple of ffmpeg-related hiccups, I now have the bot playing complete audio files with no delay, with no workarounds required. Everything's working great now!
We are developing an app for the Spotify platform. We have a problem with the player context.
We offer our users radio stations. These radio stations are not static playlists; they are created dynamically at runtime. The radio playing process is as follows:
We play the first track with the player's playTrack method.
Then our algorithm determines the next track to be played and sends it to the client.
After the currently playing track finishes, we load the new track, again with the player's playTrack method.
The process works fine if the player has no context prior to starting our radios. But if there is already a context (for example, the user starts playing a playlist on Spotify and then starts a radio with our app), the player continues to play the previous context.
The playTrack method does not change the player's current context. Is there a way to play a single track using the playContext method, or to destroy the player's context?
I'd say that populating a temporary playlist (Playlist.createTemporary) would be the more straightforward implementation. Then your playlist would become the context.
You can still limit the number of songs provided (which seems to be desirable in your case), because you can dynamically add new songs to the end of the playlist while it is playing. You can also remove songs from the beginning as you go.
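A rough sketch of that approach, assuming the Spotify Apps API models module (everything beyond createTemporary and playContext, including the load/add calls and the track variables, is an assumption on my part):

require(['$api/models'], function (models) {
    // Create a temporary playlist to act as the player context
    models.Playlist.createTemporary('my-radio-queue').done(function (playlist) {
        playlist.load('tracks').done(function () {
            // firstTrack / nextTrack are whatever Track objects your algorithm produces
            playlist.tracks.add(firstTrack);     // seed with the first track
            models.player.playContext(playlist); // the playlist is now the context
            // As the algorithm picks more songs, keep appending:
            // playlist.tracks.add(nextTrack);
        });
    });
});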
I have a playlist, and I want to sequentially play through the tracks, but every time a new track is loaded, I want to call a function. How would I go about listening for this event?
SPPlaybackManager, the playback class in CocoaLibSpotify, doesn't automatically play tracks sequentially, so you have to manually tell it to play each time. Since you're managing that, you already know when a new track is starting playback.
Additionally, SPPlaybackManagerDelegate has a method -playbackManagerWillStartPlayingAudio:, which will let you know when audio starts hitting the speakers.