I'm using the SpotifyAPI-NET on GitHub from JohnnyCrazy to play and pause songs on my Spotify desktop client. This works fine.
Now I want to change the playback position of the currently playing song, so that I can call something like "SetPlayingPosition(64)" to play the current song from position "01:04". It seems that the SpotifyLocalAPI doesn't support this feature.
To play and pause a song the API uses a message with the following format:
http://127.0.0.1:4381/remote/pause.json?pause=true&ref=&cors=&_=1520448230&oauth=oauth&csrf=csrf
I tried to find a summary of possible commands in this format, but I didn't find anything.
Is there something like http://127.0.0.1:4381/remote/seek.json... that I can use to seek to a specific position?
EDIT:
I tried to write my own method in the RemoteHandler class in the local portion of the SpotifyAPI. With this method I can set the position in the current playback.
Here's my code:
internal async Task SendPositionRequest(double playingPositionSec) // desired playback position in seconds
{
    StatusResponse status = GetNewStatus(); // current status of the local desktop API
    string trackUri = "spotify:track:" + status.Track.TrackResource.ParseUri().Id; // URI of the current track
    TimeSpan playingPositionTimeSpan = TimeSpan.FromSeconds(playingPositionSec);
    string playingPosStr = playingPositionTimeSpan.ToString(@"mm\:ss"); // convert the position to a string (format mm:ss)
    string playingContext = "spotify:artist:1EfwyuCzDQpCslZc8C9gkG";
    await SendPlayRequest(trackUri + "#" + playingPosStr, playingContext);
    if (!status.Playing) { await SendPauseRequest(); }
}
I need to call the SendPlayRequest() method with the correct playingContext because, when the current song is part of a playlist and you call SendPlayRequest() without the context, the next song is no longer from that playlist.
But as you can see, I'm using a fixed context at the moment.
So my question is now: How can I get the context (playlist, artist, ...) of the currently played song with the SpotifyLocalAPI?
The SeekPlayback method of the library you mentioned lets you seek through playback on whatever device your user is listening on. You can find the docs here.
Seeking playback is not currently possible using the Spotify Local API portion of that library.
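For completeness, here is a rough sketch of what seeking could look like with the Web API portion of SpotifyAPI-NET, assuming you already have an authorized SpotifyWebAPI instance; the exact method name and signature may differ between library versions:
using System;
using SpotifyAPI.Web;

public class PlaybackSeeker
{
    private readonly SpotifyWebAPI _api; // authorized elsewhere

    public PlaybackSeeker(SpotifyWebAPI api)
    {
        _api = api;
    }

    public void SetPlayingPosition(int seconds)
    {
        // The Web API seek endpoint expects the offset in milliseconds.
        var response = _api.SeekPlayback(seconds * 1000);
        if (response.HasError())
        {
            Console.WriteLine(response.Error.Message);
        }
    }
}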
I'm using pixi-sound.js and want to be able to skip to a specific point in the audio file. I've achieved this before using HTML5 audio by updating the currentTime, but I'm not sure where to access this with pixi-sound. There are at least two currentTime values, as well as 'progress', in the object, but changing those doesn't cause a skip.
var sound = PIXI.sound._sounds['track01'];
var currenttime = sound.media.context.audioContext.currentTime;
I would have thought this would be a common usage thing, but can't find any reference to it in the documents. Any ideas much appreciated.
In PixiJS Sound, you can pass an options object as an argument to the play method of a Sound instance. The value of options.start is the "start time offset in seconds".
References: #pixi/sound v4.2.0 source, pixi-sound v3.0.5 source
In your code, if you want to play the sound from a 10-second offset, you can try the following:
var sound = PIXI.sound._sounds['track01'];
sound.play({ start: 10 }); // play from 10 seconds offset
I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and the receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio track.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as it is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
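For reference, a minimal sketch of that fix in the receiver; the setActiveById call at the end is just an example of switching tracks programmatically:
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    // Track info is available here, unlike in the LOAD interceptor.
    const audioTracksManager = playerManager.getAudioTracksManager();
    const tracks = audioTracksManager.getTracks();
    console.log('Audio tracks:', tracks);

    // Example: switch to the second audio track ("Audiodeskription" above).
    if (tracks.length > 1) {
      audioTracksManager.setActiveById(tracks[1].trackId);
    }
  });

context.start();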
I implemented a Chrome extension which uses chrome.desktopCapture.chooseDesktopMedia to retrieve a screen id.
This is my background script:
chrome.runtime.onConnect.addListener(function (port) {
    port.onMessage.addListener(messageHandler);

    // listen to "content-script.js"
    function messageHandler(message) {
        if (message == 'get-screen-id') {
            chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], port.sender.tab, onUserAction);
        }
    }

    function onUserAction(sourceId) {
        // Access denied
        if (!sourceId || !sourceId.length) {
            return port.postMessage('permission-denied');
        }
        port.postMessage({
            sourceId: sourceId
        });
    }
});
I need to get the shared monitor's info (resolution, landscape or portrait).
My question is: if the customer is using more than one monitor, how can I determine which monitor they picked?
Can I, for example, add the "system.display" permission to my extension and get the picked monitor's info from "chrome.system.display.getInfo"?
You are right. You could add the system.display permission, call chrome.system.display.getDisplayLayout(callbackFunction(displayLayouts)) and handle DisplayLayout.position in the callback to get the layout, and call chrome.system.display.getInfo to handle the array of DisplayInfo in its callback. You should look for the 'isPrimary' value.
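A minimal sketch of that idea (assuming the "system.display" permission has been added to manifest.json); note that this lists all displays but does not by itself tell you which one was picked in the capture dialog:
chrome.system.display.getInfo(function (displays) {
  displays.forEach(function (display) {
    // Derive orientation from the bounds rather than from rotation.
    var orientation = display.bounds.width >= display.bounds.height
        ? 'landscape' : 'portrait';
    console.log(display.id, display.name, display.isPrimary,
        display.bounds.width + 'x' + display.bounds.height, orientation);
  });
});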
This is a year-old question, but I came across it since I was after the same information, and I finally managed to figure out how you can identify which monitor the user selected for screen sharing in Chrome.
First of all: this information will not come from the extension that you probably built for screen-sharing in Chrome, because:
The chrome.desktopCapture.chooseDesktopMedia API callback only returns a sourceId, which is a string that represents a stream id that you can then pass to getUserMedia (as the chromeMediaSourceId constraint) to build the media stream.
The chrome.system.display.getInfo will give you a list of the displays, yes, but from that info you can't tell which one is being shared, and there is no way to match the sourceId with any of the fields returned for each display.
So... the solution I've found comes from the MediaStream object itself. Once you have the stream back from getUserMedia, you need to get the video track, and on it you will find a property called "label". This label gives you an idea of which screen the user picked.
You can get the video track with something like:
const videoTrack = mediaStream.getVideoTracks()[0];
(Check the getVideoTracks API here: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getVideoTracks).
If you print that object, you will see the "label" field. In Chrome screen 1 shows as "0:0", whereas screen 2 shows as "1:0", and I assume screen i would be "i-1:0" (I've only tested with 2 screens).
Here is a capture of that object printed in the console:
And this not only works for Chrome, but also for other browsers that implement it! In Firefox the labels show up as "Screen i".
Also, if you check chrome://webrtc-internals you'll see this is the label shown in the addStream event.
And that's it! It's not ideal, since this is a label rather than a real screen identifier, but it's something to work with. Once you have the screen identified, in Chrome you can use chrome.system.display.getInfo to get information about that display.
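Putting both pieces together, a heuristic sketch could look like this; the "N:0" label format is an observation rather than a documented contract, and the assumption that the label index matches the order of the chrome.system.display.getInfo list is mine:
function getSharedDisplayInfo(mediaStream, callback) {
  var label = mediaStream.getVideoTracks()[0].label; // e.g. "1:0" in Chrome
  var index = parseInt(label.split(':')[0], 10);

  chrome.system.display.getInfo(function (displays) {
    // Heuristic: assume the label index follows the order of the display list.
    callback(isNaN(index) ? null : displays[index]);
  });
}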
I am using jPlayer, which plays different audio files based on the user input. Every time a user enters an input, I am calling a REST web service to retrieve the audio file to play. The response from the REST service is a byte[].
What I am trying to achieve is to keep this byte array in memory instead of writing it to a file, and to use that byte[] with jPlayer. I am not sure how to get jPlayer to play a byte[].
var file = [[${audiofile}]]; // Thymeleaf inline expression
$(document).ready(function(){
    $("#jquery_jplayer_1").jPlayer({
        ready: function () {
            $(this).jPlayer("setMedia", {
                wav: file
            });
        }
    });
});
The variable file evaluates to a byte[]. When trying to play the audio, I see the following error in the console:
Uncaught TypeError: Object
-1,-5,-112,-60,0,3,19,124,-73.......
I would appreciate it if somebody has any suggestions.
Thanks
Unfortunately, this feature looks unimplemented right now. From the source code of jPlayer 2.9.2, starting at line 1945:
setMedia: function(media) {
/* media[format] = String: URL of format. Must contain all of the supplied option's video or audio formats.
* media.poster = String: Video poster URL.
* media.track = Array: Of objects defining the track element: kind, src, srclang, label, def.
* media.stream = Boolean: * NOT IMPLEMENTED * Designating actual media streams. ie., "false/undefined" for files. Plan to refresh the flash every so often.
*/
Please notice the last line. Wish I had better news.
REF: https://github.com/happyworm/jPlayer/blob/master/src/javascript/jplayer/jquery.jplayer.js
Some fellows came close to a solution, but not with JPlayer. REF: How to play audio byte array (not file!) with JavaScript in a browser
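The approach from that linked answer, adapted to the byte[] from the question, would look roughly like the sketch below (using the Web Audio API, not jPlayer; it assumes file holds the signed bytes of a complete audio file):
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Map the signed bytes (-128..127) to unsigned bytes (0..255) in an ArrayBuffer.
var bytes = new Uint8Array(file.length);
for (var i = 0; i < file.length; i++) {
  bytes[i] = file[i] & 0xff;
}

// Decode the raw file data and play it.
audioCtx.decodeAudioData(bytes.buffer, function (audioBuffer) {
  var source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start(0);
});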
I have an EmbeddedMediaPlayerComponent and I want to check, before playing, whether the video has an audio track.
The getMediaPlayer().getAudioTrackCount() method works fine but only when I play the video and I am inside the public void playing(MediaPlayer mp) event.
I also tried
getMediaPlayer().prepareMedia("/path/to/media", null);
getMediaPlayer().play();
System.out.println("TRACKS: "+getMediaPlayer().getAudioTrackCount());
But it does not work; it says 0.
I also tried:
MediaPlayerFactory factory = new MediaPlayerFactory();
HeadlessMediaPlayer p = factory.newHeadlessMediaPlayer();
p.prepareMedia("/path/to/video", null);
p.parseMedia();
System.out.println("TRACKS: "+p.getAudioTrackCount());
But this also says -1. Is there a way I can do that, or another technique I can use?
The track count is not metadata, so using parseMedia() here is not going to help.
parseMedia() will work to get e.g. ID3 tag data, title, artist, album, and so on.
The track data is usually not available until after the media has started playing, since it is the particular decoder plugin that knows how many tracks there are. Even then, it is not always available immediately after the media has started playing, sometimes there's an indeterminate delay (and no LibVLC event).
In applications where I need the track information before playing the media, I usually use something like the native MediaInfo application and parse its output - it has a plain-text output format and an XML output format, and IIRC the newer versions have a JSON output format. The downside is that you have to launch a native process to do this; I use Commons Exec for things like this. It's pretty simple and does work, even though it's not a pure Java solution - but neither is vlcj!
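To illustrate, a rough sketch of that approach with Commons Exec (the mediainfo binary name and the --Output flag value depend on the MediaInfo version installed):
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;
import java.io.ByteArrayOutputStream;

public class MediaInfoRunner {

    public static String getMediaInfoXml(String mediaPath) throws Exception {
        CommandLine cmd = new CommandLine("mediainfo");
        cmd.addArgument("--Output=XML");
        cmd.addArgument(mediaPath, false); // don't re-quote the path

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DefaultExecutor executor = new DefaultExecutor();
        executor.setStreamHandler(new PumpStreamHandler(out));
        executor.execute(cmd);

        // Parse this output, e.g. count the audio track elements.
        return out.toString("UTF-8");
    }
}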
A slight aside: if you did actually want the metadata, there is an easier way. Just use this method on the MediaPlayerFactory:
public MediaMeta getMediaMeta(String mediaPath, boolean parse);
This gives you the meta data without having to prepare, play or parse media.
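For example (a sketch based on vlcj 3.x naming; remember to release the native resources when done):
MediaPlayerFactory factory = new MediaPlayerFactory();
MediaMeta meta = factory.getMediaMeta("/path/to/video", true); // parse = true
try {
    System.out.println("Title : " + meta.getTitle());
    System.out.println("Artist: " + meta.getArtist());
    System.out.println("Album : " + meta.getAlbum());
} finally {
    meta.release();
    factory.release();
}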