I'm using pixi-sound.js and want to be able to skip to a specific point in the audio file. I've achieved this before using HTML5 audio by updating the currentTime, but I'm not sure where to access this with pixi-sound. There are at least two currentTime values, as well as 'progress', in the object, but changing those doesn't cause a skip.
var sound = PIXI.sound._sounds['track01'];
var currenttime = sound.media.context.audioContext.currentTime;
I would have thought this would be a common use case, but I can't find any reference to it in the documentation. Any ideas much appreciated.
In PixiJS Sound, you can pass an options object as an argument to the play method of a Sound instance. The value of options.start is the "start time offset in seconds".
References: @pixi/sound v4.2.0 source, pixi-sound v3.0.5 source
In your code, if you want to play the sound from a 10-second offset, you can try the following:
var sound = PIXI.sound._sounds['track01'];
sound.play({ start: 10 }); // play from 10 seconds offset
Good day to you all,
I have a problem; maybe someone can provide some helpful ideas on how to implement this, or whether it's even possible:
I want to record an RTSP stream from an IP cam, and I would like to add some text information and a logo to the recording so they are visible on playback.
To do so, I first created one MediaPlayer element to connect to the IP cam, duplicate the stream onto the display, and recast it via UDP.
using (var stream01_view = new Media(libVLC, "rtsp://192.168.10.214:5554",FromType.FromLocation))
{
stream01_view.AddOption(
":sout=#duplicate{" +
"dst=display{noaudio}," +
"dst=std{access=udp,mux=ts,dst=:1234}");
stream01_view.AddOption(":sout-keep");
player.Play(stream01_view);
}
The second stream connects to the local UDP cast and outputs to a file
using (var stream01_record = new Media(libVLC, "udp://#:1234", FromType.FromLocation))
{
stream01_record.AddOption(":sout=#transcode{sfilter=marq}:file{mux=ts,dst=VideoMarqLogo.mp4}");
stream01_record.AddOption(":sout-keep");
recorder.Play(stream01_record);
}
Calling the MediaPlayer methods SetMarqueeInt and SetMarqueeString doesn't give the expected result.
Thanks to mfkl for pointing me in the right direction.
The thing that does the trick is:
stream01_record.AddOption(":sout=#transcode{ vcodec=h264, scale=0.75, " +
"sfilter=marq{file='marq.txt',position=9}," +
"vfilter=logo{file='logo.png',position=6}}" +
":file{mux=ts,dst=VideoMarqLogo.mp4}");
A bit of a warning though, this piece of code is CPU intensive.
I'm left wondering if there could be a way to do this using GPU encoding.
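One untested idea: the transcode chain can select an encoder module via venc, and "qsv" (Intel Quick Sync) would be one candidate. The module name and whether your libvlc build actually ships it are assumptions; this is purely a speculative sketch.
// Speculative, untested sketch: pick a hardware encoder module via venc.
// "qsv" is an assumption; it must be present in your libvlc build.
stream01_record.AddOption(
    ":sout=#transcode{vcodec=h264,venc=qsv," +
    "sfilter=marq{file='marq.txt',position=9}," +
    "vfilter=logo{file='logo.png',position=6}}" +
    ":file{mux=ts,dst=VideoMarqLogo.mp4}");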
I'm using the SpotifyAPI-NET on GitHub from JohnnyCrazy to play and pause songs on my Spotify desktop client. This works fine.
Now I want to change the playing position of the currently playing song. I simply want to call something like SetPlayingPosition(64) to play the current song from position 01:04. It seems that the SpotifyLocalAPI doesn't support this feature.
To play and pause a song the API uses a message with the following format:
http://127.0.0.1:4381/remote/pause.json?pause=true&ref=&cors=&_=1520448230&oauth=oauth&csrf=csrf
I tried to find a summary of possible commands in this format, but I didn't find anything.
Is there something like http://127.0.0.1:4381/remote/seek.json... that I can use to seek to a specific position?
EDIT:
I tried to write my own method in the RemoteHandler class in the local portion of the SpotifyAPI. With this method I can set the position in the current playback.
Here's my code:
internal async Task SendPositionRequest(double playingPositionSec) //The desired playback position in seconds
{
StatusResponse status = GetNewStatus(); //Get the current status of the local desktop API
string trackUri = "spotify:track:" + status.Track.TrackResource.ParseUri().Id; //The URI of the current track
TimeSpan playingPositionTimeSpan = TimeSpan.FromSeconds(playingPositionSec);
string playingPosStr = playingPositionTimeSpan.ToString(@"mm\:ss"); //Convert the playingPosition to a string (Format mm:ss)
string playingContext = "spotify:artist:1EfwyuCzDQpCslZc8C9gkG";
await SendPlayRequest(trackUri + "#" + playingPosStr, playingContext);
if (!status.Playing) { await SendPauseRequest(); }
}
I need to call the SendPlayRequest() method with the correct playingContext because when the current song is part of a playlist and you call SendPlayRequest() without the context, the next song isn't from the playlist anymore.
But you can see that I use a fixed context at the moment.
So my question is now: How can I get the context (playlist, artist, ...) of the currently played song with the SpotifyLocalAPI?
The SeekPlayback method of the library you mentioned lets you seek through playback on whatever device your user is listening on. You can find the docs here.
Seeking playback is not currently possible using the Spotify Local API portion of that library.
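For example, a minimal sketch assuming you already have an authorized SpotifyWebAPI instance with the user-modify-playback-state scope (the token values below are placeholders):
// Minimal sketch: seek the active device's playback to 01:04.
// SeekPlayback takes the position in milliseconds.
var api = new SpotifyWebAPI { AccessToken = "your-token", TokenType = "Bearer" };
ErrorResponse err = api.SeekPlayback(64 * 1000);
if (err.HasError())
    Console.WriteLine(err.Error.Message);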
As explained here, OffsetSampleProvider can be used to play a specific portion of an audio file, like this:
AudioFileReader AudioReader = new AudioFileReader("x.wav");
OffsetSampleProvider OffsetProvider = new OffsetSampleProvider(AudioReader);
OffsetProvider.SkipOver = TimeSpan.FromSeconds(5);
OffsetProvider.Take = TimeSpan.FromSeconds(8);
myWaveOut.Init(OffsetProvider);
myWaveOut.Play();
The above example will play audio for 8 seconds, starting at second 5. However, if I want to play it again, it will not play unless I set the Position property of the AudioFileReader to 0 and re-create a new instance of OffsetSampleProvider from it. So I would like to know if I'm missing something, or whether this is the way OffsetSampleProvider should be used (and if it is, whether I have to free any resources related to it).
You could copy the code for OffsetSampleProvider and add a Reset method to it. I'd also avoid using SkipOver for performance reasons and just set the CurrentTime of the AudioFileReader to 5 seconds directly before you play.
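A minimal sketch of that suggestion, reusing the names from your question:
// Seek the reader itself instead of using SkipOver, then limit with Take.
AudioFileReader AudioReader = new AudioFileReader("x.wav");
OffsetSampleProvider OffsetProvider = new OffsetSampleProvider(AudioReader);
OffsetProvider.Take = TimeSpan.FromSeconds(8);
AudioReader.CurrentTime = TimeSpan.FromSeconds(5); // jump straight to second 5
myWaveOut.Init(OffsetProvider);
myWaveOut.Play();
// To replay you would rewind with AudioReader.CurrentTime again, but note
// that OffsetSampleProvider's internal Take counter is not reset; that is
// why copying the class and adding a Reset method is suggested for reuse.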
I have an EmbeddedMediaPlayerComponent and I want to check, before playing, whether the video has an audio track.
The getMediaPlayer().getAudioTrackCount() method works fine, but only when the video is playing and I am inside the public void playing(MediaPlayer mp) event.
I also tried
getMediaPlayer().prepareMedia("/path/to/media", null);
getMediaPlayer().play();
System.out.println("TRACKS: "+getMediaPlayer().getAudioTrackCount());
But it does not work; it says 0.
I also tried:
MediaPlayerFactory factory = new MediaPlayerFactory();
HeadlessMediaPlayer p = factory.newHeadlessMediaPlayer();
p.prepareMedia("/path/to/video", null);
p.parseMedia();
System.out.println("TRACKS: "+p.getAudioTrackCount());
But this says -1. Is there a way I can do this, perhaps using another technique?
The track count is not metadata, so using parseMedia() here is not going to help.
parseMedia() will work to get e.g. ID3 tag data, title, artist, album, and so on.
The track data is usually not available until after the media has started playing, since it is the particular decoder plugin that knows how many tracks there are. Even then, it is not always available immediately after the media has started playing, sometimes there's an indeterminate delay (and no LibVLC event).
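If you can live with that delay, one workaround is to poll for the track count after the playing event fires. A rough sketch; the 100 ms interval and roughly 5-second timeout are arbitrary choices:
// Rough sketch: there is no "track info available" event, so poll briefly.
mediaPlayer.addMediaPlayerEventListener(new MediaPlayerEventAdapter() {
    @Override
    public void playing(MediaPlayer mediaPlayer) {
        new Thread(() -> {
            for (int i = 0; i < 50; i++) { // give up after ~5 seconds
                int tracks = mediaPlayer.getAudioTrackCount();
                if (tracks > 0) {
                    System.out.println("TRACKS: " + tracks);
                    return;
                }
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }).start();
    }
});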
In applications where I need the track information before playing the media, I usually use something like the native MediaInfo application and parse the output; it has a plain-text output format, an XML output format, and IIRC the newer versions have a JSON output format. The downside is you have to launch a native process to do this; I use CommonsExec for things like this. It's pretty simple and does work, even though it's not a pure Java solution, but neither is vlcj!
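For example, a rough sketch with Commons Exec; the "mediainfo" binary name and the --Output=XML flag are assumptions about the version you have installed:
import java.io.ByteArrayOutputStream;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;

public class MediaInfoProbe {
    // Runs the native MediaInfo CLI and returns its XML report as a string.
    public static String probe(String mediaPath) throws Exception {
        CommandLine cmd = new CommandLine("mediainfo");
        cmd.addArgument("--Output=XML");
        cmd.addArgument(mediaPath, false); // false = don't re-quote the path
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DefaultExecutor executor = new DefaultExecutor();
        executor.setStreamHandler(new PumpStreamHandler(out));
        executor.execute(cmd);
        return out.toString(); // parse this for the audio track entries
    }
}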
A slight aside: if you did actually want the metadata, there is an easier way. Just use this method on the MediaPlayerFactory:
public MediaMeta getMediaMeta(String mediaPath, boolean parse);
This gives you the meta data without having to prepare, play or parse media.
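For example, a minimal sketch (remember to release the MediaMeta, since it holds native resources):
MediaPlayerFactory factory = new MediaPlayerFactory();
MediaMeta meta = factory.getMediaMeta("/path/to/video", true); // true = parse now
System.out.println("Title: " + meta.getTitle());
meta.release();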
Ok, I hope I don't mess this up; I have had a look for some answers but can't find anything. I am trying to make a simple sampler in openFrameworks using the FMOD sound player in 3D mode. I can make a single instance work fine (recording a new file using libsndfilerecorder and then playing it back and moving it in surround).
However I want to have 8 layers of looping audio that I can record and replace one layer at a time in a live show. I get a lot of problems as soon as I have more than 1 layer.
The first part of my question relates to the FMOD 3D modes: it is listener relative, so I have to define the position of my listener for every sound (I would prefer head-relative mode, but I cannot make that work at all). Again, this works fine when I am using a single player, but with multiple players only the last listener I update actually works.
The main problem I have is that when I use multiple players I get distortion, and often a mix of other currently playing sounds (even when the microphone cannot hear them) in my new recordings. Is there an incompatibility between libsndfilerecorder and FMOD?
Here I initialise the players
for (int i=0; i<CHANNEL_COUNT; i++) {
lvelocity[i].set(1, 1, 1);
lup[i].set(0, 1, 0);
lforward[i].set(0, 0, 1);
lposition[i].set(0, 0, 0);
sposition[i].set(3, 3, 2);
svelocity[i].set(1, 1, 1);
//player[1].initializeFmod();
//player[i].loadSound( "1.wav" );
player[i].setVolume(0.75);
player[i].setMultiPlay(true);
player[i].play();
setupHold[i]=false;
recording[i]=false;
channelHasFile[i]=false;
settingOsc[i]=false;
}
When I am recording, I unload the file and make sure the position of the player that is not loaded is not updated.
void fmodApp::recordingStart( int recordingId ){
if (recording[recordingId]==false) {
setupHold[recordingId]=true; //this stops the position updating
cout<<"Start recording Channel " + ofToString(recordingId+1)+" setup hold is true \n";
pt=getDateName() +".wav";
player[recordingId].stop();
player[recordingId].unloadSound();
audioRecorder.setup(pt);
audioRecorder.setFormat(SF_FORMAT_WAV | SF_FORMAT_PCM_16);
recording[recordingId]=true; //this starts the libSndFIleRecorder
}
else {
cout<<"Channel" + ofToString(recordingId+1)+" is already recording \n";
}
}
And I stop the recording like this.
void fmodApp::recordingEnd( int recordingId ){
if (recording[recordingId]==true) {
recording[recordingId]=false;
cout<<"Stop recording " + ofToString(recordingId+1)+" \n";
audioRecorder.finalize();
audioRecorder.close();
player[recordingId].loadSound(pt);
setupHold[recordingId]=false;
channelHasFile[recordingId]=true;
cout<< "File recorded channel " + ofToString(recordingId+1) + " file is called " + pt + "\n";
}
else {
cout << "Sorry track" + ofToString(recordingId+1) + "is not recording";
}
}
I am careful not to interrupt the updating process but I cannot see where I am going wrong.
Many Thanks
To deal with the distortion, I think you will need to lower the volume of each channel on playback; try setting the volume to 1/8 of the maximum. There isn't any limiting going on, so if the sum of the sounds exceeds 1.0f the output will clip and it will sound bad.
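For instance, a minimal sketch against the initialisation loop from your question:
// Minimal sketch: scale each channel's gain by the channel count so the
// summed output stays at or below 1.0f.
for (int i = 0; i < CHANNEL_COUNT; i++) {
    player[i].setVolume(1.0f / CHANNEL_COUNT); // 8 channels -> 0.125 each
}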
To deal with crosstalk when recording: I guess you have some sort of feedback going on with the output, i.e. the output sound is being fed back into the input channel, probably by the operating system. If you run another app that makes sound, do you get that in your recording as well? If so, that is probably your problem.
If it works with one channel, try it with just 2 instead of jumping straight up to 8 channels.
In general I would try to abstract the playback/record logic and the sound player/recorder into a separate class. You have a couple of booleans there, and it's really easy to make mistakes with more than one boolean. Is there any way you can replace the booleans with an enum or an integer state variable?
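Something like this, as a minimal sketch (the state names are made up for illustration):
// Minimal sketch: one state value per channel replaces the parallel
// boolean arrays, so a channel can only be in one state at a time.
enum ChannelState { CHANNEL_EMPTY, CHANNEL_RECORDING, CHANNEL_LOADED };
ChannelState channelState[CHANNEL_COUNT];

// e.g. in recordingStart():
// if (channelState[recordingId] != CHANNEL_RECORDING) {
//     channelState[recordingId] = CHANNEL_RECORDING;
//     ...
// }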
EDIT: I didn't see the date on your question :D I suppose you've managed to do it by now. Maybe this helps somebody else.
I'm not sure if I can answer everything in your question, but I can share how I've worked with 3D sound in FMOD. I haven't worked with recording, though.
For my own application, a user can place sounds in 3D space around himself. For this I only have one Listener and multiple Sounds. In your code you're making a listener for every sound; are you sure that is necessary? I would imagine that this causes the multiple listeners to pick up multiple sounds and output them to your sound card. So from the second sound+listener onward, both listeners pick up both sounds? I'm not 100% sure, but it sounds plausible to me.
I made a class to create sound objects (and one listener). Then I use a vector to store the objects and move trough them to render them.
My class SoundBox basically holds all the necessary things for FMOD.
Making a "SoundBox" object and adding it to my soundboxes vector:
SoundBox * box = new SoundBox(box_loc, box_rotation, box_color);
box->loadVideo(ofToDataPath(video_files[soundboxes.size()]));
box->loadSound(ofToDataPath(sound_files[soundboxes.size()]));
box->setVolume(1);
box->setMultiPlay(true);
box->updateSound(box_loc, box_vel);"
box->play();
soundboxes.push_back(box);
Constructor for the SoundBox. I use a similar constructor in the same class for the listener, but since the listener will always be at the origin for me, it doesn't take any arguments and just sets all the listener locations to 0. The constructor for the listener only gets called once, while the one for the Sound gets called whenever I want to make a new one (don't mind the box_color; I'm drawing physical boxes in this case):
SoundBox::SoundBox(ofVec3f box_location, ofVec3f box_rotation, ofColor box_color) {
_box_location = box_location;
_box_rotation = box_rotation;
_box_color = box_color;
sound_position.x = _box_location.x;
sound_position.y = _box_location.y;
sound_position.z = _box_location.z;
sound_velocity.x = 0;
sound_velocity.y = 0;
sound_velocity.z = 0;
}
Then I just use a for loop to go through them and play them if they're not playing. I also have some similar code to select them and move them around.
for(auto box = soundboxes.begin(); box != soundboxes.end(); box++){
if(!(*box)->getIsPlaying())
(*box)->play();
}
I really hope this helps. I'm not a very experienced programmer, but this is how I got FMOD with multiple sounds to work in openFrameworks, and I hope you can use some of it. I just dumped as much of my code as I could :D
My main suggestion is to use one listener instead of several. Also, having a class for making the sounds is useful if you, for instance, want to relocate the sounds after the initial placement.
Hope it helps and good luck :)