Getting positional audio in cocos2d v3

I'm having some trouble using OpenAL in cocos2d v3. Here's some context:
What I'm using: Xcode 5 + cocos2d v3 + iPhone 5 (iOS 7).
What I want to do: Create a multi-audio-source scene with positional audio (just pan + volume for now).
Why didn't I use OALSimpleAudio? I couldn't get the various sources to change pan and volume as the listener moves.
What have I done so far? I have loaded an ALBuffer with an audio file and queued that buffer into an ALSource. When I run the code, nothing ever plays. Right after the [soundSource play:soundBuffer ...] line, I checked whether the sound was playing with [soundSource playing] and got a negative response. I also tried instantiating an ALListener at the same position as the source, but nothing changed.
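For what it's worth, here is a minimal ObjectAL sketch of the flow described above (the file name and coordinates are placeholders, and it assumes the standard ObjectAL headers). One common gotcha: OpenAL only spatializes mono buffers, so a stereo file will never pan or attenuate regardless of source and listener positions.

#import "ObjectAL.h"

// Open a device and make a context current; a missing current context
// is a frequent reason sources are created but never audibly play.
ALDevice *device = [ALDevice deviceWithDeviceSpecifier:nil];
ALContext *context = [ALContext contextOnDevice:device attributes:nil];
[OpenALManager sharedInstance].currentContext = context;

// Load a MONO file into a buffer and attach it to a positional source.
ALBuffer *buffer = [[OpenALManager sharedInstance] bufferFromFile:@"mono_sound.caf"];
ALSource *source = [ALSource source];
source.position = alpoint(5.0f, 0.0f, 0.0f);           // to the listener's right
context.listener.position = alpoint(0.0f, 0.0f, 0.0f); // listener at the origin
[source play:buffer loop:YES];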
Any help regarding this problem would be appreciated. Any suggestions about different approaches I should be looking into are welcome as well.
Thank You!

Related

Will --vout=dummy option work with --video-filter=scene?

I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, no scenes are generated.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the relevant line from my LibVLCSharp code:
LibVLC libVLC = new LibVLC("--no-audio", "--no-spu", "--vout=dummy", "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap", "--scene-path=C:\\temp\\", "--scene-ratio=100", $"--rtsp-user={rtspUser}", $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration, which seems to be incompatible with the dummy vout.
In my tests, this had to be done on the media rather than globally:
media.AddOption(":avcodec-hw=none");
I still get many "too high level of recursion" errors; for those, I guess you'd better open an issue on VideoLAN's Trac.
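For reference, here is a minimal end-to-end LibVLCSharp sketch of the combination above (the RTSP URL is a placeholder and the credentials options are omitted; the key line is the per-media ":avcodec-hw=none" option):

using LibVLCSharp.Shared;

Core.Initialize();
using var libVLC = new LibVLC(
    "--no-audio", "--no-spu", "--vout=dummy",
    "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap",
    "--scene-path=C:\\temp\\", "--scene-ratio=100");
using var media = new Media(libVLC, "rtsp://camera.example/stream", FromType.FromLocation);
media.AddOption(":avcodec-hw=none");  // disable hardware decoding for this media only
using var player = new MediaPlayer(media);
player.Play();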

Convert an RTSP/RTMP-Livestream with G.711 audio into RTMP/RTSP with aac-audio

I'm new to this forum and my English skills are not the best!
I have a website where I publish the video streams of my cameras, to show live what happens inside during nesting time! A guy with good IT skills built me a little server to restream it (Datarhei Restreamer), but he no longer has time and his response times keep getting worse...
To my problem: the Restreamer doesn't support the cameras' G.711 audio codec, so the live streams on the website are still without audio. I need to convert the live streams (RTSP and RTMP, in H.264) so that the audio becomes AAC or something else that is supported, but I have no idea how to do this. I tried FFmpeg but couldn't find the right commands to get my result. There is something about a streaming server to send the newly created stream to, which I can't get into my head (I just need a stream that is viewable in VLC player and can then be used as input for my Restreamer server).
I want to convert the source stream to the correct codec (audio from G.711 to AAC, the rest as in the source) and then feed this "new" stream into my Restreamer server, and it will work fine! (Tested with XSplit Broadcaster, but that doesn't run on a Raspberry Pi; only one instance can run, while two live streams need to be encoded at the same time. And that program has annoying bugs: endless, non-dismissable error messages, though the stream keeps running.)
I have a second, new Raspberry Pi that is planned as the "live encoder" for the Restreamer Pi, where the "new" streams will go in (RTMP/RTSP input on a graphical UI). I'm still trying with FFmpeg, but still no result...
Sorry about this long text and all the language issues, but I'm really frustrated: I bought two new cameras for a total of 450 euros just to get the live stream with sound, and now this :(
Finally, I found the best solution here, and it works: https://github.com/datarhei/restreamer/issues/11. In that long discussion, use the solution posted by svenerbeck on 4 Apr 2016. The essential part is below.
Create a new live.json in /mnt/live.json with the following modification:
"ffmpeg": {
"options": {
"native_h264": [
"-vcodec copy",
"-acodec aac",
"-f flv"
],
.....
Then run the container with:
docker run ... -v /mnt/live.json:/restreamer/conf/live.json ....
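For reference, a standalone FFmpeg command along the same lines (camera and Restreamer URLs are placeholders) would copy the H.264 video untouched, re-encode the G.711 audio to AAC, and push the result as FLV over RTMP:

ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream" \
       -c:v copy -c:a aac -b:a 128k -ar 44100 \
       -f flv "rtmp://restreamer-ip/live/streamkey"

The -c:v copy keeps CPU usage low on a Raspberry Pi, since only the (tiny) G.711 audio track is re-encoded.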

XNA: I only have 1 supported display mode (800x600)

I'm trying to get my game to automatically set the window size to the correct resolution for the monitor.
For example, my desktop PC is at 1920x1080 resolution, so I want my game to run at 1920x1080 on there; however, my laptop is at 1366x768, so I want my game to run at 1366x768 on there, etc.
I've tried so many different things, such as GraphicsDevice.Adapter.CurrentDisplayMode.Width/Height, and even printed out the list of GraphicsDevice.Adapter.SupportedDisplayModes, and they all tell me that the only display mode supported for me is 800x600. This is surely not the case, because I'm running Windows 7 at 1920x1080.
So what on earth am I doing wrong? I've tried putting this code in the Game1 constructor and in the initialiser; I can't figure out why it isn't working properly!
Okay, I fixed it. I realised I was being a little bit stupid in that I forgot to mention this is a MonoGame application, not a straightforward XNA project... (I didn't think it would make a difference, but oh, I was wrong.)
As it turns out, MonoGame has a serious bug to do with graphics adapters, and there is supposedly a way to solve it (build from the latest source or something?), but what I did was install the XNA 4.0 Refresh for Visual Studio 2013 and copy all my source code across to a new XNA project as opposed to a MonoGame project.
And hey presto, GraphicsDevice.DisplayMode.Width and Height now correctly register as 1920 and 1080 pixels. So I can FINALLY carry on with my game.
Thanks to all the people who tried to help me solve this issue!
You can set the resolution of your game in the constructor by adjusting the graphics device manager's PreferredBackBufferWidth and PreferredBackBufferHeight.
For example, this will produce a game window that's 480x320:
public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    graphics.PreferredBackBufferHeight = 320;
    graphics.PreferredBackBufferWidth = 480;
}
Keep in mind that in windowed mode your game will (by default) have a title bar, which prevents the game window from being as big as your full screen.
This is my method for getting your maximum supported resolution (and setting it, as an example to clarify):
// in the Initialize method
graphics.PreferredBackBufferWidth = GraphicsDevice.DisplayMode.Width;
graphics.PreferredBackBufferHeight = GraphicsDevice.DisplayMode.Height;
graphics.IsFullScreen = false;
graphics.ApplyChanges(); // <-- not needed in the Game constructor
However, I don't know what you're doing wrong.
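As a diagnostic for the original 800x600 symptom, it can also help to dump every mode the adapter actually reports. A small sketch, assuming the default Game1 template:

// In Game1.Initialize(), once the graphics device exists.
foreach (DisplayMode mode in GraphicsDevice.Adapter.SupportedDisplayModes)
{
    System.Diagnostics.Debug.WriteLine(
        string.Format("{0}x{1} ({2})", mode.Width, mode.Height, mode.Format));
}

Under XNA this should list the monitor's real modes; under the MonoGame build discussed above, it reportedly only showed 800x600.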

Node-Webkit read MP3 files

I use the Audio class to read MP3 files thanks to a little trick: replacing Node-Webkit's ffmpegsumo.so with the Chromium one. This enables MP3 playback on Windows but doesn't work on Mac OS. Does anyone know why?
Here's the code :
var player = new Audio();
player.src = '/path/to/the/audio.mp3';
player.play();
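Before swapping libraries around, it may help to probe whether the running build advertises MP3 support at all. A small diagnostic sketch using the standard HTML5 canPlayType check:

// Prints '' when the bundled ffmpegsumo has no MP3 decoder,
// and 'maybe' or 'probably' when MP3 playback should work.
var probe = new Audio();
console.log(probe.canPlayType('audio/mpeg'));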
This seems to be dependent upon the dll/so being a 32-bit version. I am guessing that is why copying the file from Chrome doesn't work correctly for most people (my 3-year-old phone is the only 32-bit device I have left).
I keep seeing this link --
https://github.com/rogerwang/node-webkit/wiki/Support-mp3-and-h264-in-video-and-audio-tag
-- but it is a blank page. I am guessing it was deleted since the info was likely no longer current or correct.
This issue thread has links to some rebuilt ffmpegsumo libraries for both Mac and Windows --
https://github.com/rogerwang/node-webkit/issues/1423
The alternative appears to be rebuilding ffmpegsumo; this thread has some config for doing that -- https://github.com/rogerwang/node-webkit/issues/1208
I am still confused about the licensing after you build the library, so that is probably worth some research. Everything about MPEG-4 Part 10 is copyrighted and heavily patent-encumbered. I think we all need to get smart enough to stop using MP4/H.264. Before I got this working correctly on node-webkit, it was easier to use ffmpeg to transcode the video to an Ogg container using the Theora and Vorbis codecs. At this point it seems like iOS is keeping H.264 alive, when it should probably die the horrible death it has earned.

Audio error in VMware running Mac OS X

Simple synchronous loading of an audio file (.mp3) in a cocos2d app makes my VMware disconnect the sound.
The error is displayed at the bottom right, saying 'error in creating sound stream; sound is disconnected'.
I read that it might be caused by my VMware version (mine is 8), but I'm looking for a fix, not a downgrade to another version.
Before I get that error, the sound on the system works just fine (YouTube, etc.).
The exact code I'm calling is:
[CDSoundEngine setMixerSampleRate:CD_SAMPLE_RATE_MID];
[[CDAudioManager sharedManager] setResignBehavior:kAMRBStopPlay autoHandle:YES];
soundEngine = [SimpleAudioEngine sharedEngine];
[soundEngine preloadBackgroundMusic:@"somemp3.mp3"];
[soundEngine playBackgroundMusic:@"somemp3.mp3"];
Maybe the bit rate is too high..?
Thanks
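If the sample rate is the suspicion, a hypothetical experiment would be to drop the mixer rate before any audio is loaded; CD_SAMPLE_RATE_LOW is defined alongside CD_SAMPLE_RATE_MID in CocosDenshion:

// Hypothetical test: a lower mixer rate, set before preloading anything.
[CDSoundEngine setMixerSampleRate:CD_SAMPLE_RATE_LOW];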
There was a problem in the VMware setup itself.
