Simple synchronous loading of an audio file (.mp3) in a cocos2d app makes my VMware disconnect the sound.
The error is displayed at the bottom right, saying 'error in creating sound stream; sound is disconnected'.
I read that it might be caused by my VMware version (mine is 8), but I'm looking for a fix, not a downgrade to another version.
Before I get that error, the sound on the system works just fine (YouTube, etc.).
The exact code I'm calling is:
[CDSoundEngine setMixerSampleRate:CD_SAMPLE_RATE_MID];
[[CDAudioManager sharedManager] setResignBehavior:kAMRBStopPlay autoHandle:YES];
soundEngine = [SimpleAudioEngine sharedEngine];
[soundEngine preloadBackgroundMusic:@"somemp3.mp3"];
[soundEngine playBackgroundMusic:@"somemp3.mp3"];
Maybe the bit rate is too high?
Thanks.
It turned out to be a problem in VMware itself; the code was fine.
I'm attempting to play a signal through the computer speakers via Python, so I tried the example in section 7.1 of the manual. I see the signal on the plot but hear nothing over the speakers. Is SoundSink the wrong approach for this? I'm running REDHAWK 2.0.6 on CentOS 7. In case this is important: the first time sb.start() is called, "shared memfd open() failed: Invalid argument" is displayed; when it's called a second time, that message doesn't appear. I am able to play audio from within the REDHAWK IDE.
from ossie.utils import sb
import frontend

# Build the FM demodulation chain: simulator -> demod -> filter -> resampler -> AGC
sim = sb.launch("rh.FmRdsSimulator")
demod = sb.launch("rh.AmFmPmBasebandDemod")
filter = sb.launch("rh.fastfilter")
resample = sb.launch("rh.ArbitraryRateResampler")
agc = sb.launch("rh.agc")
sink = sb.SoundSink()
plot = sb.Plot()

sim.connect(demod)
demod.connect(filter, usesPortName="fm_dataFloat_out")
filter.connect(resample)
resample.connect(agc)
agc.connect(sink)
agc.connect(plot)

# Configure the components
sim.addAWGN = False
demod.freqDeviation = 15000.0
filter.filterProps.freq1 = 16000.0
filter.filterProps.Ripple = 0.5
filter.filterProps.Type = "lowpass"
resample.outputRate = 32000.0

sb.start()

# Tune the simulator by allocating its RX_DIGITIZER tuner
alloc = frontend.createTunerAllocation(
    "RX_DIGITIZER",
    allocation_id="testing",
    center_frequency=100.1e6,
    sample_rate=256e3,
    sample_rate_tolerance=20.0)
sim.allocateCapacity(alloc)
The package that SoundSink uses to generate sound has been deprecated in CentOS 7 (but is still available in CentOS 6). This issue has been added to the backlog and will be fixed in a future release.
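In the meantime, one possible workaround (a sketch only, untested against 2.0.6, using the standard sandbox helpers) is to capture the samples with sb.DataSink and write them out as a WAV file that any system player can handle:

import wave, struct
from ossie.utils import sb

# ...same component chain as in the question, but replace sb.SoundSink() with:
sink = sb.DataSink()
agc.connect(sink)
sb.start()
# ...let it run for a while, then:
data = sink.getData()  # list of float samples
sb.stop()

# Assumes samples are roughly in [-1.0, 1.0]; clamp and scale to 16-bit PCM.
wav = wave.open('demod.wav', 'w')
wav.setnchannels(1)
wav.setsampwidth(2)
wav.setframerate(32000)  # matches resample.outputRate above
wav.writeframes(b''.join(struct.pack('<h', int(max(-1.0, min(1.0, s)) * 32767)) for s in data))
wav.close()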
I am trying to get started using Haskell's Euterpea library. My first goal was to get it to play a given sound file (e.g. MP3 or WAV), but first I ran into an issue following the instructions to get it to play a simple note in GHCi.
Following the "Setting up MIDI" instructions at Euterpea's web page, I ran
import Euterpea
play $ c 4 qn
in GHCi. The play command resulted in the following error message:
Prelude Euterpea> play $ c 4 qn
*** Exception: No MIDI output device found
CallStack (from HasCallStack):
error, called at ./Euterpea/IO/MIDI/MidiIO.lhs:122:18 in Euterpea-2.0.2-Iz37iWlkpjn2emP4FnvOI1:Euterpea.IO.MIDI.MidiIO
I thought I needed to specify a MIDI output on my machine (macOS Sierra) and found an application called 'Audio MIDI Setup', but it showed that a MIDI output (my internal speakers) was already specified.
Does anyone know what this issue is or how to fix it?
Perhaps you've solved this by now, but for posterity, some ideas:
It sounds like you didn't install and run a MIDI synth (e.g. SimpleSynth) first. AFAICT, Audio MIDI Setup doesn't actually include a software synthesizer; it's more for advanced/hardware MIDI setup.
Running a synth should create the MIDI output devices that Euterpea couldn't find. You may also need to play around with device numbers (e.g. use playDev n instead of play and work out a value for n from your device list, or even just try 1 through 8); a sketch follows.
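A minimal GHCi sketch of that, assuming Euterpea 2.x and a synth such as SimpleSynth already running (the device number is a placeholder; substitute the output device ID that devices reports):

import Euterpea

checkAndPlay :: IO ()
checkAndPlay = do
  devices            -- prints the MIDI input/output devices Euterpea can see
  playDev 1 (c 4 qn) -- play middle C through output device 1 (placeholder)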
Either way, there's some good Mac-focused help on Donya's "Working with MIDI on Mac OS X" page. Hope that helps.
I'm trying to get my game to automatically set the window size to the correct resolution for the monitor.
For example, my desktop PC is at 1920x1080 resolution, so I want my game to run at 1920x1080 on here, however my laptop is at 1366x768 so I want my game to run at 1366x768 on there, etc.
I've tried so many different things, such as GraphicsDevice.Adapter.CurrentDisplayMode.Width/Height, and I even printed out the list of GraphicsDevice.Adapter.SupportedDisplayModes; they all tell me that the only supported display mode is 800x600. This is surely not the case, because I'm running Windows 7 at 1920x1080.
So what on earth am I doing wrong? I've tried putting this code in the Game1 constructor and in the initialiser, and I can't figure out why it isn't working properly!
Okay, I fixed it. I realised I was being a little bit stupid: I forgot to mention this is a "MonoGame" application, not a straightforward XNA project (I didn't think it would make a difference, but oh, I was wrong).
As it turns out, MonoGame has a massive bug to do with graphics devices. There is supposedly a way to solve it (build from the latest source or something?), but what I did was install the XNA 4.0 Refresh for Visual Studio 2013 and copy all my source code across to a new XNA project as opposed to a MonoGame project.
And hey presto, GraphicsDevice.DisplayMode.Width and Height now correctly register as 1920 and 1080 pixels. So now I can FINALLY carry on with my game.
Thanks to all the people that tried to help me solve this issue!
You can set the resolution of your game in the constructor by adjusting the graphics' PreferredBackBufferWidth and PreferredBackBufferHeight:
For example, this will produce a game window that's 480x320:
public Game1()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
graphics.PreferredBackBufferHeight = 320;
graphics.PreferredBackBufferWidth = 480;
}
Keep in mind that in windowed mode your game will (by default) have a title bar, which prevents the game window from being as big as the full screen.
This is my method for getting the maximum supported resolution (and setting it, as an example to clarify):
// in the Initialize method
graphics.PreferredBackBufferWidth = GraphicsDevice.DisplayMode.Width;
graphics.PreferredBackBufferHeight = GraphicsDevice.DisplayMode.Height;
graphics.IsFullScreen = false;
graphics.ApplyChanges(); // <-- not needed in the Game constructor
However, I don't know what you're doing wrong.
I use the Audio class to read an MP3 file thanks to a little trick: replacing Node-Webkit's ffmpegsumo.so with the Chromium one. This enables MP3 playback on Windows, but it doesn't work on Mac OS. Does anyone know why?
Here's the code:
var player = new Audio();
player.src = '/path/to/the/audio.mp3';
player.play();
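In case it helps to diagnose: listening for the standard media error event reports how playback fails (a sketch; error code 3 is a decode failure, 4 means the source format isn't supported):

var player = new Audio();
player.addEventListener('error', function () {
    // MediaError codes: 1 = aborted, 2 = network, 3 = decode, 4 = src not supported
    console.log('audio error code:', player.error && player.error.code);
});
player.src = '/path/to/the/audio.mp3';
player.play();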
This seems to be dependent upon the DLL/.so being a 32-bit version. I am guessing that is why copying the file from Chrome doesn't work correctly for most people (my 3-year-old phone is the only 32-bit device I have left).
I keep seeing this link --
https://github.com/rogerwang/node-webkit/wiki/Support-mp3-and-h264-in-video-and-audio-tag
-- but it is a blank page. I am guessing it was deleted since the info was likely no longer current or correct.
This issue thread has links to some rebuilt ffmpegsumo libraries for both Mac and Windows --
https://github.com/rogerwang/node-webkit/issues/1423
The alternative appears to be rebuilding ffmpegsumo, this thread has some config for doing that -- https://github.com/rogerwang/node-webkit/issues/1208
I am still confused about the licensing once you've built the library, so that is probably worth some research. Everything about MPEG-4 Part 10 (h.264) is copyrighted and heavily patent-encumbered. I think we all need to get smart enough to stop using mp4/h.264. Before I got this working correctly on node-webkit, it was easier to use ffmpeg to transcode the video to an Ogg container using the Theora and Vorbis codecs (see the command below). At this point it seems like iOS is keeping h.264 alive, when it should probably die the horrible death it has earned.
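For reference, a typical invocation for that transcode (assuming an ffmpeg build with libtheora and libvorbis enabled; file names are placeholders):

ffmpeg -i input.mp4 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 output.ogv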
I'm having some trouble using OpenAL in cocos2d v3. Here's some context:
What I'm using: Xcode 5 + cocos2d v3 + iPhone 5 (iOS 7).
What I want to do: Create a multi-audio-source scene with positional audio (just pan + volume for now).
Why didn't I use OALSimpleAudio? I couldn't get the various sources to change pan and volume as the listener moves.
What have I done so far? I have loaded an ALBuffer with an audio file and queued that buffer in an ALSource. When I run the code, nothing ever plays. Right after the [soundSource play:soundBuffer ...] line, I checked whether the sound was playing with [soundSource playing] and got a negative response. I also tried to instantiate an ALListener with the same position as the source, but nothing changed. A condensed version follows.
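Roughly, the code in question (reconstructed; the file name is a placeholder, using the ObjectAL classes bundled with cocos2d v3):

// Load a buffer and try to play it through a raw source.
ALBuffer *soundBuffer = [[OpenALManager sharedInstance] bufferFromFile:@"sound.caf"];
ALSource *soundSource = [ALSource source];
[soundSource play:soundBuffer loop:NO];
NSLog(@"playing? %d", soundSource.playing); // logs 0 -- the source never plays

// Also tried a listener at the same position as the source:
[OpenALManager sharedInstance].currentContext.listener.position = soundSource.position;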
Any help regarding this problem would be appreciated, and any suggestions about different approaches I should consider are welcome as well.
Thank you!