I'm making a game in XNA 4.0 and I want two sound effects to play in a row to form a sentence (e.g. The first sound effect may say "You have selected" while the second says "Game number 1"). The current code that I'm using makes the two sound effects play at the same time. How can I make them play one after the other?
soundEffectOne.Play();
soundEffectTwo.Play();
If you're calling both in the same update, then it makes sense that they'd both play at once.
You'll need to add some logic that checks whether the first sound effect has finished playing, and only then start the second one.
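For example, a minimal sketch (the field names are illustrative, and it assumes you keep SoundEffectInstances around so you can poll their State):

// Requires: using Microsoft.Xna.Framework.Audio;
SoundEffectInstance firstInstance;
SoundEffectInstance secondInstance;
bool secondPending;

// e.g. in LoadContent(), after loading the two SoundEffects:
firstInstance = soundEffectOne.CreateInstance();
secondInstance = soundEffectTwo.CreateInstance();

// To start the sentence:
firstInstance.Play();
secondPending = true;

// In Update(), start the second clip once the first has stopped:
if (secondPending && firstInstance.State == SoundState.Stopped)
{
    secondInstance.Play();
    secondPending = false;
}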
I am developing on a Linux system using the latest (at the moment) SDL2 (2.0.8) + OpenGL ES 2.0 (GLSL 1.0), eventually targeting a Raspberry Pi 3 board. So far I have done a few things like drawing text with FreeType, drawing lines, text boxes (editable), text lists, waveform boxes (all I need to pass to a function is an array of vertices) and other shapes with glDrawArrays().

Now, there are things that need to be refreshed at, let's say, 10 times per second and others that need refreshing once per second. What would be the best approach to skip re-rendering everything 10 times per second? Obviously OpenGL works by drawing everything from scratch on every 'frame'. However, I know other approaches exist, such as rendering on top of the screen you already have, or taking a screenshot and rendering only the fast-changing things on top of it.

What do you think would be the best approach to avoid re-doing everything before calling SDL_GL_SwapWindow()? How can I take a screenshot, render it to the invisible buffer, then render only the fast-changing objects on top, and then call SDL_GL_SwapWindow()?
This is a screen shot of the app so far drawing basic things
Thanks in advance.
I eventually realized that I should not have posted this question in the first place, but since this is a place where people learn from others, I now feel somewhat nicer :). The thing I had to do was simply stop clearing the invisible buffer (I will call it that for simplicity) and render on top of it only the controls that change. A control that changes is updated by covering its area with a rectangle and then drawing the new content over that area. I have already done it, and the frame rate just 'exploded'.

I do not really think there is a better approach, since this way almost no extra work is required. All I had to do was add a few if conditions that selectively render or skip each control at the point where the code iterates through the controls to be drawn, deciding what to render and what not. However, a well-thought-out set of structures is required for every control, instead of endlessly declaring and defining global variables, which only makes things confusing and difficult to maintain.
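The project is C with SDL2, but the bookkeeping is language-agnostic, so here is a rough C# sketch of the same dirty-flag idea (Control, CoverWithBackgroundRect and Draw are stand-ins for your own structures and GL calls):

using System;
using System.Collections.Generic;

class Control
{
    public bool Dirty;                      // set by whatever updates the control
    public Action CoverWithBackgroundRect;  // paints over the control's stale area
    public Action Draw;                     // redraws just this control
}

class Renderer
{
    readonly List<Control> controls = new List<Control>();

    public void RenderFrame()
    {
        // Note: no clear call here -- the invisible buffer keeps its
        // previous contents, so unchanged controls cost nothing.
        foreach (var c in controls)
        {
            if (!c.Dirty) continue;       // skip everything that hasn't changed
            c.CoverWithBackgroundRect();  // cover the old pixels
            c.Draw();                     // draw the new content on that area
            c.Dirty = false;
        }
        // SDL_GL_SwapWindow(window) would be called here.
    }
}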
Regards to all.
I want to create a game with sound effects. When I start the game, the background music should play until the game is over. When I click on something in the game (such as a button), a sound effect should play, but currently the background music stops.

How can I make the background music play continuously while an object's sound effect is playing?
I already have these scripts...
Card script...
on openCard
play "backgroundmusic.wav" looping
end openCard
Buttons (or any object)...
on mouseup
play "sound.wav"
end mouseup
How can I play these sounds together?
Update: I found a game uploaded to Game Jam. This game was ranked #1. When I played it, the sound was amazing: it has background music and sound effects. But the owner of this game hasn't uploaded the LiveCode stack file, so I can't study it. The game is entitled Space Shooter Game. The sound in this game is what I expect.
Note:
As I figured out from the answers, using the player object can work. But this requires QuickTime, which I don't have installed on my PC. I also want the sound to be playable on mobile devices.
As it stands, the soundChannel property has no effect in LiveCode and is only provided for HyperCard compatibility.
Currently on desktop there are two ways to do multi-channel sound: 1) play imported sounds as one channel, and use a player object as the second channel, or 2) use two player objects.
Typically, a good option is to import short sounds into the stack as sound effects and reserve the player object for background music. Imported sounds usually play with the least latency; however, you cannot play multiple imported sounds simultaneously -- attempting to play a second sound while a first is playing will stop the first in order to play the second. If you need to play asynchronous sound effects, this option alone will not work; you must use a combination of playback options.
Multiple players can be used, but note that there can be some latency during the process of loading a sound (assigning a sound's filepath to a player) and playing it.
Also note that truly seamless looped playback of a track is difficult if not impossible -- LiveCode will at some point become susceptible to some system event that causes a slight pause between loops. A while back, Trevor DeVore made an addition to his Enhanced QuickTime external that enabled true seamless looping of audio. However, with Apple getting rid of QuickTime, it's unknown how much longer this option will be useful.
With the enhancements that the RunRev guys have been making to the engine, it's likely we'll see improvement with media playback and management, hopefully sooner rather than later.
In the LiveCode forums, they suggest using player objects on the card instead and telling them to play.
In HyperCard, you could set the soundChannel property for that. Have you checked in the LiveCode documentation whether it supports that? The docs for the play command and the sound property might also help. Maybe those contain hints. FWIW, in HC
set the soundChannel to 1
play "BackgroundMusic"
set the soundChannel to 2
play "SoundEffect"
would play the sound effect and background music at the same time. Maybe that's how it works in LiveCode as well?
The multimedia capabilities are going through a transformation. Previously everything (well, almost everything) was built around QuickTime, and you needed to add a player control for each concurrent sound. Currently the whole foundation is changing since Apple dropped QuickTime, but assuming you develop for desktop you should still (again) be able to add a player object and then use:
start player "name of player"
You can also create a player object dynamically with
create player "my player"
and then use
set the filename of player "my player" to "/path/to/your/audio/file"
before starting your sound. As long as you have different players for your different sounds, they should play simultaneously.
on openCard
put specialFolderPath("engine") & "/soundfx/backgroundmusic.wav" into tSound
mobilePlaySoundOnChannel tSound, "Background", "looping"
end openCard
on mouseUp
put specialFolderPath("engine") & "/soundfx/sound.wav" into tSound
-- play the effect on its own channel so it doesn't stop the background music
mobilePlaySoundOnChannel tSound, "Effects", "now"
end mouseUp
Is there a way to trigger an audio file to play in the After Effects timeline when a layer has visible content?

It's a small click sound: when the text layer's IN point is reached, I simply want the click WAV file to play. Any help would be appreciated.
You have to use scripts to change anything other than keyframe-able layer and effect parameters. You might be able to fake a "click" effect with an expression by triggering a momentary change in the volume of a constant noise audio layer, and using markers to trigger it.
I think using the start times of other layers is problematic because writing an expression that would check any number of layers would involve some kind of for-loop that could get complicated, and you can't easily pass values or variables among different expressions. The question with expressions in AE is always whether the solution saves you time in the long run over just doing it manually, so it depends on your needs.
The quickest way to do it would probably be to just pre-comp your sound effect and whatever layer it needs to match, so that each time the pre-comp plays, you also get the click.
Try pressing period (.). After Effects doesn't let you listen to audio while scrubbing, because you're not looking at the true frame rate. So if you click RAM Preview and play your timeline, you will hear your audio files. But in your case, if you press period (.), it will override this and play your audio file. I use it when placing a small accent or foley sound.
I don't really know if it is actually possible, but I believe it can be made. How feasible is it to make a program that recognizes the different sounds bouncing from the screen and turns them into a position that will obviously later be fed to the mouse?

I know that it sounds kind of dumb, but lately I've been noticing that a very dull, strong sound is made when touching the screen, and that sound varies when touching different positions. Probably the microphone "hears" differently because the screen acts as a drum with the casing. Anyway, what do you think? Does anyone have experience programming with sound?
First of all, most domestic touch screens work by detecting pressure via a criss-cross mesh layer underneath the display layer.

However, I have seen an example where a touch interface was integrated onto a pane of glass. It used 4 microphones, one at each corner; when you tapped a certain part of the screen, it measured the delay in the sound reaching each microphone, allowing it to triangulate the touch.

This is the methodology you would use, and you don't even need to set up the hardware to test it: you could throw up an interface in VB where clicking in a box sends out a circular wave, then calculate where the pointer is from the times the wave takes to reach the 4 corner points.
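To illustrate, here is a small C# sketch of that simulation. The screen dimensions, the corner microphone layout, and the assumption that you can measure absolute arrival times are all mine; real hardware only gets time differences between microphones, which makes the maths harder.

using System;

// Simulate a tap emitting a circular wave on a W x H screen, compute its
// arrival time at corner microphones, then recover the tap position.
class TapTriangulation
{
    const double W = 0.60;   // screen width in metres (assumed)
    const double H = 0.40;   // screen height in metres (assumed)
    const double C = 343.0;  // speed of sound in air, m/s

    static double ArrivalTime(double micX, double micY, double tapX, double tapY)
    {
        double dx = tapX - micX, dy = tapY - micY;
        return Math.Sqrt(dx * dx + dy * dy) / C;
    }

    static void Main()
    {
        double tapX = 0.45, tapY = 0.12;  // the simulated tap

        // Arrival times at three corner mics: (0,0), (W,0) and (0,H).
        // A fourth mic at (W,H) would just add redundancy for error-checking.
        double t0 = ArrivalTime(0, 0, tapX, tapY);
        double t1 = ArrivalTime(W, 0, tapX, tapY);
        double t2 = ArrivalTime(0, H, tapX, tapY);

        // Turn the times back into distances and intersect the circles.
        // Subtracting the circle equations pairwise cancels the squared
        // terms, leaving x and y directly.
        double d0 = t0 * C, d1 = t1 * C, d2 = t2 * C;
        double x = (W * W + d0 * d0 - d1 * d1) / (2 * W);
        double y = (H * H + d0 * d0 - d2 * d2) / (2 * H);

        Console.WriteLine($"Recovered tap at ({x:F3} m, {y:F3} m)");
    }
}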
EDIT
As nikie suggested, drag & drop, or any other kind of gesture, would be impossible using the microphone method, as the technique needs a wave of sound to detect the input.
http://computer.howstuffworks.com/question7161.htm
I don't know if this will get you far, but you can investigate the techniques used in MIDI drums for returning various nuances of play.
I have two sound files:
Sound A is an 18 second intro designed to be played once
Sound B is a 1 minute looping track
I'd like to play Sound A once, then once Sound A is done, immediately play Sound B and keep looping Sound B until I tell it to stop. This is supposed to be looping town music in an RPG.
I've tried doing this in code using just SoundEffect, but there's a tiny yet noticeable gap between the end of Sound A and the beginning of Sound B. Even if I put monitoring code watching Sound A's SoundEffectInstance.State in the Update() function, I haven't been able to start Sound B exactly when Sound A finishes so that it's seamless.
I'd prefer to use SoundEffect because I can load WMA files rather than being stuck with WAVs in XACT.
A potential second option. I'd imagine that the gap you're hearing is because the second sound needs to be either loaded into memory, or initialized, or its stream opened (I'm not sure of the internal implementation). But if that is the case, I wonder if you could do something like this:
Load the two sound effects (a and b)
Begin playing sound effect A
immediately begin playing sound effect B
pause sound effect B after one frame
when sound effect A finishes, resume sound effect B
My assumption is that since sound effect B is already initialized, it may start up quicker, thus lessening the perceived gap. Would love to hear if you get a chance to try this, and whether it worked :-)
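A rough sketch of that sequence with SoundEffectInstance (untested, as the idea itself is; introEffect and loopEffect stand in for your two loaded SoundEffects):

// Requires: using Microsoft.Xna.Framework.Audio;
SoundEffectInstance intro = introEffect.CreateInstance();
SoundEffectInstance loop = loopEffect.CreateInstance();
loop.IsLooped = true;

// Start both together, then pause the loop a frame later so it sits
// primed rather than cold. (The loop's first instant will briefly play
// under the intro -- whether that's audible is worth testing.)
intro.Play();
loop.Play();
// ...one frame later, in Update():
loop.Pause();

// Then in every Update(): resume the loop the moment the intro stops.
if (intro.State == SoundState.Stopped && loop.State == SoundState.Paused)
{
    loop.Resume();
}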
Unfortunately, simplicity in the SoundEffect API is paid for with reduced flexibility. This type of audio programming is what XACT excels at ... if you require a complex composition then you probably want to investigate moving to XACT.
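For reference, with XACT the intro-then-loop arrangement is authored in the XACT tool (a cue whose sound plays the intro wave once followed by an infinitely looping wave), so the game code can reduce to playing a single cue. Roughly, with placeholder project, bank and cue names:

// Requires: using Microsoft.Xna.Framework.Audio;
AudioEngine audioEngine = new AudioEngine(@"Content\Audio\GameAudio.xgs");
WaveBank waveBank = new WaveBank(audioEngine, @"Content\Audio\Wave Bank.xwb");
SoundBank soundBank = new SoundBank(audioEngine, @"Content\Audio\Sound Bank.xsb");

// Play the cue; the intro/loop chaining lives in the authored content.
Cue townMusic = soundBank.GetCue("TownMusic");
townMusic.Play();

// Call this once per frame, e.g. at the top of Update():
audioEngine.Update();

// Later, to stop the music:
// townMusic.Stop(AudioStopOptions.AsAuthored);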