I have a MIDlet that, upon discovering something, displays some information, vibrates, flashes the screen, and makes a sound - all to get the user's attention. The problem is that the sound is not loud enough.
How do I make the phone produce the loudest sound it can? I would prefer not to add a sound file unless that's the key, and I would prefer to use the standard J2ME library, but I can settle for Nokia's library if absolutely needed.
I am mainly targeting Nokia S60 or S40.
Currently, the best I can come up with is this:
Manager.playTone(ToneControl.C4, duration, 100);
But you can hardly hear the sound this makes on some phones.
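For completeness, here is a minimal sketch of another standard-MMAPI route I am considering (untested for loudness, and the class name and note sequence are only illustrative): play a tone sequence through a Player so that both the sequence volume and the player's VolumeControl are pushed to their maximums. Whether this is actually louder than Manager.playTone(..., 100) will depend on the handset's mixer.

import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.ToneControl;
import javax.microedition.media.control.VolumeControl;

public final class AlertTone {
    public static void beep() {
        try {
            // A tone Player exposes both ToneControl and VolumeControl.
            Player p = Manager.createPlayer(Manager.TONE_DEVICE_LOCATOR);
            p.realize();

            // Two-note sequence at the maximum per-note volume (127).
            byte[] sequence = {
                ToneControl.VERSION, 1,
                ToneControl.SET_VOLUME, 127,
                ToneControl.C4, 32,
                (byte) (ToneControl.C4 + 7), 32   // G4
            };
            ToneControl tc = (ToneControl) p.getControl("ToneControl");
            tc.setSequence(sequence);

            // Push the player's own volume to its maximum as well.
            VolumeControl vc = (VolumeControl) p.getControl("VolumeControl");
            if (vc != null) {
                vc.setLevel(100);
            }
            p.start();
        } catch (Exception e) {
            // MediaException / IOException: fall back to Manager.playTone().
        }
    }
}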
I want to create a game with sound effects. When I start the game, the background music should play until the game is over. When I click on something in the game (such as buttons), a sound effect should play, but the background music stops.
How can I make the background music keep playing while the sound effect from an object is playing?
I already have these scripts...
Card script...
on openCard
play "backgroundmusic.wav" looping
end openCard
Buttons (or any object)...
on mouseup
play "sound.wav"
end mouseup
How can I play these sounds together?
Update: I found a game uploaded to a Game Jam, where it was ranked #1. When I play it, the sound is exactly what I want: it has background music and sound effects at the same time. But the owner hasn't uploaded the LiveCode stack file, so I can't study it. The game is called Space Shooter Game.
Note:
As I figured out from the answers, using a player object can work, but that requires QuickTime, which I don't have installed on my PC. I also want the sound to be able to play on mobile devices.
As it stands, the soundChannel property has no effect in LiveCode and is only provided for HyperCard compatibility.
Currently on desktop there are two ways to do multi-channel sound: 1) play imported sounds as one channel, and use a player object as the second channel, or 2) use two player objects.
Typically, a good option is to import short sound effects into the stack and play each one once, reserving the player object for background music. Imported sounds usually play with the least latency; however, you cannot play multiple imported sounds simultaneously -- attempting to play a second sound while a first is playing will stop the first in order to play the second. If you need to play overlapping sound effects, this option alone will not work; you must use a combination of playback options.
Multiple players can be used, but note that there can be some latency during the process of loading a sound (assigning a sound's filepath to a player) and playing it.
Also note that truly seamless looped playback of a track is difficult if not impossible -- LiveCode will at some point become susceptible to some system event that causes a slight pause between loops. A while back, Trevor Devore made an addition to his Enhanced QuickTime external that enabled true seamless looping of audio. However, with Apple getting rid of QuickTime, it's unknown how much longer this option will be useful.
With the enhancements that the RunRev guys have been making to the engine, it's likely we'll see improvements in media playback and management, hopefully sooner rather than later.
In the LiveCode forums, they suggest using player objects on the card instead and telling them to play.
In HyperCard, you could set the soundChannel property for that. Have you checked in the LiveCode documentation whether it supports that? The docs for the play command and the sound property might also help; maybe those contain hints. FWIW, in HC
set the soundChannel to 1
play "BackgroundMusic"
set the soundChannel to 2
play "SoundEffect"
would play the sound effect and background music at the same time. Maybe that's how it works in LiveCode as well?
The multimedia capabilities are going through a transformation. Previously, everything (well, almost everything) was built around QuickTime, and you needed to add a player control for each concurrent sound. Currently the whole foundation is being changed because Apple dropped QuickTime, but assuming you develop for desktop you should still (again) be able to add a player object and then use:
start player "name of player"
You can also create a player object dynamically with
create player "my player"
and then use
set the filename of player "my player" to "/path/to/your/audio/file"
before starting your sound. And as long as you have different players for your different sounds, they should play simultaneously.
on openCard
-- play the background track on its own named channel so it keeps looping
put specialFolderPath("engine") & "/soundfx/backgroundmusic.wav" into tSound
mobilePlaySoundOnChannel tSound, "Background", "looping"
end openCard
on mouseup
play "sound.wav"
end mouseup
I'm trying to make a video tutorial, so I decided to record the speeches using an online TTS service.
I use Audacity to capture the sound, and at first the sound was clear!
After dinner, I wanted to finish the last speeches, but the sound wasn't the same anymore: there is a disturbing background noise (interference). I removed it with Audacity, but despite this the voice still isn't the same...
You can see here the difference between the recording of the same speech before and after the problem occurred.
The codec used by the stereo mix device is "IDT High Definition Codec".
Thank you.
Perhaps some cable or plug got loose? Do check for this!
If you are using really cheap gear (a built-in sound card and the like), it might very well also be a problem of electrical interference; anything from ...
Switching on some device emitting an electromagnetic field (e.g. another monitor close by)
Repositioning electrical devices on your desk
Changes in CPU load on your computer (yes, I'm serious!)
... could very well cause all kinds of noise with lo-fi sound hardware.
Generally, if you need help with audio that sounds wrong, make sure that you provide a way to LISTEN to the files, not just a visual representation.
Also, in your posted waveform graphics I can see that the latter signal is more compressed, which may point to some kind of automated levelling going on somewhere in the audio chain.
Is there a difference between audio.play() and media.play(), and which one is better?
The audio.* API calls use the OpenAL audio layer to play. They are considered a safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once. You can control the volume on each channel independently, pause and resume, fade in, fade out, etc. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, so you cannot control the volume or have multiple sounds playing at once. The media.* calls are good for video and for playing long clips such as podcasts, since that audio can be backgrounded. More importantly, on Android, Google has decided to implement OpenAL poorly, and under 4.x there is a significant lag between the time you tell audio.play() to play a sound and it actually happening. The lag isn't as bad under 2.2 and 2.3, but there still is a lag. The media.* API calls, if you're playing a short clip, will play it in a timely fashion.
media API: Only one sound can be playing at a time with this API. Calling it with a different sound file will stop the existing sound and play the new one.
When I try to take a screenshot of my desktop, I find that the area of the Windows Media Player window is empty, with nothing in it. I googled for a while and found that most video players use overlay surfaces for performance, and overlay surfaces cannot be captured. One suggested idea is to disable DirectDraw acceleration so that you can grab a still image from live video, but once the player has launched it is already using hardware acceleration, and even if I disable hardware acceleration it does not take effect until I relaunch the player. My question is: how can I capture an image from live video without disabling DirectDraw acceleration? Or, how can I make the setting (disabling hardware acceleration) take effect without relaunching the video player?
I won't play the video with my own program; I just want to take a still image while it is being played by a 3rd-party player such as Windows Media Player or RealPlayer, etc.
I want to do this programmatically, say with C/C++ and DirectX, so I don't want to use any existing software or tools.
No matter which player is in use, my program should be able to capture it. I know some tools can do this, like CapTrue and Tencent QQ, so I think it is possible.
A workaround can be to use VLC to play your file; it has a screenshot option built in.
AFAIK, this is an intentional "feature" in WMP, for protection. If you need to use WMP, then you need a decent screen grabber. Unfortunately, the ones I know of, like HyperSnap, are not free.
If you only want a screengrab of a frame, VLC is your friend, like #zdd said.
Is it possible to disable noise cancellation for the microphone in Android (specifically 1.5) via code?
I want to create a dumb MicrophoneApp that records all the background noise, but I believe that noise cancellation for the microphone is getting in the way. I know you can do it if you root your phone and edit settings (i.e. this article), but I want to do it without rooting the phone.
Noise filters on Android's audio recording sources vary greatly from device to device. It wasn't until Ice Cream Sandwich that any sort of definition was put into the device compatibility document specifying a method for getting unfiltered audio: that method is to use the MediaRecorder.AudioSource.VOICE_RECOGNITION audio source. Before that, it's just a matter of choosing a setting and hoping for the best. I've found that prior to 4.0 some devices work better with MIC and some with VOICE_RECOGNITION. HTC seems to have started the use of VOICE_RECOGNITION as a no-filter zone pre-ICS.
Since there is no loop-back audio interface, you can't even detect the filtering, but you can surface the different audio paths to the user to choose from.
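To make that concrete, here is a minimal sketch (my assumption of how to wire it up, not code from the answer above): open an AudioRecord on the VOICE_RECOGNITION source. Note that MediaRecorder.AudioSource.VOICE_RECOGNITION only exists from API level 7 (Android 2.1), so it cannot help on 1.5, and the class name, sample rate, and buffer sizing here are arbitrary.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RawMicSource {
    private static final int SAMPLE_RATE = 44100;

    // Builds an AudioRecord on the source that ICS documents as unprocessed.
    // On pre-4.0 devices this may or may not bypass the noise filter.
    public AudioRecord createRecorder() {
        int minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        return new AudioRecord(
                MediaRecorder.AudioSource.VOICE_RECOGNITION,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 2);
    }
}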
I don't think you can change the microphone behaviour without rooting the phone. Noise cancellation is more a function of a second microphone than of software, and altering hardware behaviour would require superuser privileges.
OK, noise detection and cancellation is done using two microphones: Android simply takes the difference of the two signals to recover a clean signal of the speaker. The Sony Ericsson Neo has its noise mic on the back of the phone; if you simply disable that second mic, you will get the full signal.