I'm trying to create a music game in Godot. It's similar to playing a piano: the player presses keys, each key has a sound assigned, and the music plays. What I want is to have a kind of recorder inside the game, to be able to save the created melody and export it to a file like MP3 or WAV. I'm using GDScript and I've been researching this but wasn't able to find anything.
This problem has two parts: first we need to actually play the audio, and second we need to record it and save it to a file. And we have two options for playing the audio… Let us begin there.
Playing
Playing audio samples
As you probably know, you can use AudioStreamPlayer(2D/3D) to play audio. If you have samples of an instrument, you can set up multiple of these, one for each note, and then play them individually by calling play().
Alright, but what if you don't have a sample for each note? Just one.
Well, you can use pitch_scale on your AudioStreamPlayer(2D/3D), but that also shortens the note. Instead…
Look at the bottom of the Godot editor; there is a panel called Audio. There you can configure audio buses. Add a new bus and add the PitchShift effect to it. If you select the effect, you can configure it in the Inspector panel. Now, in your AudioStreamPlayer(2D/3D), you can select the bus that has the effect, and there you go.
Except you would have to set up a lot of these. So, let us do that from code instead.
First, adding an AudioStreamPlayer(2D/3D) is just adding a node:
var player := AudioStreamPlayer.new()
add_child(player)
We want to set the sample, so we also need to load the sample:
var sample := preload("res://sample.ogg")
var player := AudioStreamPlayer.new()
player.stream = sample # <--
add_child(player)
By the way, double check in the import settings of your sample (with the file selected in the FileSystem panel, go to the Import panel) whether it has looping enabled or not.
To set the audio bus, you can do this:
var sample := preload("res://sample.ogg")
var player := AudioStreamPlayer.new()
player.stream = sample
player.bus = "Bus Name" # <--
add_child(player)
And that brings me to adding an audio bus:
AudioServer.add_bus()
Wait, hmm… OK, let us add the bus explicitly at the end, so we know its index:
var bus_index := AudioServer.get_bus_count()
AudioServer.add_bus(bus_index)
And give it a name:
var bus_index := AudioServer.get_bus_count()
AudioServer.add_bus(bus_index)
AudioServer.set_bus_name(bus_index, "Bus Name") # <--
And let us route it to Master:
var bus_index := AudioServer.get_bus_count()
AudioServer.add_bus(bus_index)
AudioServer.set_bus_name(bus_index, "Bus Name")
AudioServer.set_bus_send(bus_index, "Master") # <--
Yes, it is set to Master by default. But you might need to change that.
And now we can add the effect:
var pitch_shift_effect := AudioEffectPitchShift.new()
pitch_shift_effect.pitch_scale = 0.1
AudioServer.add_bus_effect(bus_index, pitch_shift_effect, 0)
That 0 at the end is the index of the effect. Since this is the only effect in a newly created audio bus, it goes at index 0.
And, well, you can write a loop that adds the players, each with its corresponding audio bus and pitch shift effect.
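Putting the pieces above together, such a loop could look like the following sketch. The sample path, the bus names, and the list of semitones are assumptions for the sake of the example; the frequency ratio for n semitones is 2^(n/12), which is what pitch_scale expects.

```gdscript
extends Node

# Assumed sample path and note set; adjust for your project.
const SAMPLE := preload("res://sample.ogg")
const SEMITONES := [0, 2, 4, 5, 7, 9, 11, 12]  # e.g. a major scale

var players := []

func _ready():
    for semitone in SEMITONES:
        # Create a bus at the end, so we know its index.
        var bus_index := AudioServer.get_bus_count()
        AudioServer.add_bus(bus_index)
        AudioServer.set_bus_name(bus_index, "Note%d" % semitone)
        AudioServer.set_bus_send(bus_index, "Master")

        # One PitchShift effect per bus; 2^(n/12) is the ratio for n semitones.
        var pitch_shift := AudioEffectPitchShift.new()
        pitch_shift.pitch_scale = pow(2.0, semitone / 12.0)
        AudioServer.add_bus_effect(bus_index, pitch_shift, 0)

        # One player per note, routed to its bus.
        var player := AudioStreamPlayer.new()
        player.stream = SAMPLE
        player.bus = AudioServer.get_bus_name(bus_index)
        add_child(player)
        players.append(player)

func play_note(index: int) -> void:
    players[index].play()
```

You would then call play_note from your input handling, one index per piano key.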
Generating audio
Alright, perhaps you don't have a sample at all. Instead you want to generate waves. We can do that too.
This time we are going to have an AudioStreamPlayer(2D/3D) and give it an AudioStreamGenerator (you find it in the drop-down menu of the stream property in the Inspector panel). Edit the resource to set the mix rate you want to work with; by default it is 44100. But we are going to be generating this audio as close to real time as possible, so a lower rate might be necessary.
Now, in a script, we are going to get the playback object from the AudioStreamPlayer(2D/3D):
onready var _playback := $Player.get_stream_playback()
Or if you are placing the script in the player, because why not:
onready var _playback := get_stream_playback()
We can push audio frames to the playback object with playback.push_frame, which takes a Vector2 where each component is one of the stereo channels (x = left, y = right).
We are going to call get_frames_available to figure out how many we need to push, and we are going to be pushing them every graphics frame (i.e. in _process).
The following script will generate a sine wave with frequency 440 Hz (an A):
extends AudioStreamPlayer

onready var _playback := get_stream_playback()
onready var _sample_hz: float = stream.mix_rate

var _pulse_hz := 440.0
var _phase := 0.0


func _ready():
    _fill_buffer()


func _process(_delta):
    _fill_buffer()


func _fill_buffer():
    var increment := _pulse_hz / _sample_hz
    for frame_index in _playback.get_frames_available():
        _playback.push_frame(Vector2.ONE * sin(_phase * TAU))
        _phase = fmod(_phase + increment, 1.0)
You can set the AudioStreamPlayer to autoplay.
This code is adapted from the official Audio Generator demo.
We, of course, may want to play multiple of these notes at the same time.
So, I created a Note class that looks like this (just add a new script in the FileSystem panel):
extends Object
class_name Note

var increment: float
var pulse_hz: float
var phase: float


func _init(hz: float, sample_hz: float):
    pulse_hz = hz
    phase = 0.0
    increment = pulse_hz / sample_hz


func frame() -> float:
    var result := sin(phase * TAU)
    phase = fmod(phase + increment, 1.0)
    return result
And now we can play them like this:
extends AudioStreamPlayer

onready var _playback := get_stream_playback()
onready var _sample_hz: float = stream.mix_rate
onready var notes := [
    Note.new(440.0, _sample_hz),
    Note.new(554.37, _sample_hz),
    Note.new(622.25, _sample_hz)
]


func _ready():
    _fill_buffer()


func _process(_delta):
    _fill_buffer()


func _fill_buffer():
    var note_count := notes.size()
    for frame_index in _playback.get_frames_available():
        var frame := 0.0
        for note in notes:
            frame += note.frame()
        _playback.push_frame(Vector2.ONE * frame / note_count)
Recording
To record, we are going to add a "Record" (AudioEffectRecord) effect to an audio bus. Let us say you added it to Master (the first bus) and it is the first effect there; then we can get it from code like this:
var record_bus_index := 0 # Master
var record_effect_index := 0 # First Effect
var record_effect = AudioServer.get_bus_effect(record_bus_index, record_effect_index) as AudioEffectRecord
Then we need to start recording, like this:
record_effect.set_recording_active(true)
And when we are done recording we can get what was recorded (an AudioStreamSample) and stop recording:
var recording := record_effect.get_recording()
record_effect.set_recording_active(false)
And finally we can save it to a WAV file:
recording.save_to_wav("user://recording.wav")
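Putting the recording steps together, a minimal sketch could look like this (assuming, as above, that the Record effect is the first effect on the Master bus; the function names are just for illustration):

```gdscript
extends Node

# Assumes the Record effect sits at Master (bus 0), effect slot 0.
onready var record_effect := AudioServer.get_bus_effect(0, 0) as AudioEffectRecord

func start_recording() -> void:
    record_effect.set_recording_active(true)

func stop_and_save(path: String = "user://recording.wav") -> void:
    # Grab the captured AudioStreamSample, then stop capturing.
    var recording := record_effect.get_recording()
    record_effect.set_recording_active(false)
    recording.save_to_wav(path)
```

You would call start_recording when the player hits your in-game record button and stop_and_save when they are done.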
See also the official Audio Mic Record demo project.
No, there is no in-engine solution to save an MP3.
Linky links
The official Audio Generator demo.
The official Audio Mic Record demo project.
The addon godot-simple-sampler.
The addon godot-midi-player.
I use the Audio and PositionalAudio parts of ThreeJS extensively, and it seems to me that the play/pause functionality cannot work correctly for looped audio.
In the ThreeJS Audio source we can read:
this._pausedAt += Math.max( this.context.currentTime - this._startedAt, 0 ) * this.playbackRate;
If I understand correctly, _pausedAt is thus the elapsed play duration within the audio file, which is fine and totally usable (in particular for straightforward playing with no loop). But when executing the play() function I see:
source.start( this._startedAt, this._pausedAt + this.offset, this.duration );
And in the MDN documentation of AudioBufferSourceNode.start():
offsets past the end of the audio which will be played (based on the audio
buffer's duration and/or the loopEnd property) are silently clamped to the
maximum value allowed.
That means that once the file has been read at least once until the end (with or without pauses), the next play() (typically after a pause()) will start at the end of the file (duration and/or loopEnd as mentioned) instead of restarting from the correct position where it was paused – somewhere in between loopStart and loopEnd.
Am I correct on this understanding?
If so, I see no better option than doing something like the code below to correct the position of the playhead (or _pausedAt):
if (sound.getLoop() === true) {
    var loopEnd = (sound.loopEnd === 0) ? sound.source.buffer.duration : sound.loopEnd;
    var loopDur = loopEnd - sound.loopStart;
    _pausedAt = (_pausedAt - sound.loopStart) % loopDur + sound.loopStart;
}
Does any of this make sense?
Thank you very much,
Benjamin
Answered on ThreeJS discourse: https://discourse.threejs.org/t/pause-for-looped-audio/16894
Fixed here: https://github.com/mrdoob/three.js/pull/19079
Is there another way to contact Benjamin? I cannot post on the ThreeJS forum.
I captured an image with the camera in the emulator and it was saved as:
"/storage/emulated/0/Download/IMG_1582623402006.jpg"
I am trying to display this image in an ImageView as follows:
Bitmap finalBitmap = BitmapFactory.decodeFile("/storage/emulated/0/Download/IMG_1582623402006.jpg");
holder.image.setImageBitmap(scaledBitmap);
but it shows that "finalBitmap" is empty. What have I missed?
Thanks in advance!
You posted this over a year ago, so it may be too old at this point.
It appears that you are using two variables for your bitmaps, finalBitmap and scaledBitmap, and not referencing the proper one in the setImageBitmap call.
Instead, have you tried this?
holder.image.setImageBitmap(finalBitmap);
Not totally sure, but adding some attributes in "options" seems to make it work in the emulator:
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = true;
options.inSampleSize = 5;
Bitmap scaledBitmap = BitmapFactory.decodeFile("/storage/emulated/0/Download/IMG_1582623402006.jpg", options);
holder.image.setImageBitmap(scaledBitmap);
My problem is that the left view and the right view display very different scenes (see picture).
Now I use VrVideoView for my 360 video player.
<com.google.vr.sdk.widgets.video.VrVideoView
    android:id="@+id/video_view"
    android:layout_width="match_parent"
    android:scrollbars="@null"
    android:layout_height="250dip"/>
full code :
enter link description here
I just changed the file name to "congo.mp4".
Use this snippet in your code:
If you are using HLS:
options.inputFormat = Options.FORMAT_HLS;
Otherwise:
options.inputFormat = Options.FORMAT_DEFAULT;
And if you want to use mono video:
options.inputType = Options.TYPE_MONO;
Otherwise:
options.inputType = Options.TYPE_STEREO_OVER_UNDER;
I've followed a few tutorials and I'm having no luck. I'm trying to add a simple .WAV sound effect to XNA using free samples. I have declared the sounds as:
SoundEffect hit1;
SoundEffect hit2;
And then loaded the content with:
hit1 = Content.Load<SoundEffect>("hit1");
But when I wire the play call up to a button press and go to test it, there's no sound at all. There are no errors or anything; the game loads and is playable, but the sound effects are not working.
These are my variable declarations:

// Sounds
SoundEffect hit1;
SoundEffect hit2;

And this is how I'm loading them in the LoadContent method:

// Sounds
hit1 = Content.Load<SoundEffect>("hit1");
hit2 = Content.Load<SoundEffect>("hit2");
// If keyboard key W is pressed or button Y
if (keys1.IsKeyDown(Keys.W) && oldKeys1.IsKeyUp(Keys.W)
    || (oldpad1.Buttons.Y == ButtonState.Released) && (pad1.Buttons.Y == ButtonState.Pressed))
{
    secondsPassed = 0;

    // If the target image is a gnome
    if (targets[0] == sprites[0])
    {
        // They whacked a gnome
        GnomeHits = GnomeHits + 1;
        runNum = 0;
        secondsPassed = 0;
        hit1.Play();
    }
    else
    {
        // They whacked a troll
        scoreNum = scoreNum + 1;
        runNum = runNum + 1;
        GnomeHits = 0;
        secondsPassed = 0;
        hit2.Play();
    }

    SetUpSpriteLoop();
}
And this is one of the control buttons I'm trying to assign sound to. When I hit F5 and run the game, pressing the key or button produces no sound at all.
I had a similar problem with Monogame (built from XNA's ashes) on a Windows 10 machine. I fixed it by reinstalling DirectX, and everything just worked.
My first recommendation would be to make sure you are updating the keyboard and gamepad states each frame; I only say that because I can't see you doing it in your code. If you don't, the key states will never change and therefore you will never enter the if statement.
Once you have ensured you are entering the if statement, I would play around with the volume of the sound effect to make sure it is actually audible. I may be a bit off base there; if I am, I suggest following this tutorial, as rbwhitaker is a great resource for XNA.