I have some MP3 audio clips in several scenes, and I have one button in my main scene. I want that, when the user presses this button, my app plays no MP3 sounds at all. Do I have to duplicate my app without sounds, or how can I do this?
I have this, but it doesn't work:
using UnityEngine;

public class AudioApp : MonoBehaviour {
    public void SonidoApp() {
        PlayerPrefs.SetInt("MP3AudioState", 1);
    }
    public void SonidoAppmute() {
        PlayerPrefs.SetInt("MP3AudioState", 0);
    }
}
What you can do is, when the user presses the button, save a value in PlayerPrefs (for example, 0 for unmuted and 1 for muted):
PlayerPrefs.SetInt("MP3AudioState", 1);
Then, wherever you play the audio, all you need to check is the value you saved:
if (PlayerPrefs.GetInt("MP3AudioState") == 0)
{
    // Play the sound
}
So if the saved value is 0 at the time the sound is about to be played, you let it play; if it is 1, you don't.
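Put together, a guarded play call could look like the following sketch (the GuardedAudio class name and audioSource field are illustrative assumptions, not part of the original code):

using UnityEngine;

public class GuardedAudio : MonoBehaviour
{
    public AudioSource audioSource; // hypothetical field, assigned in the Inspector

    // Call this instead of calling audioSource.Play() directly
    public void PlayIfUnmuted()
    {
        // 0 = unmuted, 1 = muted, matching the convention above
        if (PlayerPrefs.GetInt("MP3AudioState", 0) == 0)
            audioSource.Play();
    }
}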
To disable all audio in your scene, simply set the volume of the AudioListener to zero.
The AudioListener is attached to your camera by default, but note that its volume property is static, so you can set it without a GetComponent call:
AudioListener.volume = 0f;
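A sketch of a persistent mute toggle (the class and method names are my own, for illustration) could combine both ideas, saving the state with PlayerPrefs and applying it through the static AudioListener.volume property:

using UnityEngine;

// Illustrative sketch; the class and method names are arbitrary
public class MuteToggle : MonoBehaviour
{
    void Start()
    {
        // Re-apply the saved state on launch (0 = unmuted, 1 = muted)
        AudioListener.volume = PlayerPrefs.GetInt("MP3AudioState", 0) == 0 ? 1f : 0f;
    }

    // Hook this up to the button's OnClick to flip between muted and unmuted
    public void ToggleMute()
    {
        bool muteNow = PlayerPrefs.GetInt("MP3AudioState", 0) == 0;
        PlayerPrefs.SetInt("MP3AudioState", muteNow ? 1 : 0);
        AudioListener.volume = muteNow ? 0f : 1f;
    }
}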
I have two 3D buttons in my scene. When I gaze at either of them, the OnPointerEnter callback is invoked and I save the object the pointer gazed at.
Upon pressing Fire1 on the gamepad, I apply materials loaded from the Resources folder.
My problem started when I gazed at the second button: pressing the Fire1 button awkwardly changed both buttons at the same time.
This is the script I attached to both of the buttons:
using UnityEngine;
using UnityEngine.EventSystems;
using Vuforia;
using System.Collections;

public class TriggerMethods : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    Material _mat;
    GameObject targetObject;
    Renderer rend;
    int i = 0;

    // Update is called once per frame
    void Update ()
    {
        if (Input.GetButtonDown("Fire1"))
            TukarMat();
    }

    public void OnPointerEnter(PointerEventData eventData)
    {
        targetObject = ExecuteEvents.GetEventHandler<IPointerEnterHandler>(eventData.pointerEnter);
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        targetObject = null;
    }

    public void TukarMat()
    {
        Debug.Log("Value i = " + i);
        if (i == 0)
        {
            ApplyTexture(i);
            i++;
        }
        else if (i == 1)
        {
            ApplyTexture(i);
            i++;
        }
        else if (i == 2)
        {
            ApplyTexture(i);
            i = 0;
        }
    }

    void ApplyTexture(int i)
    {
        rend = targetObject.GetComponent<Renderer>();
        rend.enabled = true;
        switch (i)
        {
            case 0:
                _mat = Resources.Load("Balut", typeof(Material)) as Material;
                rend.sharedMaterial = _mat;
                break;
            case 1:
                _mat = Resources.Load("Khasiat", typeof(Material)) as Material;
                rend.sharedMaterial = _mat;
                break;
            case 2:
                _mat = Resources.Load("Alma", typeof(Material)) as Material;
                rend.sharedMaterial = _mat;
                break;
            default:
                break;
        }
    }
}
I sense some logic error here. I tried writing another class to manage only the object the pointer gazed at, but I just got more confused.
I hope to get some help. Thank you.
TukarMat() is being called on both buttons when you press Fire1. If targetObject were really becoming null, this would throw an error on the first button, since it tries to get a component from a null object; otherwise it will change both buttons, as you said. Make sure OnPointerExit is actually being called.
Also, it seems you are changing the shared material.
The documentation suggests:
Modifying sharedMaterial will change the appearance of all objects using this material, and change material settings that are stored in the project too.
It is not recommended to modify materials returned by sharedMaterial. If you want to modify the material of a renderer use material instead.
So try changing the material property instead of sharedMaterial, since it will change the material for that object only.
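Putting both fixes together, a minimal sketch could look like this (the static gazedButton field is my own addition for illustration, not part of the original script):

using UnityEngine;
using UnityEngine.EventSystems;

public class TriggerMethods : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    // Only one button can be gazed at a time, so track it in a single static field
    static TriggerMethods gazedButton;

    int i = 0;

    void Update()
    {
        // Only the instance currently under the gaze reacts to Fire1
        if (gazedButton == this && Input.GetButtonDown("Fire1"))
            TukarMat();
    }

    public void OnPointerEnter(PointerEventData eventData)
    {
        gazedButton = this;
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        if (gazedButton == this)
            gazedButton = null;
    }

    void TukarMat()
    {
        ApplyTexture(i);
        i = (i + 1) % 3; // cycle 0 -> 1 -> 2 -> 0
    }

    void ApplyTexture(int index)
    {
        string[] materialNames = { "Balut", "Khasiat", "Alma" };
        Material mat = Resources.Load(materialNames[index], typeof(Material)) as Material;
        Renderer rend = GetComponent<Renderer>();
        rend.enabled = true;
        rend.material = mat; // per-object instance, not the shared asset
    }
}

This way each button changes only its own material, and only while it is the one being gazed at.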
I have the script (#2) below with a public AudioClip variable. When I add it at runtime via AddComponent, it loses that reference.
I've tested adding it manually to an object in the editor, and in that case it works fine. Why does my script lose the reference when added during runtime?
GameObject 1 has this script (#1) attached:
void HitByRay() {
    clock = GameObject.FindGameObjectWithTag("Clock_Arrow").GetComponent<Clock>();
    clock.gameObject.AddComponent<Tick_Audio_Script>();
}
Which attaches the following script (#2) to the 'Clock' object.
using UnityEngine;

public class Tick_Audio_Script : MonoBehaviour
{
    public int safetyCounter;
    float gapToNext;
    public AudioClip tickAudio;

    // Use this for initialization
    void Start ()
    {
        startTicker(100);
    }

    void startTicker(int maxTicks)
    {
        safetyCounter = maxTicks;
        gapToNext = 1f;
        playTick();
    }

    void playTick()
    {
        Debug.Log("Tick");
        if (gapToNext < 0.1f || safetyCounter == 0)
        {
            Debug.Log("We're done...!");
            return;
        }
        // **ERROR HERE: CLIP NOT FOUND**
        AudioSource.PlayClipAtPoint(tickAudio, gameObject.transform.position, 1f);
        gapToNext = gapToNext * 0.97f;
        safetyCounter--;
        Invoke("playTick", gapToNext);
    }
}
Here's the script in the editor, where I've assigned the AudioClip.
But when it's attached via AddComponent (after I hit Play and 'hit' my trigger object, which attaches this script), the reference to that clip does not come through. This results in a null reference error, as there is no clip to be played.
My AudioListener (located on a different object) is working, as other sounds in the scene play correctly.
Again, when I add this script manually to an object in the editor before running, it works. Why is this?
This has a simple solution. You can do one of the following:
1) Create a prefab with the needed references and add that to the clock.
2) Do this:
void HitByRay() {
    clock = GameObject.FindGameObjectWithTag("Clock_Arrow").GetComponent<Clock>();
    Tick_Audio_Script tick = clock.gameObject.AddComponent<Tick_Audio_Script>();
    // NEW PART: assign the reference in code, right after adding the component
    tick.tickAudio = desiredAudioClip; // your AudioClip reference goes here
}
Hope this helps.
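For context: Inspector-assigned references are serialized with the scene or prefab instance they were set on, so a component created at runtime via AddComponent always starts with default (null) fields. If you would rather make the component self-sufficient, a sketch of a fallback in its Start method (the Resources path "tick" is hypothetical) could be:

void Start()
{
    // A component added via AddComponent starts with default (null) fields,
    // so load the clip ourselves if nothing was assigned
    if (tickAudio == null)
        tickAudio = Resources.Load<AudioClip>("tick"); // hypothetical path: Assets/Resources/tick.wav
    startTicker(100);
}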
I am trying to play a sound when a particle collides with a wall. Right now it just plays the sound from the parent object, which is the player.
However, I want the sound to play from the particle, so that when a particle is far to the left, you faintly hear the sound coming from the left.
Is there a way to play the sound from a particle when it collides?
You can use OnParticleCollision and the ParticlePhysicsExtensions, and play a sound with PlayClipAtPoint:
using UnityEngine;
using System.Collections;

[RequireComponent(typeof(ParticleSystem))]
public class CollidingParticles : MonoBehaviour {

    public AudioClip collisionSFX;

    ParticleSystem partSystem;
    ParticleCollisionEvent[] collisionEvents;

    void Awake () {
        partSystem = GetComponent<ParticleSystem>();
        collisionEvents = new ParticleCollisionEvent[16];
    }

    void OnParticleCollision (GameObject other) {
        int safeLength = partSystem.GetSafeCollisionEventSize();
        if (collisionEvents.Length < safeLength)
            collisionEvents = new ParticleCollisionEvent[safeLength];

        int totalCollisions = partSystem.GetCollisionEvents(other, collisionEvents);
        for (int i = 0; i < totalCollisions; i++)
            AudioSource.PlayClipAtPoint(collisionSFX, collisionEvents[i].intersection);
        print (totalCollisions);
    }
}
The problem is that the temporary AudioSource created by PlayClipAtPoint cannot be retrieved (to configure it as a 3D sound). So you are better off creating your own PlayClipAtPoint method that instantiates a prefab already configured with a 3D AudioSource and the clip you want, and calls Destroy(instance, seconds) to mark it for timed destruction.
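A minimal sketch of that prefab-based replacement (the class name, prefab field, and method name are assumptions) might look like:

using UnityEngine;

public class AudioUtil : MonoBehaviour
{
    // Prefab containing an AudioSource with spatialBlend already set to 1 (full 3D)
    public AudioSource audioSourcePrefab; // hypothetical prefab reference

    public void PlayClipAt(AudioClip clip, Vector3 position)
    {
        AudioSource source = Instantiate(audioSourcePrefab, position, Quaternion.identity);
        source.clip = clip;
        source.Play();
        // Mark the temporary object for destruction once the clip has finished
        Destroy(source.gameObject, clip.length);
    }
}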
The only way I can imagine is overriding the particle system's animation via GetParticles/SetParticles. That way you can provide your own collision detection for the particles with Physics.RaycastAll and play a sound when a collision occurs.
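A rough sketch of that manual approach (the component name is invented, and it uses a single Physics.Raycast per particle rather than RaycastAll) could look like:

using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public class ManualParticleCollisions : MonoBehaviour
{
    public AudioClip collisionSFX;

    ParticleSystem ps;
    ParticleSystem.Particle[] particles;

    void Awake()
    {
        ps = GetComponent<ParticleSystem>();
        particles = new ParticleSystem.Particle[ps.main.maxParticles];
    }

    void LateUpdate()
    {
        int count = ps.GetParticles(particles);
        for (int i = 0; i < count; i++)
        {
            // Cast along the distance this particle travels this frame
            Vector3 step = particles[i].velocity * Time.deltaTime;
            RaycastHit hit;
            if (Physics.Raycast(particles[i].position, step.normalized, out hit, step.magnitude))
            {
                AudioSource.PlayClipAtPoint(collisionSFX, hit.point);
                particles[i].remainingLifetime = 0f; // kill the particle on impact
            }
        }
        ps.SetParticles(particles, count);
    }
}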
public AudioSource audioSource; // assign in the Inspector
public float timerToPlay;
float timerToSave;

void Start()
{
    timerToSave = timerToPlay;
}

void OnEnable()
{
    // Reset the countdown each time the object is enabled
    timerToPlay = timerToSave;
}

// Update is called once per frame
void Update()
{
    if (timerToPlay > 0)
    {
        timerToPlay -= Time.deltaTime;
        // Play once, at the moment the countdown crosses zero
        if (timerToPlay <= 0)
            audioSource.Play();
    }
}
I am trying to create an app that lets the user record some sounds and then use them in a playback fashion.
I would like my application to play back a .wav file that the user has recorded.
I am having trouble figuring out how to code this, as I keep getting an error:
==== JavaSound Minim Error ====
==== Error invoking createInput on the file loader object: null
Snippet of code:
import ddf.minim.*;
AudioInput in;
AudioRecorder recorder;
RadioButtons r;
boolean showGUI = false;
color bgCol = color(0);
Minim minim;
//Recording players
AudioPlayer player1;
AudioPlayer player2;
void newFile()
{
  countname = (name + 1);
  recorder = minim.createRecorder(in, "data/" + countname + ".wav", true);
}
......
void setup() {
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 2048);
  newFile();
  player1 = minim.loadFile("data/" + countname + ".wav"); // recording #1
  player2 = minim.loadFile("data/" + countname + ".wav"); // recording #2
}
void draw() {
  // Draw the image to the screen at coordinate (0,0)
  image(img, 0, 0);
  // recording button
  if (r.get() == 0)
  {
    for (int i = 0; i < in.left.size() - 1; i++)
    {
      // ... (waveform drawing elided) ...
    }
    if ( recorder.isRecording() )
    {
      text("Currently recording...", 5, 15);
    }
    else
    {
      text("Not recording.", 5, 15);
    }
  }
  // play button
  if (r.get() == 1)
  {
    if (mousePressed) {
      // .......
      player_1.cue(0);
      player_1.play();
    }
    if (mousePressed) {
      // .......
      player_2.cue(0);
      player_2.play();
    }
  }
}
The place where I have a problem is here:
player1 = minim.loadFile("data/" + countname + ".wav");// recording #1
player2 = minim.loadFile("data/" + countname + ".wav");//recording #2
The files that get recorded will be 1.wav, 2.wav, and so on. But I cannot simply hard-code the file names like this:
player1 = minim.loadFile("1.wav");
player2 = minim.loadFile("2.wav");
How would I do this?
As indicated in the JavaDoc page for AudioRecorder [1], beginRecord(), endRecord() and save() need to be called so that whatever you want to record is actually recorded and then saved to disk. As long as that has not happened, there is nothing for loadFile() to load, and you will therefore get errors. So the problem lies in your program flow: only when your program reaches a state where a file has already been recorded and saved can you actually load it.
There are probably also ways to play back whatever is being recorded at the moment it arrives in your audio input buffer (usually referred to as 'monitoring'), but as I understand it, that is not what you want.
Aside from this general conceptual flaw, there also seem to be other problems in your code. For example, countname is not incremented between the two subsequent loadFile calls (I assume it should be). Also, at some point you have player_1.play(); (note the underscore), although you are probably referring to the differently written variable initialized earlier with player1 = minim.loadFile(...).
[1] http://code.compartmental.net/minim/javadoc/ddf/minim/AudioRecorder.html
This is the approach to record from an audio file into an AudioRecorder object. You load a file and play it, then you choose which section to save into another file, which you can play using an AudioPlayer object or your OS's favorite sound player.
Related to:
I am having trouble figuring out how to code this, as I keep getting an error.
Although Minim reports it as an error, it does not stop your program from running. I would consider it a warning and ignore it. If you want to fix it, I believe you will need to edit the file's tags to set their values properly.
INSTRUCTIONS: In the code, define the file you want to play. When you run the sketch, press r to begin recording and r again to stop. Don't forget to press s to save the recording to an audio file, which will be placed in the data folder.
NOTE: If you need to play wav files, you will need a Sampler object instead of a FilePlayer one.
//REFERENCE: https://forum.processing.org/one/topic/how-can-i-detect-sound-with-my-mic-in-my-computer.html
//REFERENCE: https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim
/**
* This sketch demonstrates how to use an <code>AudioRecorder</code> to record audio to disk.
* Press 'r' to toggle recording on and off and the press 's' to save to disk.
* The recorded file will be placed in the sketch folder of the sketch.
* <p>
* For more information about Minim and additional features,
* visit http://code.compartmental.net/minim/
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
FilePlayer player;
AudioOutput out;
AudioRecorder recorder;
void setup()
{
  size(512, 200, P3D);
  textFont(createFont("Arial", 12));

  minim = new Minim(this);
  player = new FilePlayer(minim.loadFileStream("energeticDJ.mp3"));
  // IT DOESN'T WORK FOR WAV files ====> player = new FilePlayer(minim.loadFileStream("fair1939.wav"));
  out = minim.getLineOut();

  TickRate rateControl = new TickRate(1.f);
  player.patch(rateControl).patch(out);

  recorder = minim.createRecorder(out, dataPath("myrecording.wav"), true);
  player.loop(0);
}

void draw()
{
  background(0);
  stroke(255);

  // draw a line to show where in the song playback is currently located
  float posx = map(player.position(), 0, player.length(), 0, width);
  stroke(0, 200, 0);
  line(posx, 0, posx, height);

  if ( recorder.isRecording() )
  {
    text("Currently recording...", 5, 15);
  }
  else
  {
    text("Not recording.", 5, 15);
  }
}

void keyReleased()
{
  if ( key == 'r' )
  {
    // to indicate that you want to start or stop capturing audio data,
    // you must call beginRecord() and endRecord() on the AudioRecorder object.
    // You can start and stop as many times as you like, the audio data will be
    // appended to the end of the buffer (in the case of buffered recording)
    // or to the end of the file (in the case of streamed recording).
    if ( recorder.isRecording() )
    {
      recorder.endRecord();
    }
    else
    {
      recorder.beginRecord();
    }
  }
  if ( key == 's' )
  {
    // we've filled the file out buffer,
    // now write it to the file we specified in createRecorder.
    // in the case of buffered recording, if the buffer is large,
    // this will appear to freeze the sketch for some time.
    // in the case of streamed recording,
    // it will not freeze as the data is already in the file and all that is being done
    // is closing the file.
    // the method returns the recorded audio as an AudioRecording,
    // see the example AudioRecorder >> RecordAndPlayback for more about that
    recorder.save();
    println("Done saving.");
  }
}
In fact it is a wild mix of technologies, but the answer to my question is (I think) closest to Direct3D 9. I am hooking into arbitrary D3D9 applications, in most cases games, and injecting my own code to modify the behavior of the EndScene function. The back buffer is copied into a surface which is set to point to a bitmap in a push source DirectShow filter. The filter samples the bitmaps at 25 fps and streams the video into an .avi file.

There is a text overlay shown across the game's screen telling the user about the hotkey combination that stops gameplay capture, but this overlay is not supposed to show up in the recorded video. Everything works fast and beautifully, except for one annoying fact: on random occasions, a frame with the text overlay makes its way into the recorded video. This is not a desirable artifact; the end user only wants to see his gameplay in the video and nothing else. I'd love to hear any ideas about why this is happening. Here is the source code for the EndScene hook:
using System;
using SlimDX;
using SlimDX.Direct3D9;
using System.Diagnostics;
using DirectShowLib;
using System.Runtime.InteropServices;
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[System.Security.SuppressUnmanagedCodeSecurity]
[Guid("EA2829B9-F644-4341-B3CF-82FF92FD7C20")]
public interface IScene
{
unsafe int PassMemoryPtr(void* ptr, bool noheaders);
int SetBITMAPINFO([MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)]byte[] ptr, bool noheaders);
}
public class Class1
{
object _lockRenderTarget = new object();
public string StatusMess { get; set; }
Surface _renderTarget;
//points to image bytes
unsafe void* bytesptr;
//used to store headers AND image bytes
byte[] bytes;
IFilterGraph2 ifg2;
ICaptureGraphBuilder2 icgb2;
IBaseFilter push;
IBaseFilter compressor;
IScene scene;
IBaseFilter mux;
IFileSinkFilter sink;
IMediaControl media;
bool NeedRunGraphInit = true;
bool NeedRunGraphClean = true;
DataStream s;
DataRectangle dr;
unsafe int EndSceneHook(IntPtr devicePtr)
{
int hr;
using (Device device = Device.FromPointer(devicePtr))
{
try
{
lock (_lockRenderTarget)
{
bool TimeToGrabFrame = false;
//....
//logic based on elapsed milliseconds deciding if it is time to grab another frame
if (TimeToGrabFrame)
{
//First ensure we have a Surface to render target data into
//called only once
if (_renderTarget == null)
{
//Create offscreen surface to use as copy of render target data
using (SwapChain sc = device.GetSwapChain(0))
{
//Att: created in system memory, not in video memory
_renderTarget = Surface.CreateOffscreenPlain(device, sc.PresentParameters.BackBufferWidth, sc.PresentParameters.BackBufferHeight, sc.PresentParameters.BackBufferFormat, Pool.SystemMemory);
} //end using
} // end if
using (Surface backBuffer = device.GetBackBuffer(0, 0))
{
//The following line is where main action takes place:
//Direct3D 9 back buffer gets copied to Surface _renderTarget,
//which has been connected by references to DirectShow's
//bitmap capture filter
//Inside the filter ( code not shown in this listing) the bitmap is periodically
//scanned to create a streaming video.
device.GetRenderTargetData(backBuffer, _renderTarget);
if (NeedRunGraphInit) //ran only once
{
ifg2 = (IFilterGraph2)new FilterGraph();
icgb2 = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
icgb2.SetFiltergraph(ifg2);
push = (IBaseFilter) new PushSourceFilter();
scene = (IScene)push;
//this way we get bitmapfile and bitmapinfo headers
//ToStream is slow, but run it only once to get the headers
s = Surface.ToStream(_renderTarget, ImageFileFormat.Bmp);
bytes = new byte[s.Length];
s.Read(bytes, 0, (int)s.Length);
hr = scene.SetBITMAPINFO(bytes, false);
//we just supplied the header to the PushSource
//filter. Let's pass reference to
//just image bytes from LockRectangle
dr = _renderTarget.LockRectangle(LockFlags.None);
s = dr.Data;
Result r = _renderTarget.UnlockRectangle();
bytesptr = s.DataPointer.ToPointer();
hr = scene.PassMemoryPtr(bytesptr, true);
//continue building graph
ifg2.AddFilter(push, "MyPushSource");
icgb2.SetOutputFileName(MediaSubType.Avi, @"C:\foo.avi", out mux, out sink);
icgb2.RenderStream(null, null, push, null, mux);
media = (IMediaControl)ifg2;
media.Run();
NeedRunGraphInit = false;
NeedRunGraphClean = true;
StatusMess = "now capturing, press shift-F11 to stop";
} //end if
} // end using backbuffer
} // end if Time to grab frame
} //end lock
} // end try
//It is usually thrown when the user makes game window inactive
//or it is thrown deliberately when time is up, or the user pressed F11 and
//it resulted in stopping a capture.
//If it is thrown for another reason, it is still a good
//idea to stop recording and free the graph
catch (Exception ex)
{
//..
//stop the DirectShow graph and cleanup
} // end catch
//draw overlay
using (SlimDX.Direct3D9.Font font = new SlimDX.Direct3D9.Font(device, new System.Drawing.Font("Times New Roman", 26.0f, System.Drawing.FontStyle.Bold)))
{
font.DrawString(null, StatusMess, 20, 100, System.Drawing.Color.FromArgb(255, 255, 255, 255));
}
return device.EndScene().Code;
} // end using device
} //end EndSceneHook
} //end class Class1
As sometimes happens, I finally found the answer to this question myself, in case anyone is interested. It turned out that the back buffer in some Direct3D9 applications is not necessarily refreshed each time the hooked EndScene is called. Hence, occasionally a back buffer still containing the text overlay from the previous EndScene call was passed to the DirectShow source filter responsible for collecting input frames. I started stamping each frame with a tiny 3-pixel overlay with known RGB values, and checking whether this dummy overlay was still present before passing the frame to the DirectShow filter. If the overlay was there, the previously cached frame was passed instead of the current one. This approach effectively removed the text overlay from the video recorded by the DirectShow graph.
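For illustration, a sketch of that marker trick (the pixel offsets, channel layout, and marker values are assumptions, not the original code), operating on a locked 32-bit XRGB surface:

unsafe static class FrameMarker
{
    // Hypothetical known values, stamped into the blue channel of the first three pixels
    static readonly byte[] Marker = { 17, 34, 51 };

    // Call after drawing the overlay: stamp the marker into the frame
    public static void Stamp(byte* pixels)
    {
        for (int i = 0; i < Marker.Length; i++)
            pixels[i * 4] = Marker[i];
    }

    // Call before passing a grabbed frame on: if the marker survived,
    // the back buffer still holds the previous, overlay-bearing frame
    public static bool IsStale(byte* pixels)
    {
        for (int i = 0; i < Marker.Length; i++)
            if (pixels[i * 4] != Marker[i])
                return false;
        return true;
    }
}

In the grab path, a frame for which IsStale returns true would be replaced by the previously cached frame before it reaches the DirectShow filter.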