Well, I would like to implement a function such that when the application starts, the recorder starts recording; as long as the user keeps silent nothing happens, but once the user speaks it saves the user's voice to a PCM file and then stops recording.
Voice Detection in Android Application
Above is a question I found that is similar to mine, but the answer in that link does not work for me, and I don't know how to modify it since I don't understand the concept behind the code.
Please help me~
Well, I solved my problem; here is my solution.
I modified the code that came from this URL:
Voice Detection in Android Application
private static final String TAG = "MainActivity";
private static int RECORDER_SAMPLERATE = 44100;
private static int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
private static int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private Button btn, btn_convert, btn_play;
private TextView txv;
boolean isRecording = false;
private File file;
private AudioRecord audioRecord;
int bufferSizeInBytes = 0;
Context context = MainActivity.this;
// path
final String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/final.pcm" ;
final String outpath = path.replace(".pcm", ".wav");
public void autoRecording(){
// Get the minimum buffer size required for the successful creation of an AudioRecord object.
bufferSizeInBytes = AudioRecord.getMinBufferSize( RECORDER_SAMPLERATE,
RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING
);
// Initialize Audio Recorder.
AudioRecord audioRecorder = new AudioRecord( MediaRecorder.AudioSource.MIC,
RECORDER_SAMPLERATE,
RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING,
bufferSizeInBytes
);
// Start Recording.
txv.setText("Ing");
audioRecorder.startRecording();
isRecording = true;
// for auto stop
int numberOfReadBytes = 0;
byte audioBuffer[] = new byte[bufferSizeInBytes];
boolean recording = false;
float tempFloatBuffer[] = new float[3];
int tempIndex = 0;
// create file
file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/final.pcm");
Log.d(TAG, "recording: file path:" + file.toString());
if (file.exists()){
Log.d(TAG,"file exist, delete file");
file.delete();
}
try {
Log.d(TAG,"file created");
file.createNewFile();
} catch (IOException e) {
Log.d(TAG,"didn't create the file:" + e.getMessage());
throw new IllegalStateException("did not create file:" + file.toString());
}
// initiate media scan and put the new things into the path array to
// make the scanner aware of the location and the files you want to see
MediaScannerConnection.scanFile(context, new String[] {file.toString()}, null, null);
// output stream
OutputStream os = null;
DataOutputStream dos = null;
try {
os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
dos = new DataOutputStream(bos);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
// While data come from microphone.
while( true )
{
float totalAbsValue = 0.0f;
short sample = 0;
numberOfReadBytes = audioRecorder.read( audioBuffer, 0, bufferSizeInBytes );
// Analyze Sound.
for( int i=0; i+1<numberOfReadBytes; i+=2 ) // only analyze the bytes actually read
{
// mask the low byte so it is not sign-extended when the two bytes are combined into a 16-bit sample
sample = (short)( (audioBuffer[i] & 0xFF) | (audioBuffer[i + 1] << 8) );
totalAbsValue += (float)Math.abs( sample ) / ((float)numberOfReadBytes/(float)2);
}
// read in file
for (int i = 0; i < numberOfReadBytes; i++) {
try {
dos.writeByte(audioBuffer[i]);
} catch (IOException e) {
e.printStackTrace();
}
}
// Analyze temp buffer.
tempFloatBuffer[tempIndex%3] = totalAbsValue;
float temp = 0.0f;
for( int i=0; i<3; ++i )
temp += tempFloatBuffer[i];
// threshold: about 3000 works best when close to the device, about 2100 at a little distance
if( (temp >= 0 && temp <= 2100) && recording == false )
{
Log.i("TAG", "1");
tempIndex++;
continue;
}
if( temp > 2100 && recording == false )
{
Log.i("TAG", "2");
recording = true;
}
if( (temp >= 0 && temp <= 2100) && recording == true )
{
Log.i("TAG", "final run");
//isRecording = false;
txv.setText("Stop Record.");
//*/
tempIndex++;
audioRecorder.stop();
try {
dos.close();
} catch (IOException e) {
e.printStackTrace();
}
break;
}
}
}
What this function does:
If you call it, the recorder starts recording, and once you make a sound (note that if there is some noise it will stop too) it stops recording and saves the audio into a file in PCM format.
I followed this NAudio Demo modified to play ShoutCast.
In my full code I have to resample the incoming audio and stream it again over the network to a network player. Since I was getting many "clicks and pops", I went back to the demo code and found that these artifacts originate after the decoding block.
If I save the incoming stream in MP3 format, it sounds pretty clean.
When I save the raw decoded data (with no processing other than the decoder), I get many audio artifacts.
I wonder whether I am making some mistake, even though my code is almost identical to the NAudio demo.
Here is the function from the example, as modified by me to save the raw data. It is called on a new thread.
private void StreamMP3(object state)
{
//Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
//SettingsSection section = (SettingsSection)config.GetSection("system.net/settings");
this.fullyDownloaded = false;
string url = "http://icestreaming.rai.it/5.mp3";//(string)state;
webRequest = (HttpWebRequest)WebRequest.Create(url);
int metaInt = 0; // blocksize of mp3 data
int framesize = 0;
webRequest.Headers.Clear();
webRequest.Headers.Add("GET", "/ HTTP/1.0");
// needed to receive metadata information
webRequest.Headers.Add("Icy-MetaData", "1");
webRequest.UserAgent = "WinampMPEG/5.09";
HttpWebResponse resp = null;
try
{
resp = (HttpWebResponse)webRequest.GetResponse();
}
catch (WebException e)
{
if (e.Status != WebExceptionStatus.RequestCanceled)
{
ShowError(e.Message);
}
return;
}
byte[] buffer = new byte[16384 * 4]; // needs to be big enough to hold a decompressed frame
try
{
// read blocksize to find metadata block
metaInt = Convert.ToInt32(resp.GetResponseHeader("icy-metaint"));
}
catch
{
}
IMp3FrameDecompressor decompressor = null;
byteOut = createNewFile(destPath, "salva", "raw");
try
{
using (var responseStream = resp.GetResponseStream())
{
var readFullyStream = new ReadFullyStream(responseStream);
readFullyStream.metaInt = metaInt;
do
{
if (mybufferedWaveProvider != null && mybufferedWaveProvider.BufferLength - mybufferedWaveProvider.BufferedBytes < mybufferedWaveProvider.WaveFormat.AverageBytesPerSecond / 4)
{
Debug.WriteLine("Buffer getting full, taking a break");
Thread.Sleep(500);
}
else
{
Mp3Frame frame = null;
try
{
frame = Mp3Frame.LoadFromStream(readFullyStream, true);
if (metaInt > 0)
UpdateSongName(readFullyStream.SongName);
else
UpdateSongName("No Song Info in Stream...");
}
catch (EndOfStreamException)
{
this.fullyDownloaded = true;
// reached the end of the MP3 file / stream
break;
}
catch (WebException)
{
// probably we have aborted download from the GUI thread
break;
}
if (decompressor == null)
{
// don't think these details matter too much - just help ACM select the right codec
// however, the buffered provider doesn't know what sample rate it is working at
// until we have a frame
WaveFormat waveFormat = new Mp3WaveFormat(frame.SampleRate, frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
decompressor = new AcmMp3FrameDecompressor(waveFormat);
this.mybufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
this.mybufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(200); // allow us to get well ahead of ourselves
framesize = (decompressor.OutputFormat.Channels * decompressor.OutputFormat.SampleRate * (decompressor.OutputFormat.BitsPerSample / 8) * 20) / 1000;
//this.bufferedWaveProvider.BufferedDuration = 250;
}
int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
//Debug.WriteLine(String.Format("Decompressed a frame {0}", decompressed));
mybufferedWaveProvider.AddSamples(buffer, 0, decompressed);
while (mybufferedWaveProvider.BufferedDuration.Milliseconds >= 20)
{
byte[] read = new byte[framesize];
mybufferedWaveProvider.Read(read, 0, framesize);
byteOut.Write(read, 0, framesize);
}
}
} while (playbackState != StreamingPlaybackState.Stopped);
Debug.WriteLine("Exiting");
// was doing this in a finally block, but for some reason
// we are hanging on response stream .Dispose so never get there
decompressor.Dispose();
}
}
finally
{
if (decompressor != null)
{
decompressor.Dispose();
}
}
}
OK, I found the problem: I was including the SHOUTcast metadata in the MP3 frame data.
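For context, this is roughly how the ICY protocol interleaves metadata into the stream: every metaInt bytes of MP3 audio are followed by one length byte and then length * 16 bytes of metadata, and only the audio bytes may be handed to the frame parser. A minimal conceptual sketch (sourceStream and mp3Only are placeholder names of mine, not variables from the code below):
// Sketch: strip ICY metadata from a SHOUTcast stream so only MP3 bytes reach the parser.
int audioBytesLeft = metaInt;                // metaInt comes from the "icy-metaint" response header
int b;
while ((b = sourceStream.ReadByte()) != -1)
{
    if (audioBytesLeft > 0)
    {
        mp3Only.WriteByte((byte)b);          // pure MP3 audio byte, safe to frame-parse later
        audioBytesLeft--;
    }
    else
    {
        int metadataLength = b * 16;         // length byte * 16 = size of the metadata block
        for (int j = 0; j < metadataLength; j++)
            sourceStream.ReadByte();         // skip (or collect) the StreamTitle metadata
        audioBytesLeft = metaInt;            // the next block of audio follows
    }
}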
See the comment "HERE I COLLECT THE BYTES OF THE MP3 FRAME" below to locate the correct point at which the MP3 frame bytes are collected without the streaming metadata.
The following code runs without audio artifacts:
private void SHOUTcastReceiverThread()
{
//-*- String server = "http://216.235.80.18:8285/stream";
//String serverPath = "/";
//String destPath = "C:\\temp\\"; // destination path for saved songs
HttpWebRequest request = null; // web request
HttpWebResponse response = null; // web response
int metaInt = 0; // blocksize of mp3 data
int count = 0; // byte counter
int metadataLength = 0; // length of metadata header
string metadataHeader = ""; // metadata header that contains the actual songtitle
string oldMetadataHeader = null; // previous metadata header, to compare with new header and find next song
//CircularQueueStream framestream = new CircularQueueStream(2048);
QueueStream framestream = new QueueStream();
framestream.Position = 0;
bool bNewSong = false;
byte[] buffer = new byte[512]; // receive buffer
byte[] dec_buffer = new byte[decSIZE];
Mp3Frame frame;
IMp3FrameDecompressor decompressor = null;
Stream socketStream = null; // input stream on the web request
// create web request
request = (HttpWebRequest)WebRequest.Create(server);
// clear old request header and build own header to receive ICY-metadata
request.Headers.Clear();
request.Headers.Add("GET", serverPath + " HTTP/1.0");
request.Headers.Add("Icy-MetaData", "1"); // needed to receive metadata informations
request.UserAgent = "WinampMPEG/5.09";
// execute request
try
{
response = (HttpWebResponse)request.GetResponse();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return;
}
// read blocksize to find metadata header
metaInt = Convert.ToInt32(response.GetResponseHeader("icy-metaint"));
try
{
// open stream on response
socketStream = response.GetResponseStream();
var readFullyStream = new ReadFullyStream(socketStream);
frame = null;
// rip stream in an endless loop
do
{
if (IsBufferNearlyFull)
{
Debug.WriteLine("Buffer getting full, taking a break");
Thread.Sleep(500);
frame = null;
}
else
{
int bufLen = readFullyStream.Read(buffer, 0, buffer.Length);
try
{
if (framestream.CanRead && framestream.Length > 512)
frame = Mp3Frame.LoadFromStream(framestream);
else
frame = null;
}
catch (Exception ex)
{
frame = null;
}
if (bufLen < 0)
{
Debug.WriteLine("Buffer error 1: exit.");
return;
}
// processing RAW data
for (int i = 0; i < bufLen; i++)
{
// if there is a metadata header, 'metadataLength' will have been set to a value != 0; collect the header bytes into a string
if (metadataLength != 0)
{
metadataHeader += Convert.ToChar(buffer[i]);
metadataLength--;
if (metadataLength == 0) // all metadata informations were written to the 'metadataHeader' string
{
string fileName = "";
string fileNameRaw = "";
// if songtitle changes, create a new file
if (!metadataHeader.Equals(oldMetadataHeader))
{
// flush and close old byteOut stream
if (byteOut != null)
{
byteOut.Flush();
byteOut.Close();
byteOut = null;
}
if (byteOutRaw != null)
{
byteOutRaw.Flush();
byteOutRaw.Close();
byteOutRaw = null;
}
timeStart = timeEnd;
// extract songtitle from metadata header. Trim was needed, because some stations don't trim the songtitle
//fileName = Regex.Match(metadataHeader, "(StreamTitle=')(.*)(';StreamUrl)").Groups[2].Value.Trim();
fileName = Regex.Match(metadataHeader, "(StreamTitle=')(.*)(';)").Groups[2].Value.Trim();
// write new songtitle to console for information
if (fileName.Length == 0)
fileName = "shoutcast_test";
fileNameRaw = fileName + "_raw";
framestream.reSetPosition();
SongChanged(this, metadataHeader);
bNewSong = true;
// create new file with the songtitle from header and set a stream on this file
timeEnd = DateTime.Now;
if (bWrite_to_file)
{
byteOut = createNewFile(destPath, fileName, "mp3");
byteOutRaw = createNewFile(destPath, fileNameRaw, "raw");
}
timediff = timeEnd - timeStart;
// save new header to 'oldMetadataHeader' string, to compare if there's a new song starting
oldMetadataHeader = metadataHeader;
}
metadataHeader = "";
}
}
else // write mp3 data to file or extract metadata headerlength
{
if (count++ < metaInt) // write bytes to filestream
{
//HERE I COLLECT THE BYTES OF THE MP3 FRAME
framestream.Write(buffer, i, 1);
}
else // get headerlength from lengthbyte and multiply by 16 to get correct headerlength
{
metadataLength = Convert.ToInt32(buffer[i]) * 16;
count = 0;
}
}
}//for
if (bNewSong)
{
decompressor = createDecompressor(frame);
bNewSong = false;
}
if (frame != null && decompressor != null)
{
framedec(decompressor, frame);
}
// end of RAW data processing
}//Buffer is not full
SHOUTcastStatusProcess();
} while (playbackState != StreamingPlaybackState.Stopped);
} //try
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
if (byteOut != null)
byteOut.Close();
if (socketStream != null)
socketStream.Close();
if (decompressor != null)
{
decompressor.Dispose();
decompressor = null;
}
if (null != request)
request.Abort();
if (null != framestream)
framestream.Dispose();
if (null != bufferedWaveProvider)
bufferedWaveProvider.ClearBuffer();
//if (null != bufferedWaveProviderOut)
// bufferedWaveProviderOut.ClearBuffer();
if (null != mono16bitFsinStream)
{
mono16bitFsinStream.Close();
mono16bitFsinStream.Dispose();
}
if (null != middleStream2)
{
middleStream2.Close();
middleStream2.Dispose();
}
if (null != resampler)
resampler.Dispose();
}
}
public class QueueStream : MemoryStream
{
long ReadPosition = 0;
long WritePosition = 0;
public QueueStream() : base() { }
public override int Read(byte[] buffer, int offset, int count)
{
Position = ReadPosition;
var temp = base.Read(buffer, offset, count);
ReadPosition = Position;
return temp;
}
public override void Write(byte[] buffer, int offset, int count)
{
Position = WritePosition;
base.Write(buffer, offset, count);
WritePosition = Position;
}
public void reSetPosition()
{
WritePosition = 0;
ReadPosition = 0;
Position = 0;
}
}
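A quick note on QueueStream: reads and writes keep independent positions, so writes always append at the end while reads continue from where the previous read stopped. A small hypothetical usage example (not taken from the code above):
var q = new QueueStream();
q.Write(new byte[] { 1, 2, 3, 4 }, 0, 4);   // WritePosition is now 4
var two = new byte[2];
q.Read(two, 0, 2);                          // reads bytes 1, 2; ReadPosition is now 2
q.Write(new byte[] { 5, 6 }, 0, 2);         // appends at position 4, not at 2
q.Read(two, 0, 2);                          // reads bytes 3, 4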
private void framedec(IMp3FrameDecompressor decompressor, Mp3Frame frame)
{
int Ndecoded_samples = 0;
byte[] dec_buffer = new byte[decSIZE];
Ndecoded_samples = decompressor.DecompressFrame(frame, dec_buffer, 0);
bufferedWaveProvider.AddSamples(dec_buffer, 0, Ndecoded_samples);
NBufferedSamples += Ndecoded_samples;
brcnt_in.incSamples(Ndecoded_samples);
if (Ndecoded_samples > decSIZE)
{
Debug.WriteLine(String.Format("Too many samples {0}", Ndecoded_samples));
}
if (byteOut != null)
byteOut.Write(frame.RawData, 0, frame.RawData.Length);
if (byteOutRaw != null) // as long as we don't have a songtitle, we don't open a new file and don't write any bytes
byteOutRaw.Write(dec_buffer, 0, Ndecoded_samples);
frame = null;
}
private IMp3FrameDecompressor createDecompressor(Mp3Frame frame)
{
IMp3FrameDecompressor dec = null;
if (frame != null)
{
// don't think these details matter too much - just help ACM select the right codec
// however, the buffered provider doesn't know what sample rate it is working at
// until we have a frame
WaveFormat srcwaveFormat = new Mp3WaveFormat(frame.SampleRate, frame.ChannelMode == ChannelMode.Mono ? 1 : 2, frame.FrameLength, frame.BitRate);
dec = new AcmMp3FrameDecompressor(srcwaveFormat);
bufferedWaveProvider = new BufferedWaveProvider(dec.OutputFormat);// decompressor.OutputFormat
bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(400); // allow us to get well ahead of ourselves
// ------------------------------------------------
//Create an intermediate format with same sampling rate, 16 bit, mono
middlewavformat = new WaveFormat(dec.OutputFormat.SampleRate, 16, 1);
outwavFormat = new WaveFormat(Fs_out, 16, 1);
// wave16ToFloat = new Wave16ToFloatProvider(provider); // I have tried with and without this converter.
wpws = new WaveProviderToWaveStream(bufferedWaveProvider);
//Check middlewavformat.Encoding == WaveFormatEncoding.Pcm;
mono16bitFsinStream = new WaveFormatConversionStream(middlewavformat, wpws);
middleStream2 = new BlockAlignReductionStream(mono16bitFsinStream);
resampler = new MediaFoundationResampler(middleStream2, outwavFormat);
}
return dec;
}
I have the following code
foreach (Items it in sortedList)
{
ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.CreateNoWindow = false;
startInfo.UseShellExecute = true;
startInfo.FileName = it.filePath;
startInfo.WindowStyle = ProcessWindowStyle.Hidden;
try
{
Process p = new Process();
p.StartInfo = startInfo;
p.Start();
p.WaitForExit();
}
catch(Exception ew)
{
// Log error.
}
}
Every time I open an .mp4 file I get an InvalidOperationException saying that there is no program associated with this file format, yet at the same time Windows Media Player starts up and plays the file.
Can someone explain why this code throws the InvalidOperationException, and how I can fix the issue?
Thanks.
I am executing a large query, so my app throws a timeout error. Some threads suggested adding a command timeout, but after adding those lines it takes even longer to get a response back. Any idea why, or what am I missing in my code?
public int CreateRecord(string theCommand, DataSet theInputData)
{
int functionReturnValue = 0;
int retVal = 0;
SqlParameter objSqlParameter = default(SqlParameter);
DataSet dsParameter = new DataSet();
int i = 0;
try
{
//Set the command text (stored procedure name or SQL statement).
mobj_SqlCommand.CommandTimeout = 120;
mobj_SqlCommand.CommandText = theCommand;
mobj_SqlCommand.CommandType = CommandType.StoredProcedure;
for (i = 0; i <= (theInputData.Tables.Count - 1); i++)
{
if (theInputData.Tables[i].Rows.Count > 0)
{
dsParameter.Tables.Add(theInputData.Tables[i].Copy());
}
}
objSqlParameter = new SqlParameter("#theXmlData", SqlDbType.Text);
objSqlParameter.Direction = ParameterDirection.Input;
objSqlParameter.Value = "<?xml version=\"1.0\" encoding=\"iso-8859-1\"?>" + dsParameter.GetXml();
//Attach to the parameter to mobj_SqlCommand.
mobj_SqlCommand.Parameters.Add(objSqlParameter);
//Finally, execute the command.
retVal = (int)mobj_SqlCommand.ExecuteScalar();
//Detach the parameters from mobj_SqlCommand, so it can be used again.
mobj_SqlCommand.Parameters.Clear();
functionReturnValue = retVal;
}
catch (Exception ex)
{
throw new System.Exception(ex.Message);
}
finally
{
//Clean up the objects created in this object.
if (mobj_SqlConnection.State == ConnectionState.Open)
{
mobj_SqlConnection.Close();
mobj_SqlConnection.Dispose();
mobj_SqlConnection = null;
}
if ((mobj_SqlCommand != null))
{
mobj_SqlCommand.Dispose();
mobj_SqlCommand = null;
}
if ((mobj_SqlDataAdapter != null))
{
mobj_SqlDataAdapter.Dispose();
mobj_SqlDataAdapter = null;
}
if ((dsParameter != null))
{
dsParameter.Dispose();
dsParameter = null;
}
objSqlParameter = null;
}
return functionReturnValue;
}
I've seen a couple of examples here and there; here's what I'm trying to achieve. I know the code below does not work, but essentially I'm trying to pull %logonserver% from a remote machine. For some reason, the WMI query does not return any data.
try
{
System.Diagnostics.ProcessStartInfo startinfo = new System.Diagnostics.ProcessStartInfo("\\\\"+txtboxMachineName.Text+"\\c$\\Windows\\System32\\cmd.exe", "/c echo %logonserver%");
startinfo.RedirectStandardOutput = true;
startinfo.UseShellExecute = false;
startinfo.CreateNoWindow = true;
System.Diagnostics.Process proc = new Process();
proc.StartInfo = startinfo;
proc.Start();
lblLogonServer.Text = proc.StandardOutput.ReadToEnd();
}
catch
{
lblLogonServer.Text = "Error has been encountered obtaining Logon Server";
}
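Since the question mentions a WMI query that returns no data, here is a rough sketch of what such a query typically looks like (the remoteMachine placeholder is mine; also note, as an assumption on my part, that Win32_Environment only reflects variables stored in the registry, so a volatile variable such as LOGONSERVER may simply not appear there, which could explain the empty result):
using System.Management; // add a reference to System.Management.dll

// Query the remote machine's environment variables via WMI (sketch only).
var scope = new ManagementScope(@"\\" + remoteMachine + @"\root\cimv2");
scope.Connect();
var query = new ObjectQuery("SELECT * FROM Win32_Environment WHERE Name = 'LOGONSERVER'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject env in searcher.Get())
    {
        Console.WriteLine("{0} = {1}", env["Name"], env["VariableValue"]);
    }
}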