My task is to convert a WMA audio stream to an MP3 stream using NAudio and LAME. The code below works fine with a file name, but I want to do it with a memory stream. I searched NAudio and there is no method for reading a WMA audio stream. Is this possible with NAudio?
public static byte[] ConvertWmaToMp3(uint bitrate = 128)
{
FileStream fs = new FileStream("..\\sample.wma", FileMode.Open, FileAccess.Read);
var ws = new NAudio.WindowsMediaFormat.WMAFileReader(fs.Name);
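// NB: this FileStream is never read or closed here - WMAFileReader only
// accepts a file name (fs.Name), which is exactly the problem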
// Setup encoder configuration
WaveLib.WaveFormat fmt = new WaveLib.WaveFormat(ws.WaveFormat.SampleRate, 16, ws.WaveFormat.Channels);
Yeti.Lame.BE_CONFIG beconf = new Yeti.Lame.BE_CONFIG(fmt, bitrate);
// Encode WAV to MP3
int blen = ws.WaveFormat.AverageBytesPerSecond;
byte[] buffer = new byte[blen];
byte[] mp3data = null;
using (MemoryStream mp3strm = new MemoryStream())
using (Mp3Writer mp3wri = new Mp3Writer(mp3strm, fmt, beconf))
{
int rc;
while ((rc = ws.Read(buffer, 0, blen)) > 0)
{
mp3wri.Write(buffer, 0, rc);
}
mp3data = mp3strm.ToArray();
}
return mp3data;
}
Currently the WMAFileReader class doesn't support reading data from a stream. The WMA APIs support reading WMA from an IStream so it is definitely possible.
If you want to implement streaming yourself you'll need to grab the source code for WmaFileReader and WmaStream from CodePlex, and use them as templates for your modified classes.
First thing you'll need is a wrapper class that provides a COM IStream interface to a .NET Stream. Here's a simple one:
public class InteropStream : IStream, IDisposable
{
public readonly Stream intern;
public InteropStream(Stream strm)
{
intern = strm;
}
~InteropStream()
{
Dispose(false);
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected void Dispose(bool disposing)
{
// Standard dispose pattern: only dispose the wrapped stream on an explicit
// Dispose(), never from the finalizer (it may already have been finalized)
if (disposing)
intern.Dispose();
}
#region IStream Members
public void Clone(out IStream ppstm)
{
ppstm = null;
}
public void Commit(int grfCommitFlags)
{
intern.Flush();
}
readonly byte[] buffer = new byte[4096];
public void CopyTo(IStream pstm, long cb, IntPtr pcbRead, IntPtr pcbWritten)
{
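// Stub: CopyTo is apparently not needed by the WMA sync reader,
// so just report zero bytes copied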
if (pcbRead != IntPtr.Zero)
Marshal.WriteInt32(pcbRead, 0);
if (pcbWritten != IntPtr.Zero)
Marshal.WriteInt32(pcbWritten, 0);
}
public void LockRegion(long libOffset, long cb, int dwLockType)
{ }
public void Read(byte[] pv, int cb, IntPtr pcbRead)
{
int rc = intern.Read(pv, 0, cb);
if (pcbRead != IntPtr.Zero)
Marshal.WriteInt32(pcbRead, rc);
}
public void Revert()
{ }
public void Seek(long dlibMove, int dwOrigin, IntPtr plibNewPosition)
{
long origin = 0;
if (dwOrigin == 1) // STREAM_SEEK_CUR
origin = intern.Position;
else if (dwOrigin == 2) // STREAM_SEEK_END
origin = intern.Length;
long pos = origin + dlibMove;
intern.Position = pos;
if (plibNewPosition != IntPtr.Zero)
Marshal.WriteInt64(plibNewPosition, pos);
}
public void SetSize(long libNewSize)
{ }
public void Stat(out System.Runtime.InteropServices.ComTypes.STATSTG pstatstg, int grfStatFlag)
{
var res = new System.Runtime.InteropServices.ComTypes.STATSTG();
res.type = 2; // STGTY_STREAM
res.cbSize = intern.Length;
pstatstg = res;
}
public void UnlockRegion(long libOffset, long cb, int dwLockType)
{ }
public void Write(byte[] pv, int cb, IntPtr pcbWritten)
{ }
#endregion
}
Next copy the WmaStream code to a new namespace in your project and add the following code to the top of the class:
InteropStream interopStrm = null;
public WmaStream(Stream fileStream)
: this(fileStream, null)
{ }
public WmaStream(Stream fileStream, WaveFormat OutputFormat)
{
interopStrm = new InteropStream(fileStream);
m_reader = WM.CreateSyncReader(WMT_RIGHTS.WMT_RIGHT_NO_DRM);
try
{
IWMSyncReader2 rdr = m_reader as IWMSyncReader2;
rdr.OpenStream(interopStrm);
Init(OutputFormat);
}
catch
{
try
{
m_reader.Close();
}
catch
{
}
m_reader = null;
throw;
}
}
Do the same with WmaFileReader, adding the following constructor:
public WMAFileReader(Stream wmaStream)
{
m_wmaStream = new WmaStream(wmaStream);
m_waveFormat = m_wmaStream.Format;
}
Now you can create an instance of your modified WMAFileReader using either a filename or a Stream instance (MemoryStream, FileStream, etc.). The Stream needs to be readable and seekable.
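For example, a quick sketch against the modified classes described above:
byte[] wmaData = File.ReadAllBytes(@"..\sample.wma");
using (var ms = new MemoryStream(wmaData))
using (var reader = new WMAFileReader(ms))
{
// reader.WaveFormat and reader.Read(...) behave exactly as in the file-based
// version; keep the MemoryStream open for the lifetime of the reader
}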
I've tried the above on a few random WMA files I found on my computer, loaded into MemoryStream or using FileStream, and it works as I expected it to.
Presumably Mark is working on adding this functionality to the NAudio.Wma package, so consider this an interim fix until NAudio supports it.
The easiest way to convert WMA to MP3 with NAudio is to use the Media Foundation based classes. If you are on Windows 8 or above, an MP3 encoder should be available.
using (var reader = new MediaFoundationReader("test.wma"))
{
MediaFoundationEncoder.EncodeToMp3(reader, "test.mp3");
}
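Since the question asks about streams: here is a minimal sketch, assuming a NAudio version that includes StreamMediaFoundationReader (the stream-based counterpart of MediaFoundationReader):
using (var ms = new MemoryStream(File.ReadAllBytes("test.wma")))
using (var reader = new StreamMediaFoundationReader(ms))
{
// EncodeToMp3 still writes the MP3 to a file; encoding straight
// to a Stream is not covered by this helper
MediaFoundationEncoder.EncodeToMp3(reader, "test.mp3");
}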
On systems without an MP3 encoder, I tend to use LAME.exe and stream the input audio through stdin. See my article here for more info on this approach.
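That approach looks roughly like the sketch below. It is not the article verbatim: it assumes lame.exe is on the PATH and that the reader delivers 16-bit little-endian PCM (MediaFoundationReader's default).
using System.Diagnostics;
using System.Globalization;
using NAudio.Wave;

public static void EncodeWithLame(string wmaFile, string mp3File, int kbps = 128)
{
using (var reader = new MediaFoundationReader(wmaFile)) // decode WMA to PCM
{
var fmt = reader.WaveFormat;
// Tell lame the raw PCM format, since no WAV header is sent ("-" = read stdin)
string args = string.Format(CultureInfo.InvariantCulture,
"-r -s {0} --bitwidth 16 --signed --little-endian -m {1} -b {2} - \"{3}\"",
fmt.SampleRate / 1000.0, fmt.Channels == 1 ? "m" : "s", kbps, mp3File);
var psi = new ProcessStartInfo("lame.exe", args)
{
UseShellExecute = false,
RedirectStandardInput = true,
CreateNoWindow = true
};
using (var lame = Process.Start(psi))
{
reader.CopyTo(lame.StandardInput.BaseStream); // pipe PCM into lame
lame.StandardInput.Close(); // EOF lets lame finalize the MP3
lame.WaitForExit();
}
}
}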
Related
I want to transcribe a live RTMP stream using Google Speech-to-Text.
In the Google sample code the mic is the input source of the audio stream, but here I want to use the RTMP stream instead of the mic.
I am reading byte arrays using Xuggler and storing them in sharedQueue.
But my code is failing with the exception below:
io.grpc.StatusRuntimeException: CANCELLED: Failed to read message.
public class DataLoader2 implements Runnable {
static ArrayList<Byte> data = new ArrayList<Byte>();
static byte[] audioChunk = new byte[1150];
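// note: this one array is reused for every packet and the same reference is
// queued repeatedly below, so queued chunks can be overwritten before the
// consumer reads them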
static ByteBuffer buff;
private static void extractAudio(String rtmpSourceUrl) {
IMediaReader mediaReader = ToolFactory.makeReader(rtmpSourceUrl);
mediaReader.addListener(new MediaToolAdapter() {
private IContainer container;
@Override
public void onReadPacket(IReadPacketEvent event) {
event.getPacket().getByteBuffer().get(audioChunk);
try {
SpeechToText.sharedQueue.put(audioChunk);
} catch (InterruptedException e) {
}
}
@Override
public void onOpenCoder(IOpenCoderEvent event) {
buff = ByteBuffer.wrap(audioChunk);
container = event.getSource().getContainer();
}
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
/*
* if (DataLoader2.data.size() < 6400) {
* DataLoader2.data.add(event.getMediaData().getByteBuffer().get()); } else {
*
* for (byte audio : DataLoader2.data) { buff.put(audio); }
*
* byte[] combined = buff.array();
*
* try { SpeechToText.sharedQueue.put(combined); } catch (InterruptedException
* e) { e.printStackTrace(); }
*
* DataLoader2.data.clear(); buff.clear(); buff = ByteBuffer.wrap(audioChunk);
*
* }
*/
// System.out.println("Event:" + event.getMediaData().getByteBuffer().get());
// SpeechToText.sharedQueue.put(event.getMediaData().getByteBuffer().get());
}
@Override
public void onClose(ICloseEvent event) {
}
});
while (mediaReader.readPacket() == null) {
}
}
@Override
public void run() {
String rtmpSourceUrl = "rtmp://localhost:1935/livewowza/xyz";
extractAudio(rtmpSourceUrl);
}
}
public class SpeechToText {
private static final int STREAMING_LIMIT = 10000; // 10 seconds
public static final String RED = "\033[0;31m";
public static final String GREEN = "\033[0;32m";
public static final String YELLOW = "\033[0;33m";
// Creating shared object
public static volatile BlockingQueue<byte[]> sharedQueue = new LinkedBlockingQueue<byte[]>();
private static TargetDataLine targetDataLine;
private static int BYTES_PER_BUFFER = 6400; // buffer size in bytes
private static int restartCounter = 0;
private static ArrayList<ByteString> audioInput = new ArrayList<ByteString>();
private static ArrayList<ByteString> lastAudioInput = new ArrayList<ByteString>();
private static int resultEndTimeInMS = 0;
private static int isFinalEndTime = 0;
private static int finalRequestEndTime = 0;
private static boolean newStream = true;
private static double bridgingOffset = 0;
private static boolean lastTranscriptWasFinal = false;
private static StreamController referenceToStreamController;
private static ByteString tempByteString;
private static void start() {
ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
try (SpeechClient client = SpeechClient.create()) {
ClientStream<StreamingRecognizeRequest> clientStream;
responseObserver = new ResponseObserver<StreamingRecognizeResponse>() {
ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();
@Override
public void onComplete() {
System.out.println("!!!!!!!!!!!!!!!!!!!!!");
}
@Override
public void onError(Throwable arg0) {
System.out.println(arg0.getMessage());
}
@Override
public void onResponse(StreamingRecognizeResponse response) {
System.out.println("Inside onResponse ------------");
responses.add(response);
StreamingRecognitionResult result = response.getResultsList().get(0);
Duration resultEndTime = result.getResultEndTime();
resultEndTimeInMS = (int) ((resultEndTime.getSeconds() * 1000)
+ (resultEndTime.getNanos() / 1000000));
double correctedTime = resultEndTimeInMS - bridgingOffset + (STREAMING_LIMIT * restartCounter);
DecimalFormat format = new DecimalFormat("0.#");
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
if (result.getIsFinal()) {
System.out.print(GREEN);
System.out.print("\033[2K\r");
System.out.printf("%s: %s\n", format.format(correctedTime), alternative.getTranscript());
isFinalEndTime = resultEndTimeInMS;
lastTranscriptWasFinal = true;
} else {
System.out.print(RED);
System.out.print("\033[2K\r");
System.out.printf("%s: %s", format.format(correctedTime), alternative.getTranscript());
lastTranscriptWasFinal = false;
}
}
@Override
public void onStart(StreamController controller) {
referenceToStreamController = controller;
}
};
clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);
RecognitionConfig recognitionConfig = RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.LINEAR16).setLanguageCode("en-US")
.setSampleRateHertz(16000)
.build();
StreamingRecognitionConfig streamingRecognitionConfig = StreamingRecognitionConfig.newBuilder()
.setConfig(recognitionConfig).setInterimResults(true).build();
StreamingRecognizeRequest request = StreamingRecognizeRequest.newBuilder()
.setStreamingConfig(streamingRecognitionConfig).build(); // The first request in a streaming call
// has to be a config
clientStream.send(request);
System.out.println("Configuration request sent");
long startTime = System.currentTimeMillis();
while (true) {
Thread.sleep(5000);
long estimatedTime = System.currentTimeMillis() - startTime;
if (estimatedTime >= STREAMING_LIMIT) {
clientStream.closeSend();
referenceToStreamController.cancel(); // remove
if (resultEndTimeInMS > 0) {
finalRequestEndTime = isFinalEndTime;
}
resultEndTimeInMS = 0;
lastAudioInput = null;
lastAudioInput = audioInput;
audioInput = new ArrayList<ByteString>();
restartCounter++;
if (!lastTranscriptWasFinal) {
System.out.print('\n');
}
newStream = true;
clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);
request = StreamingRecognizeRequest.newBuilder().setStreamingConfig(streamingRecognitionConfig).build();
System.out.println(YELLOW);
System.out.printf("%d: RESTARTING REQUEST\n", restartCounter * STREAMING_LIMIT);
startTime = System.currentTimeMillis();
} else {
if ((newStream) && (lastAudioInput.size() > 0)) {
// if this is the first audio from a new request
// calculate amount of unfinalized audio from last request
// resend the audio to the speech client before incoming audio
double chunkTime = STREAMING_LIMIT / lastAudioInput.size();
// ms length of each chunk in previous request audio arrayList
if (chunkTime != 0) {
if (bridgingOffset < 0) {
// bridging Offset accounts for time of resent audio
// calculated from last request
bridgingOffset = 0;
}
if (bridgingOffset > finalRequestEndTime) {
bridgingOffset = finalRequestEndTime;
}
int chunksFromMS = (int) Math.floor((finalRequestEndTime - bridgingOffset) / chunkTime);
// chunks from MS is number of chunks to resend
bridgingOffset = (int) Math.floor((lastAudioInput.size() - chunksFromMS) * chunkTime);
// set bridging offset for next request
for (int i = chunksFromMS; i < lastAudioInput.size(); i++) {
request = StreamingRecognizeRequest.newBuilder().setAudioContent(lastAudioInput.get(i))
.build();
clientStream.send(request);
}
}
newStream = false;
}
tempByteString = ByteString.copyFrom(sharedQueue.take());
request = StreamingRecognizeRequest.newBuilder().setAudioContent(tempByteString).build();
audioInput.add(tempByteString);
}
clientStream.send(request);
}
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
DataLoader2 dataLoader = new DataLoader2();
Thread t = new Thread(dataLoader);
t.start();
SpeechToText.start();
}
}
FFmpeg command for PCM encoding:
ffmpeg -i rtmp://localhost:1935/liveapp/abc -c:a pcm_s16le -ac 1 -ar 16000 -f flv rtmp://localhost:1935/livewowza/xyz
I'm basically trying to record audio chunks from a WebRTC stream. I've been able to send the binary data with the help of this resource: HTML Audio Capture streaming to Node.js.
I'm using netty-socketio, as this library plays well with socket.io on the client side.
Here are my server endpoints:
server.addEventListener("audio-blob", byte[].class, (socketIOClient, bytes, ackRequest) -> {
byteArrayList.add(bytes);
});
server.addEventListener("audio-blob-end", Object.class, (socket, string, ackRequest) -> {
ByteArrayInputStream in = new ByteArrayInputStream(byteArrayList.getArray());
AudioInputStream audiIn = new AudioInputStream(in, getAudioFormat(), 48000l);
AudioFileFormat.Type fileType = AudioFileFormat.Type.WAVE;
File wavFile = new File("RecordAudio.wav");
AudioSystem.write(audiIn,fileType,wavFile);
});
The format settings:
public static AudioFormat getAudioFormat() {
float sampleRate = 48000;
int sampleSizeInBits = 8;
int channels = 2;
boolean signed = true;
boolean bigEndian = true;
AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits,
channels, signed, bigEndian);
return format;
}
I'm using this class to collect the byte arrays (and yes, I know the risks with this solution):
class ByteArrayList {
private List<Byte> bytesList;
public ByteArrayList() {
bytesList = new ArrayList<Byte>();
}
public void add(byte[] bytes) {
add(bytes, 0, bytes.length);
}
public void add(byte[] bytes, int offset, int length) {
for (int i = offset; i < (offset + length); i++) {
bytesList.add(bytes[i]);
}
}
public int size(){
return bytesList.size();
}
public byte[] getArray() {
byte[] bytes = new byte[bytesList.size()];
for (int i = 0; i < bytesList.size(); i++) {
bytes[i] = bytesList.get(i);
}
return bytes;
}
}
The generated WAV file only plays noise, though; no recording is present. What am I doing wrong?
While Googling around for answers I stumbled upon this resource on how to save a WAV file.
What I was doing wrong is that I was passing a fixed length to the AudioInputStream constructor:
new AudioInputStream(in, getAudioFormat(), 48000l)
I changed it to:
new AudioInputStream(in, getAudioFormat(), byteArrayList.getArray().length);
(Strictly, that third argument is the length in sample frames rather than bytes, so dividing the byte count by the format's frame size would be more accurate; the key point is that the length must reflect the data actually captured.)
I am using the code below in ASP.NET Web API for audio streaming, to allow an Android application to call the API and play songs.
public class AudioController : ApiController
{
ITrackRepository _TrackRepo = new TrackRepository();
public AudioController()
{
}
public HttpResponseMessage Get(int id)
{
var trackd = _TrackRepo.GetPlayfilepath(id);
string filename = System.Web.HttpContext.Current.Server.MapPath(trackd.Select(x => x.FilePath).FirstOrDefault());
var audio = new AudioStream(filename);
string fileExtension = System.IO.Path.GetExtension(filename);
var response = Request.CreateResponse();
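// note: Path.GetExtension includes the leading dot, so this yields
// e.g. "audio/.mp3" - trim the dot for a valid MIME type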
response.Content = new PushStreamContent(audio.WriteToStream, new MediaTypeHeaderValue("audio/" + fileExtension));
return response;
}
}
public class AudioStream
{
private readonly string _filename;
public AudioStream(string filename)
{
_filename = filename;
}
public async void WriteToStream(Stream outputStream, HttpContent content, TransportContext context)
{
try
{
var buffer = new byte[65536];
using (var video = File.Open(_filename, FileMode.Open, FileAccess.Read))
{
var length = (int)video.Length;
var bytesRead = 1;
while (length > 0 && bytesRead > 0)
{
bytesRead = video.Read(buffer, 0, Math.Min(length, buffer.Length));
await outputStream.WriteAsync(buffer, 0, bytesRead);
length -= bytesRead;
}
}
}
catch (HttpException ex)
{
return;
}
finally
{
outputStream.Close();
}
}
}
I am able to play the song; I tested it with VLC (stream) and it's working fine. The issue is when I try to play the same song in parallel in another player, it gives me the error below:
The process cannot access the file because it is being used by another process.
I completely understand the error, but I am not able to find any satisfactory solution.
One solution is to create a copy of the song before playing/streaming it and delete it on completion, but I don't think that is a good solution.
Please suggest.
I'm not at a computer with Visual Studio on it at the moment, but look at this overload of File.Open.
Try changing this line:
using (var video = File.Open(_filename, FileMode.Open, FileAccess.Read))
to
using (var video = File.Open(_filename, FileMode.Open, FileAccess.Read, FileShare.Read))
See if that helps. The overload you're using opens the file with FileShare.None, which is why the second player can't open it; FileShare.Read allows other handles to read the file concurrently.
Background
I have a RichTextBox control I am using essentially like a console in my WinForms application. Currently my application-wide logger posts messages using delegates and one of the listeners is this RTB. The logger synchronously sends lots of short (less than 100 char) strings denoting event calls, status messages, operation results, etc.
Posting lots of these short messages to the RTB using BeginInvoke keeps the UI responsive until heavy parallel processing starts logging lots of messages; then the UI starts posting items out of order, or the text falls far behind (hundreds of milliseconds). I know this because when the processing slows down or is stopped, the console keeps writing for some time afterwards.
My temporary solution was to invoke the UI synchronously and add a blocking-collection buffer, basically taking the many small items from the logger and combining them in a StringBuilder to be posted in aggregate to the RTB. The buffer posts items as they come if the UI can keep up, but if the queue gets too deep it aggregates them and then posts to the UI. The RTB is thus updated piecemeal and looks jumpy when lots of things are being logged.
Question
How can I run a RichTextBox control on its own UI thread to keep other buttons on the same Form responsive during frequent but small append operations? From research, I think I need to run an STA thread and call Application.Run() on it to put the RTB on its own thread, but the examples I found lacked substantive code samples and there don't seem to be any tutorials (perhaps because what I want to do is ill-advised?). Also, I wasn't sure if there were any pitfalls for a single Control being on its own thread relative to the rest of the Form (i.e., any issues closing the main form, or will the STA thread for the RTB just die with the form closing? Any special disposing? etc.).
This should demonstrate the issue once you add 3 Buttons and a RichTextBox to the form. What I essentially want to accomplish is factoring away the BufferedConsumer by having the RTB on its own thread. Most of this code was hacked out verbatim from my main application, so yes, it is ugly.
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Drawing;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
// Fields
private int m_taskCounter;
private static CancellationTokenSource m_tokenSource;
private bool m_buffered = true;
private static readonly object m_syncObject = new object();
// Properties
public IMessageConsole Application_Console { get; private set; }
public BufferedConsumer<StringBuilder, string> Buffer { get; private set; }
public Form1()
{
InitializeComponent();
m_tokenSource = new CancellationTokenSource();
Application_Console = new RichTextBox_To_IMessageConsole(richTextBox1);
Buffer =
new BufferedConsumer<StringBuilder, string>(
p_name: "Console Buffer",
p_appendBuffer: (sb, s) => sb.Append(s),
p_postBuffer: (sb) => Application_Console.Append(sb));
button1.Text = "Start Producer";
button2.Text = "Stop All";
button3.Text = "Toggle Buffering";
button1.Click += (o, e) => StartProducerTask();
button2.Click += (o, e) => CancelAllProducers();
button3.Click += (o, e) => ToggleBufferedConsumer();
}
public void StartProducerTask()
{
var Token = m_tokenSource.Token;
Task
.Factory.StartNew(() =>
{
var ThreadID = Interlocked.Increment(ref m_taskCounter);
StringBuilder sb = new StringBuilder();
var Count = 0;
while (!Token.IsCancellationRequested)
{
Count++;
sb.Clear();
sb
.Append("ThreadID = ")
.Append(ThreadID.ToString("000"))
.Append(", Count = ")
.AppendLine(Count.ToString());
if (m_buffered)
Buffer
.AppendCollection(sb.ToString()); // ToString mimics the real-world Logger passing strings rather than StringBuilders
else
Application_Console.Append(sb);
Sleep.For(1000);
}
}, Token);
}
public static void CancelAllProducers()
{
lock (m_syncObject)
{
m_tokenSource.Cancel();
m_tokenSource = new CancellationTokenSource();
}
}
public void ToggleBufferedConsumer()
{
m_buffered = !m_buffered;
}
}
public interface IMessageConsole
{
// Methods
void Append(StringBuilder p_message);
}
// http://stackoverflow.com/a/5706085/1718702
public class RichTextBox_To_IMessageConsole : IMessageConsole
{
// Constants
private const int WM_USER = 0x400;
private const int WM_SETREDRAW = 0x000B;
private const int EM_GETEVENTMASK = WM_USER + 59;
private const int EM_SETEVENTMASK = WM_USER + 69;
private const int EM_GETSCROLLPOS = WM_USER + 221;
private const int EM_SETSCROLLPOS = WM_USER + 222;
//Imports
[DllImport("user32.dll")]
private static extern IntPtr SendMessage(IntPtr hWnd, Int32 wMsg, Int32 wParam, ref Point lParam);
[DllImport("user32.dll")]
private static extern IntPtr SendMessage(IntPtr hWnd, Int32 wMsg, Int32 wParam, IntPtr lParam);
// Fields
private RichTextBox m_richTextBox;
private bool m_attachToBottom;
private Point m_scrollPoint;
private bool m_painting;
private IntPtr m_eventMask;
private int m_suspendIndex = 0;
private int m_suspendLength = 0;
public RichTextBox_To_IMessageConsole(RichTextBox p_richTextBox)
{
m_richTextBox = p_richTextBox;
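// Reading Handle forces the control's handle to be created, so Invoke
// is safe to call before the control is first shown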
var h = m_richTextBox.Handle;
m_painting = true;
m_richTextBox.DoubleClick += RichTextBox_DoubleClick;
m_richTextBox.MouseWheel += RichTextBox_MouseWheel;
}
// Methods
public void SuspendPainting()
{
if (m_painting)
{
m_suspendIndex = m_richTextBox.SelectionStart;
m_suspendLength = m_richTextBox.SelectionLength;
SendMessage(m_richTextBox.Handle, EM_GETSCROLLPOS, 0, ref m_scrollPoint);
SendMessage(m_richTextBox.Handle, WM_SETREDRAW, 0, IntPtr.Zero);
m_eventMask = SendMessage(m_richTextBox.Handle, EM_GETEVENTMASK, 0, IntPtr.Zero);
m_painting = false;
}
}
public void ResumePainting()
{
if (!m_painting)
{
m_richTextBox.Select(m_suspendIndex, m_suspendLength);
SendMessage(m_richTextBox.Handle, EM_SETSCROLLPOS, 0, ref m_scrollPoint);
SendMessage(m_richTextBox.Handle, EM_SETEVENTMASK, 0, m_eventMask);
SendMessage(m_richTextBox.Handle, WM_SETREDRAW, 1, IntPtr.Zero);
m_painting = true;
m_richTextBox.Invalidate();
}
}
public void Append(StringBuilder p_message)
{
var WatchDogTimer = Stopwatch.StartNew();
var MinimumRefreshRate = 2000;
m_richTextBox
.Invoke((Action)delegate
{
// Last resort cleanup
if (WatchDogTimer.ElapsedMilliseconds > MinimumRefreshRate)
{
// m_richTextBox.Clear(); // Real-world behaviour
// Sample App behaviour
Form1.CancelAllProducers();
}
// Stop Drawing to prevent flickering during append and
// allow Double-Click events to register properly
this.SuspendPainting();
m_richTextBox.SelectionStart = m_richTextBox.TextLength;
m_richTextBox.SelectedText = p_message.ToString();
// Cap out Max Lines and cut back down to improve responsiveness
if (m_richTextBox.Lines.Length > 4000)
{
var NewSet = new string[1000];
Array.Copy(m_richTextBox.Lines, 1000, NewSet, 0, 1000);
m_richTextBox.Lines = NewSet;
m_richTextBox.SelectionStart = m_richTextBox.TextLength;
m_richTextBox.SelectedText = "\r\n";
}
this.ResumePainting();
// AutoScroll down to display newest text
if (m_attachToBottom)
{
m_richTextBox.SelectionStart = m_richTextBox.Text.Length;
m_richTextBox.ScrollToCaret();
}
});
}
// Event Handler
void RichTextBox_DoubleClick(object sender, EventArgs e)
{
// Toggle
m_attachToBottom = !m_attachToBottom;
// Scroll to Bottom
if (m_attachToBottom)
{
m_richTextBox.SelectionStart = m_richTextBox.Text.Length;
m_richTextBox.ScrollToCaret();
}
}
void RichTextBox_MouseWheel(object sender, MouseEventArgs e)
{
m_attachToBottom = false;
}
}
public class BufferedConsumer<TBuffer, TItem> : IDisposable
where TBuffer : new()
{
// Fields
private bool m_disposed = false;
private Task m_consumer;
private string m_name;
private CancellationTokenSource m_tokenSource;
private AutoResetEvent m_flushSignal;
private BlockingCollection<TItem> m_queue;
// Constructor
public BufferedConsumer(string p_name, Action<TBuffer, TItem> p_appendBuffer, Action<TBuffer> p_postBuffer)
{
m_name = p_name;
m_queue = new BlockingCollection<TItem>();
m_tokenSource = new CancellationTokenSource();
var m_token = m_tokenSource.Token;
m_flushSignal = new AutoResetEvent(false);
m_token
.Register(() => { m_flushSignal.Set(); });
// Begin Consumer Task
m_consumer = Task.Factory.StartNew(() =>
{
//Handler
// .LogExceptions(ErrorResponse.SupressRethrow, () =>
// {
// Continuously consumes entries added to the collection, blocking-wait if empty until cancelled
while (!m_token.IsCancellationRequested)
{
// Block
m_flushSignal.WaitOne();
if (m_token.IsCancellationRequested && m_queue.Count == 0)
break;
// Consume all queued items
TBuffer PostBuffer = new TBuffer();
Console.WriteLine("Queue Count = " + m_queue.Count + ", Buffering...");
for (int i = 0; i < m_queue.Count; i++)
{
TItem Item;
m_queue.TryTake(out Item);
p_appendBuffer(PostBuffer, Item);
}
// Post Buffered Items
p_postBuffer(PostBuffer);
// Signal another Buffer loop if more items were Queued during post sequence
var QueueSize = m_queue.Count;
if (QueueSize > 0)
{
Console.WriteLine("Queue Count = " + QueueSize + ", Sleeping...");
m_flushSignal.Set();
if (QueueSize > 10 && QueueSize < 100)
Sleep.For(1000, m_token); //Allow Queue to build, reducing posting overhead if requests are very frequent
}
}
//});
}, m_token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool p_disposing)
{
if (!m_disposed)
{
m_disposed = true;
if (p_disposing)
{
// Release of Managed Resources
m_tokenSource.Cancel();
m_flushSignal.Set();
m_consumer.Wait();
}
// Release of Unmanaged Resources
}
}
// Methods
public void AppendCollection(TItem p_item)
{
m_queue.Add(p_item);
m_flushSignal.Set();
}
}
public static partial class Sleep
{
public static bool For(int p_milliseconds, CancellationToken p_cancelToken = default(CancellationToken))
{
//p_milliseconds
// .MustBeEqualOrAbove(0, "p_milliseconds");
// Exit immediate if cancelled
if (p_cancelToken != default(CancellationToken))
if (p_cancelToken.IsCancellationRequested)
return true;
var SleepTimer =
new AutoResetEvent(false);
// Cancellation Callback Action
if (p_cancelToken != default(CancellationToken))
p_cancelToken
.Register(() => SleepTimer.Set());
// Block on SleepTimer
var Canceled = SleepTimer.WaitOne(p_milliseconds);
return Canceled;
}
}
}
Posting an answer as per the OP's request:
You can integrate my example of a virtualized, high-performance, rich, highly customizable WPF log viewer into your existing WinForms application by using an ElementHost.
Full source code is in the link above.
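For reference, hosting a WPF control inside an existing WinForms form takes only a few lines with ElementHost (LogViewerControl below is a placeholder name for the viewer control in the linked source):
using System.Windows.Forms;
using System.Windows.Forms.Integration; // WindowsFormsIntegration.dll

public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
var host = new ElementHost { Dock = DockStyle.Fill };
host.Child = new LogViewerControl(); // the WPF UserControl from the linked project
Controls.Add(host);
}
}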
I am working on implementing SAML SLO through the HTTP-Redirect binding mechanism. Using deflate/inflate tools gives me a DataFormatException with an incorrect header check.
I tried this as a stand-alone test. Though I did not get a DataFormatException here, I observed that the whole message is not being returned.
import java.io.UnsupportedEncodingException;
import java.util.logging.Level;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
public class InflateDeflate {
public static void main(String[] args) {
String source = "This is the SAML String";
String outcome=null;
byte[] bytesource = null;
try {
bytesource = source.getBytes("UTF-8");
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
int byteLength = bytesource.length;
Deflater compresser = new Deflater();
compresser.setInput(bytesource);
compresser.finish();
byte[] output = new byte[byteLength];
int compressedDataLength = compresser.deflate(output);
outcome = new String(output);
String trimmedoutcome = outcome.trim();
//String trimmedoutcome = outcome; // behaves the same way as trimmed;
// Now try to inflate it
Inflater decompresser = new Inflater();
decompresser.setInput(trimmedoutcome.getBytes());
byte[] result = new byte[4096];
int resultLength = 0;
try {
resultLength = decompresser.inflate(result);
} catch (DataFormatException e) {
e.printStackTrace();
}
decompresser.end();
System.out.println("result length ["+resultLength+"]");
String outputString = null;
outputString = new String(result, 0, resultLength);
String returndoc = outputString;
System.out.println(returndoc);
}
}
Surprisingly, I get the result as [22] bytes when the original is [23] bytes, and the 'g' is missing after inflating.
Am I doing something fundamentally wrong here?
Java's String is a CharSequence of 16-bit chars, not bytes. Using new String(byte[]) runs your bytes through the platform's default character decoding, which can silently corrupt binary data such as deflate output, so the compressed bytes should never round-trip through a String. If you do need to build a String from bytes, at least specify a character encoding, new String(byte[], "UTF-8"), to prevent invalid character conversions.
Here's an example of compressing and decompressing:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.InflaterInputStream;
...
byte[] sourceData; // bytes to compress
String filename; // where to write
{
// compress the data
Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION);
deflater.setInput(sourceData);
deflater.finish();
byte[] compressed = new byte[sourceData.length + 64]; // deflate may slightly expand incompressible input
int compressedSize = deflater.deflate(compressed, 0, compressed.length, Deflater.FULL_FLUSH);
// write the data
OutputStream stream = new FileOutputStream(filename);
stream.write(compressed, 0, compressedSize);
stream.close();
}
{
byte[] uncompressedData = new byte[1024]; // where to store the data
// read the data
InputStream stream = new InflaterInputStream(new FileInputStream(filename));
// note: read may not fill the buffer in one call - keep reading until len == 0
int len, offset = 0;
while ((len = stream.read(uncompressedData, offset, uncompressedData.length - offset)) > 0) {
offset += len;
}
stream.close();
}