I am starting to get into hybrid mixing.
I’m using a Steinberg UR824 (24-bit) interface with Cubase Pro 12, with an effects loop running through Warm Audio 1176 and WA2A (LA-2A clone) compressors.
I’m working in Cubase at 32-bit float. Should I dither before the output to the effects loop? Is this going to degrade the overall sound quality with this setup? Should I be working at 24-bit in Cubase? Should I upgrade my interface to 32-bit? Any other suggestions?
Many thanks
My goal is to stream a USB webcam from a Raspberry Pi using VLC. The generated stream should be viewable with simple HTML in the most popular browsers.
So I use a simple <video> element in my HTML:
<video id="video" src="http://quarkcam:8080" autoplay="true" width="800" height="600" controls></video>
The VLC command to create the stream is the following (using OGG, which seemed to be the right choice for compatibility; feel free to correct me on this):
cvlc v4l2:///dev/video0 :v4l2-standard= :v4l2-width=800 :v4l2-height=600 :live-caching=100 :sout="#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=2,samplerate=4410,scodec=none,fps=15}:http{mux=ogg,dst=:8080/}" :no-sout-all :sout-keep
While this technically works, I have to reduce the resolution to 800x600 and the frame rate to 15 fps on a Raspberry Pi 4 to make it run without constant buffering (theoretical maximum from the webcam: 30 fps at 1600x1200).
Are there better vcodec options that would produce a stream that better fits the Pi's hardware capabilities and can STILL be simply embedded in an HTML page? I do NOT need to get the maximum out of the hardware, but I would at least like a stable 30 fps stream.
The Raspberry Pi 4, while much more powerful than previous iterations, is still not capable of transcoding video and streaming it in parallel, at least on the default Pi OS.
I suggest you use MotionEyeOS: https://github.com/motioneye-project/motioneyeos. It is supposed to work on the Raspberry Pi 4 with Raspbian OS, kernel 4.19 (raspbian).
https://github.com/motioneye-project/motioneyeos/wiki/Installation
A well-documented tutorial is given here: https://randomnerdtutorials.com/install-motioneyeos-on-raspberry-pi-surveillance-camera-system/
Thank You,
Have a Great Day
Naveen
I am currently working on a Raspberry Pi (Raspbian Jessie/Stretch). The issue is that I want to connect two FTDI FT2232H devices serially at 12 Mbps, but because 12 Mbps is not a standard speed, Raspbian does not let me add that baud rate. I would like to know if someone has transmitted at that speed, or if someone knows how to achieve a bit rate of 12 Mbps given Raspbian's maximum baud rate of 4,000,000.
PS: I changed the UART clock to 64,000,000, modified the "termbits.h" header and created termios structures, but nothing worked.
Thanks.
The data sheet for the FT2232H does advertise support for 12 Mbaud (not 12 Mbps). But it looks like it comes in different modules with support for RS232, RS422, and RS485, the most typical being RS232.
I've never heard of anyone operating an RS232 connection at 12,000,000 baud. The typical maximum that almost everything supports is 115200. The highest I've seen is 921600. Typical RS232 cables start running into interference issues at the higher baud rates.
I suspect the 12 Mbaud spec is for RS422/RS485 operation, which requires different cabling and is designed for higher speeds.
If you're using an FT2232H with RS232, the speeds you're looking for are likely unrealistic. If you're using it with RS422/RS485 you can probably get there, but it will be a much more specialized endeavor. It does look like Linux supports RS485, but there's not nearly as much documentation out there as for RS232.
Can you provide any more information about the USB adapters you're using?
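For what it's worth, on the Linux side a non-standard rate can be requested without patching headers via the termios2/BOTHER interface; the driver then picks the closest divisor it can actually generate. A minimal sketch, assuming the FT2232H shows up as /dev/ttyUSB0 (whether 12 Mbaud survives the cabling is a separate question, per the above):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/termbits.h>   /* struct termios2, BOTHER, TCGETS2/TCSETS2 */

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY); /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }

    struct termios2 tio;
    if (ioctl(fd, TCGETS2, &tio) < 0) { perror("TCGETS2"); return 1; }

    tio.c_cflag &= ~CBAUD;       /* drop the standard Bxxx rate bits    */
    tio.c_cflag |= BOTHER;       /* use the custom speed fields instead */
    tio.c_ispeed = 12000000;     /* requested input rate, in baud       */
    tio.c_ospeed = 12000000;     /* requested output rate, in baud      */

    if (ioctl(fd, TCSETS2, &tio) < 0) { perror("TCSETS2"); return 1; }

    /* read()/write() on fd now run at the negotiated rate */
    close(fd);
    return 0;
}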
I had an assignment for college where we needed to play a precompiled WAV as an integer array through the PWM and DAC. Now, I wanted more of a challenge, so I went out of my way and created an audio DAC over USB using the microcontroller in question: the STM32F051. It basically listens to my sound card output using a WASAPI loopback recorder, changes the resolution from 16 to 12 bits (since the DAC on the STM32 only has 12-bit resolution) and sends it over USART, using 10x the sample rate as the baud rate (in my case 960000). All done in C#.
On the microcontroller I simply use an interrupt for the USART and push the received data to the DAC.
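A rough sketch of what that interrupt-driven path might look like, assuming STM32F0 CMSIS register names and low-byte-first framing (both assumptions; the actual code is in the linked project below):

#include "stm32f0xx.h"

static volatile uint16_t sample;      /* 12-bit sample being assembled */
static volatile uint8_t  have_low;    /* 0 = expecting low byte, 1 = expecting high byte */

void USART1_IRQHandler(void)
{
    if (USART1->ISR & USART_ISR_RXNE) {           /* a byte has arrived */
        uint8_t b = (uint8_t)USART1->RDR;         /* reading RDR clears RXNE */
        if (!have_low) {
            sample = b;                           /* low byte first (assumption) */
            have_low = 1;
        } else {
            sample |= (uint16_t)b << 8;
            have_low = 0;
            DAC->DHR12R1 = sample & 0x0FFF;       /* push straight to the DAC output */
        }
    }
}

Because the DAC is updated the instant the second byte arrives, any variation in interrupt latency translates directly into timing error on the output, which is where the jitter suspicion below comes from.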
It works pretty well, much better than PWM, and at a decent sample frequency of 48kHz.
But... here it comes: when there is some (mostly) high-pitched symphonic melody, it starts to sound "wobbly".
Here is a video where you can hear it: https://youtu.be/xD3uTP9etuA?t=88
I read up a bit on the internet about DIY DACs, and someone somewhere (I don't remember where) mentioned that MCUs in general have interrupt jitter. So my basic question is: is interrupt jitter actually causing this? If so, are there ways to limit the jitter?
Or is this something entirely different?
I am thinking of trying to compact the PCM data sent over serial. As said before, the resolution is 12 bits, but samples are sent as packets of two 8-bit bytes forming 16 bits, hence the data rate is twice the sample rate. My plan is to shift the 12 bits to the MSB and add four bits of the next 12-bit value to the current 16-bit variable, so only 12 transfers are needed instead of 16 per 8 samples. (I might read up on more efficient ways of compacting data for transport.) I would then put the samples in a buffer and use another timer that triggers at 48 kHz to send the samples to the DAC. Would this concept work? Or would I just be wasting time?
For code, here is the project: https://github.com/EldinZenderink/SoundOverSerial
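For what it's worth, the buffered, timer-driven approach described above could look roughly like the following sketch (again assuming STM32F0 CMSIS names and a timer already configured to update at 48 kHz; the 12-bit packing step is left out for clarity):

#include "stm32f0xx.h"

#define BUF_SIZE 512                              /* power of two for cheap wrap-around */
static volatile uint16_t buf[BUF_SIZE];
static volatile uint16_t head, tail;

void USART1_IRQHandler(void)                      /* producer: only stores samples */
{
    static uint16_t partial;
    static uint8_t  have_low;

    if (USART1->ISR & USART_ISR_RXNE) {
        uint8_t b = (uint8_t)USART1->RDR;
        if (!have_low) {
            partial = b;
            have_low = 1;
        } else {
            partial |= (uint16_t)b << 8;
            have_low = 0;
            buf[head & (BUF_SIZE - 1)] = partial & 0x0FFF;
            head++;
        }
    }
}

void TIM6_DAC_IRQHandler(void)                    /* consumer: fires at 48 kHz */
{
    TIM6->SR &= ~TIM_SR_UIF;                      /* clear the update flag */
    if (head != tail) {
        DAC->DHR12R1 = buf[tail & (BUF_SIZE - 1)];
        tail++;
    }
    /* else: underrun; hold the last value (or output mid-scale) */
}

Decoupling reception from output this way means the USART interrupt only has to keep up on average; the DAC timing is then set by the timer rather than by when bytes happen to arrive, which should take the arrival jitter out of the output.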
I'm looking into rendering frames at a high rate (ideally close to the maximum monitor rate) and I was wondering if anyone had an idea at what level I should start looking: kernel/driver level (OS space)? X11 level? svgalib (userspace)?
On a modern computer, you can do it using the ordinary tools and APIs for graphics. If you have full frames of random pixels, a simple bit blit from an in-memory buffer will perform more than adequately. Without any optimization work, I found that I could generate more than 500 frames per second on Windows XP using 2008-era PCs.
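To illustrate that approach with something portable (a hypothetical sketch, not the original Win32 code): fill a CPU-side buffer each frame and blit it via a streaming texture, for example with SDL2.

#include <SDL2/SDL.h>
#include <stdlib.h>

int main(void)
{
    const int W = 800, H = 600;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("frames", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);
    Uint32 *pixels = malloc((size_t)W * H * sizeof(Uint32));

    for (int frame = 0; frame < 1000; frame++) {
        for (int i = 0; i < W * H; i++)
            pixels[i] = (Uint32)rand();                     /* fill the frame with random pixels */
        SDL_UpdateTexture(tex, NULL, pixels, W * (int)sizeof(Uint32));
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);                             /* one blit/present per frame */
    }

    free(pixels);
    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}

With vsync left off (no SDL_RENDERER_PRESENTVSYNC flag) this loop runs as fast as the copy and present allow, which is the same idea as the unoptimized blit mentioned above.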
I would like to play 32-bit audio from my computer. Is this possible? I know about the "AL_EXT_FLOAT32" extension, but does any hardware/Windows setup even support this? And if there is support for it, will it just be reconverted and played as 16-bit audio?
Is it possible to play 32-bit audio from a PC with consumer hardware?
As far as I'm aware, most consumer hardware only supports 16-bit audio output. Some premium sound devices support up to 24-bit. Most digital audio systems support 16- and 24-bit PCM streams. I have not seen consumer devices which support 32-bit PCM.
Most likely Windows will just scale it down or, with some devices, crash the sound driver (remember some Sound Blasters on XP).
Many music formats can be stored as (normalized) 32-bit floats, and will quite possibly be processed by OpenAL or the operating system's audio mixer in 32-bit float. The mixer then sends the data to the driver, which usually expects 16-bit.
I don't know OpenAL, but I think that format refers to the input audio format (or the internal mixer?), not the audio output it will produce?
That, or it supports high-end professional equipment?
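For reference, the way AL_EXT_FLOAT32 is typically exercised looks roughly like the sketch below (hypothetical and minimal): query the extension, look up the float format enum, and submit 32-bit float PCM. Whether it stays float past the mixer is up to the driver, as discussed above.

#include <AL/al.h>
#include <AL/alc.h>
#include <stdio.h>

int main(void)
{
    ALCdevice  *dev = alcOpenDevice(NULL);            /* default output device */
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    if (!alIsExtensionPresent("AL_EXT_FLOAT32")) {
        printf("AL_EXT_FLOAT32 not available\n");
        return 1;
    }
    ALenum fmt = alGetEnumValue("AL_FORMAT_STEREO_FLOAT32");

    static float samples[48000 * 2];                  /* one second of silent stereo float PCM */
    ALuint buf;
    alGenBuffers(1, &buf);
    alBufferData(buf, fmt, samples, sizeof(samples), 48000);
    /* ...attach the buffer to a source and play as usual... */

    alDeleteBuffers(1, &buf);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
    return 0;
}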