FUSE fseek unexpected behaviour with direct_io - fuse

I'm trying to write a FUSE filesystem that presents streamable music as mp3 files. I don't want to start streaming the audio when just the ID3v1.1 tag is read, so I mount the filesystem with direct_io and max_readahead=0.
But when I read the tag like this (which is also what libid3tag does), I get reads of 2752 bytes at an offset 2880 bytes before the end of the file:
char tmp[255];
FILE* f = fopen("foo.mp3", "r");
fseek(f, -128, SEEK_END);
fread(tmp, 1, 10, f);
Why is this? I expected a call to read with an offset exactly 128 bytes from the end and a size of 10.
The number of bytes read seems to vary somewhat.
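For reference, this is roughly how the reads can be observed from the filesystem side; a minimal sketch using the high-level FUSE API (mp3fs_read is just a placeholder name):
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <stdio.h>

/* Read handler that logs what the kernel actually requests. */
static int mp3fs_read(const char *path, char *buf, size_t size,
                      off_t offset, struct fuse_file_info *fi)
{
    (void)fi;
    fprintf(stderr, "read %s: offset=%lld size=%zu\n",
            path, (long long)offset, size);
    int filled = 0;
    /* ... copy up to `size` bytes of the stream into buf ... */
    return filled;   /* number of bytes actually placed in buf */
}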

I've had a similar issue and filed an issue with s3fs. Check out the issue: http://code.google.com/p/s3fs/issues/detail?can=2&q=&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&groupby=&sort=&id=241
Additionally, check out line 1611 in s3fs.cpp:
http://code.google.com/p/s3fs/source/browse/trunk/src/s3fs.cpp?r=316
// error check this
// fseek (pSourceFile , 0 , SEEK_END);

Related

Is partial data read possible from a pipe due to delay in writing?

I have a situation where one process is writing 512 bytes of data to a pipe every 100ms, and another process is continuously reading from the same pipe. It took three read operations to read the complete 512 bytes of data from the pipe. The first read returns only a portion of the data (e.g. 100 bytes), the next read returns zero bytes without any error set, and the last read reads the remaining data left in the pipe (e.g. 412 bytes).
I assume that the write operation is atomic, as 512 bytes is less than PIPE_BUF. If so, is it possible for a read operation to return partial data, especially given that it took three reads to read 512 bytes and the second read returned zero bytes?
// Create the named pipe
if (mkfifo("/tmp/my_pipe", 0666) == -1) {
perror("mkfifo");
return 1;
}
// Open the named pipe for reading and writing with non-
// blocking behavior
if ((fd = open("/tmp/my_pipe", O_RDWR | O_NONBLOCK)) == -1) {
perror("open");
return 1;
}
// Open the named pipe for reading with non-blocking behavior
if ((fd = open("/tmp/my_pipe", O_RDONLY | O_NONBLOCK)) == -1) {
perror("open");
return 1;
}
The man page for pipe (i.e., man 7 pipe) says
The communication channel provided by a pipe is a byte stream: there
is no concept of message boundaries.
This terse statement implies that reads can see what you call 'partial data' from the pipe; or rather, that there is no such thing as partial data, because it's a byte stream.
For what it's worth, TCP has the same feature.
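Because it is a byte stream, a reader that needs a complete 512-byte record simply loops until it has collected that many bytes. A minimal sketch, assuming a blocking descriptor (read_full is a made-up helper name):
#include <errno.h>
#include <unistd.h>

/* Read exactly "want" bytes from fd, looping over short reads.
 * Returns the number of bytes collected, or -1 on error.
 * A value smaller than "want" means EOF (all writers closed the pipe). */
ssize_t read_full(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, (char *)buf + got, want - got);
        if (n > 0)
            got += (size_t)n;
        else if (n == 0)
            break;              /* EOF: no writer left */
        else if (errno == EINTR)
            continue;           /* interrupted by a signal, retry */
        else
            return -1;          /* real error, e.g. EAGAIN when non-blocking */
    }
    return (ssize_t)got;
}
With O_NONBLOCK set, as in the question's code, read() returns -1 with errno set to EAGAIN while the pipe is empty but a writer still has it open, and 0 only when no writer is left, so a non-blocking version would have to wait and retry on EAGAIN rather than treat it as an error.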

How to detect a basic audio signal into a much bigger one (mpg123 output signal)

I am new to signal processing and I don't really understand the basics (and more). Sorry in advance for any mistakes in my understanding so far.
I am writing C code to detect a basic signal (an 18 Hz simple sinusoid of 2 sec duration; generating it with Audacity is pretty simple) in a much bigger mp3 file. I read the mp3 file and copy it until I match the sound signal.
The signal to match is { 1st channel: 18 Hz sine signal, 2nd channel: nothing/doesn't matter }.
To match the sound, I am calculating the frequency content of the mp3 until I find a good percentage of 18 Hz for ~2 sec. As this frequency is not very common, I don't have to match it very precisely.
I use mpg123 to convert my file and fill the buffers with what it returns. I initialised it to convert the mp3 to mono RAW audio:
init:
int ret;
const long *rates;
size_t rate_count, i;
mpg123_rates(&rates, &rate_count);
mpg123_handle *m = mpg123_new(NULL, &ret);
if(m == NULL)
{
    //err
} else {
    mpg123_format_none(m);
    for(i=0; i<rate_count; ++i)
        mpg123_format(m, rates[i], MPG123_MONO, MPG123_ENC_SIGNED_32);
    mpg123_open_feed(m);
}
(...)
unsigned char out[8*MAX_MP3_BUF_SIZE];
ret = mpg123_decode(m, buf->data, buf->size, out, 8*MAX_MP3_BUF_SIZE, &size);
(...)
But I have no idea what to do with the resulting buffer to calculate the FFT and get the frequency.
//FREQ Calculation with libfftw3
int transform_size = MAX_MP3_BUF_SIZE * 2;
fftw_complex *fftout = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * transform_size);
double *fftin = (double*) fftw_malloc(sizeof(double) * transform_size); // r2c input must be real (double) samples
fftw_plan p = fftw_plan_dft_r2c_1d(transform_size, fftin, fftout, FFTW_ESTIMATE);
I can get good RAW audio (PCM?) into a buffer (if I write it to a file, it can be read and converted into a wave file with sox):
sox --magic -r 44100 -e signed -b 32 -c 1 rps.raw rps.wav
Any help is appreciated. My knowledge of signal processing is poor; I am not even sure what to do with the FFT to get the frequency of the signal. The code is just FYI; it is contained in a much bigger project (for which a simple grep is not an option).
Don't use MP3 for this. There's a good chance your 18 Hz will disappear or at least become distorted. 18 Hz is well below the audible range, and MP3 and other lossy algorithms use a variety of techniques to remove sounds that we're not going to hear.
Assuming PCM, since you only need one frequency band, consider using the Goertzel algorithm. This is more efficient than FFT/DFT for your use case.
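For what it's worth, a minimal Goertzel sketch in C could look like the following. This is only an illustration under my own assumptions: the decoded audio is already mono and converted to double samples, and the function and parameter names (goertzel_power, samples, sample_rate, target_freq) are made up for this example.
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Power of the DFT bin closest to target_freq in a block of mono samples.
double goertzel_power(const double *samples, int num_samples,
                      double sample_rate, double target_freq)
{
    int k = (int)(0.5 + (num_samples * target_freq) / sample_rate); // nearest bin
    double omega = (2.0 * M_PI * k) / num_samples;
    double coeff = 2.0 * cos(omega);
    double s_prev = 0.0, s_prev2 = 0.0;

    for (int i = 0; i < num_samples; ++i) {
        double s = samples[i] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev = s;
    }
    // Squared magnitude of the bin (no normalisation applied).
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2;
}
Called on blocks that span at least a few periods of 18 Hz (a few hundred milliseconds at 44.1 kHz), a sustained rise of this power relative to the block's total energy is a simple way to flag the 2-second marker tone.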

determining the optimal buffer size for file read in linux

I am writing a C program which reads from stdin and writes to stdout, but it buffers the data so that a write is performed only after it reads a specific number of bytes (= SIZE).
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for read() and write() */
#define SIZE 100
int main()
{
char buf[SIZE];
int n = 0;
//printf("Block size = %d\n", BUFSIZ);
while ((n = read(0, buf, sizeof(buf))) > 0)
write(1, buf, n);
exit(0);
}
I am running this program on Ubuntu 18.04 hosted on Oracle VirtualBox (4GB RAM, 2 cores), and testing the program for different values of the buffer size. I have redirected standard input to come from a file (which contains random numbers created dynamically) and standard output to go to /dev/null. Here is the shell script used to run the test:
#!/bin/bash
# $1 - step size (bytes)
# $2 - start size (bytes)
# $3 - stop size (bytes)
echo "Changing buffer size from $2 to $3 in steps of $1, and measuring time for copying."
buff_size=$2
echo "Test Data" >testData
echo "Step Size:(doubles from previous size) Start Size:$2 Stop Size:$3" >>testData
while [ $buff_size -le $3 ]
do
echo "" >>testData
echo -n "$buff_size," >>testData
gcc -DSIZE=$buff_size copy.c # Compile the program for cat, with new buffer size
dd bs=1000 count=1000000 </dev/urandom >testFile #Create testFile with random data of 1GB
(/usr/bin/time -f "\t%U, \t%S," ./a.out <testFile 1>/dev/null) 2>>testData
buff_size=$(($buff_size * 2))
rm -f a.out
rm -f testFile
done
I am measuring the time taken to execute the program and tabulate it. A test run produces the following data:
Test Data
Step Size:(doubles from previous size) Start Size:1 Stop Size:524288
Buffer size (bytes), User time (s), System time (s)
1, 5.94, 17.81,
2, 5.53, 18.37,
4, 5.35, 18.37,
8, 5.58, 18.78,
16, 5.45, 18.96,
32, 5.96, 19.81,
64, 5.60, 18.64,
128, 5.62, 17.94,
256, 5.37, 18.33,
512, 5.70, 18.45,
1024, 5.43, 17.45,
2048, 5.22, 17.95,
4096, 5.57, 18.14,
8192, 5.88, 17.39,
16384, 5.39, 18.64,
32768, 5.27, 17.78,
65536, 5.22, 17.77,
131072, 5.52, 17.70,
262144, 5.60, 17.40,
524288, 5.96, 17.99,
I don't see any significant variation in user+system time as I use different block sizes. But theoretically, as the block size becomes smaller, many more system calls are made for the same file size, and it should take more time to execute. I have seen test results in the book 'Advanced Programming in the UNIX Environment' by Richard Stevens for a similar test, which show that user+system time drops significantly if the buffer size used in the copy is close to the block size. (In my case, the block size is 4096 bytes on an ext4 partition.)
Why am I not able to reproduce these results? Am I missing some factors in these tests?
You did not remove the line #define SIZE 100 from your source code, so the value given via the command-line option (-DSIZE=...) only takes effect above this #define; below it, SIZE is 100 again, whatever the script passed. On my compiler I get a warning about the redefinition at compile time (<command-line>:0:0: note: this is the location of the previous definition).
If you comment out the #define, you should be able to fix this error.
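An alternative to commenting it out (my own suggestion, not from the original post) is to guard the default so that -DSIZE from the script takes precedence:
#ifndef SIZE        /* use the command-line -DSIZE value if one was given */
#define SIZE 100    /* otherwise fall back to a default of 100 bytes */
#endif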
Another aspect which comes to mind:
If you create a file on a machine and read it right away after that, it will be in the OS's disk cache (which is large enough to store all of this file), so the actual disk block size won't have much of an influence here.
Stevens's book was written in 1992, when RAM was far more expensive than today, so maybe some information in there is outdated. I doubt that newer editions of the book have removed passages like this, though, because in general they are still true.
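If you want the runs to actually hit the disk, one option (just a suggestion on my part, not something from the book or the original test) is to evict the test file from the page cache before each timed run, for example with posix_fadvise:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Best-effort eviction of a file's pages from the Linux page cache.
 * Dirty pages are flushed first so POSIX_FADV_DONTNEED can drop them. */
static void drop_file_cache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return;
    }
    fsync(fd);                                      /* flush dirty pages (works on a read-only fd on Linux) */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);   /* ask the kernel to discard the cached pages */
    close(fd);
}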

How can I detect whether a WAV file has a 44 or 46-byte header?

I've discovered it is dangerous to assume that all PCM wav audio files have 44 bytes of header data before the samples begin. Though this is common, many applications (ffmpeg, for example) will generate wavs with a 46-byte header, and ignoring this fact while processing will result in a corrupt and unreadable file. But how can you detect how long the header actually is?
Obviously there is a way to do this, but I searched and found little discussion about it. A LOT of audio projects out there assume 44 (or conversely, 46) depending on the author's own context.
You should be checking all of the header data to see what the actual sizes are. Broadcast Wave Format files will contain an even larger extension subchunk. WAV and AIFF files from Pro Tools have even more extension chunks that are undocumented, as well as data after the audio. If you want to be sure where the sample data begins and ends, you need to actually look for the data chunk ('data' for WAV files and 'SSND' for AIFF).
As a review, all WAV subchunks conform to the following format:
Subchunk Descriptor (4 bytes)
Subchunk Size (4 byte integer, little endian)
Subchunk Data (size is Subchunk Size)
This is very easy to process. All you need to do is read the descriptor; if it's not the one you are looking for, read the data size and skip ahead to the next one. A simple Java routine to do that would look like this:
//
// Quick note for people who don't know Java well:
// 'in.read(...)' returns -1 when the stream reaches
// the end of the file, so 'if (in.read(...) < 0)'
// is checking for the end of file.
//
public static void printWaveDescriptors(File file)
throws IOException {
try (FileInputStream in = new FileInputStream(file)) {
byte[] bytes = new byte[4];
// Read first 4 bytes.
// (Should be RIFF descriptor.)
if (in.read(bytes) < 0) {
return;
}
printDescriptor(bytes);
// First subchunk will always be at byte 12.
// (There is no other dependable constant.)
in.skip(8);
for (;;) {
// Read each chunk descriptor.
if (in.read(bytes) < 0) {
break;
}
printDescriptor(bytes);
// Read chunk length.
if (in.read(bytes) < 0) {
break;
}
// Skip the length of this chunk.
// Next bytes should be another descriptor or EOF.
int length = (
Byte.toUnsignedInt(bytes[0])
| Byte.toUnsignedInt(bytes[1]) << 8
| Byte.toUnsignedInt(bytes[2]) << 16
| Byte.toUnsignedInt(bytes[3]) << 24
);
in.skip(Integer.toUnsignedLong(length));
}
System.out.println("End of file.");
}
}
private static void printDescriptor(byte[] bytes)
throws IOException {
String desc = new String(bytes, "US-ASCII");
System.out.println("Found '" + desc + "' descriptor.");
}
For example here is a random WAV file I had:
Found 'RIFF' descriptor.
Found 'bext' descriptor.
Found 'fmt ' descriptor.
Found 'minf' descriptor.
Found 'elm1' descriptor.
Found 'data' descriptor.
Found 'regn' descriptor.
Found 'ovwf' descriptor.
Found 'umid' descriptor.
End of file.
Notably, here both 'fmt ' and 'data' legitimately appear in between other chunks because Microsoft's RIFF specification says that subchunks can appear in any order. Even some major audio systems that I know of get this wrong and don't account for that.
So if you want to find a certain chunk, loop through the file checking each descriptor until you find the one you're looking for.
The trick is to look at the "Subchunk1Size", which is a 4-byte integer beginning at byte 16 of the header. In a normal 44-byte wav, this integer will be 16 (bytes 0x10, 0, 0, 0). If it's a 46-byte header, this integer will be 18 (bytes 0x12, 0, 0, 0), or maybe even higher if there is extra extensible metadata (rare?).
The extra data itself (if present) begins at byte 36.
So a simple C# program to detect the header length would look like this:
static void Main(string[] args)
{
byte[] bytes = new byte[4];
FileStream fileStream = new FileStream(args[0], FileMode.Open, FileAccess.Read);
fileStream.Seek(16, 0);
fileStream.Read(bytes, 0, 4);
fileStream.Close();
int Subchunk1Size = BitConverter.ToInt32(bytes, 0);
if (Subchunk1Size < 16)
Console.WriteLine("This is not a valid wav file");
else
switch (Subchunk1Size)
{
case 16:
Console.WriteLine("44-byte header");
break;
case 18:
Console.WriteLine("46-byte header");
break;
default:
Console.WriteLine("Header contains extra data and is larger than 46 bytes");
break;
}
}
In addition to Radiodef's excellent reply, I'd like to add 3 things that aren't obvious.
The only rule for WAV files is the FMT chunk comes before the DATA chunk. Apart from that, you will find chunks you don't know about at the beginning, before the DATA chunk and after it. You must read the header for each chunk to skip forward to find the next chunk.
The FMT chunk is commonly found in 16 byte and 18 byte variations, but the spec actually allows more than 18 bytes as well.
If the FMT chunk's size field says greater than 16, bytes 17 and 18 also specify how many extra bytes there are, so if they are both zero, you end up with an 18-byte FMT chunk identical to the 16-byte one.
It is safe to read in just the first 16 bytes of the FMT chunk and parse those, ignoring any more.
Why does this matter? Not much anymore, but Windows XP's Media Player could play 16-bit WAV files, yet would only play 24-bit WAV files if the FMT chunk was the extended (18+ byte) version. There used to be a lot of complaints that "Windows doesn't play my 24 bit WAV files", but if the file had an 18-byte FMT chunk, it would... Microsoft fixed that sometime during the early days of Windows 7, so 24-bit files with 16-byte FMT chunks work fine now.
Chunks with odd sizes occur quite often, mostly when a 24-bit mono file is made. The spec is unclear here, but the chunk size field stores the actual (odd) data length, and a pad byte (zero) is added after the chunk, before the start of the next chunk. So chunks always start on even boundaries, but the chunk size itself is stored as the actual odd value.
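To tie these points together, here is a small C sketch (my own illustration, not code from either answer) that walks the chunks, honours the little-endian sizes, and skips the pad byte after odd-sized chunks in order to find where the sample data starts:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Walk the RIFF chunks of an open WAV file and return the file offset of
 * the 'data' chunk's samples, or -1 if the file is not RIFF/WAVE or has
 * no data chunk. Odd-sized chunks are followed by one pad byte. */
long find_data_chunk(FILE *f)
{
    unsigned char hdr[12], chunk[8];

    if (fread(hdr, 1, 12, f) != 12 ||
        memcmp(hdr, "RIFF", 4) != 0 || memcmp(hdr + 8, "WAVE", 4) != 0)
        return -1;

    while (fread(chunk, 1, 8, f) == 8) {
        uint32_t size = chunk[4] | (chunk[5] << 8) |
                        ((uint32_t)chunk[6] << 16) | ((uint32_t)chunk[7] << 24);
        if (memcmp(chunk, "data", 4) == 0)
            return ftell(f);                       /* samples start here */
        if (fseek(f, (long)(size + (size & 1)), SEEK_CUR) != 0)
            return -1;                             /* skip body + pad byte */
    }
    return -1;
}
Whatever offset this walk reports is the real start of the samples, whether the header turned out to be 44 bytes, 46 bytes, or something much larger.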

LibAV - what approach to take for realtime audio and video capture?

I'm using libav to encode raw RGB24 frames to h264 and mux them into flv. This works fine and I've streamed for more than 48 hours without any problems! My next step is to add audio to the stream. I'll be capturing live audio and I want to encode it in real time using speex, mp3 or nellymoser.
Background info
I'm new to digital audio and therefore I might be doing things wrong. But basically my application gets a "float" buffer with interleaved audio. This "audioIn" function gets called by the application framework I'm using. The buffer contains 256 samples per channel,
and I have 2 channels. Because I might be mixing terminology, this is how I use the
data:
// input = array with audio samples
// bufferSize = 256
// nChannels = 2
void audioIn(float * input, int bufferSize, int nChannels) {
// convert from float to S16
short* buf = new signed short[bufferSize * 2];
for(int i = 0; i < bufferSize; ++i) { // loop over all samples
int dx = i * 2;
buf[dx + 0] = (float)input[dx + 0] * numeric_limits<short>::max(); // convert frame of the first channel
buf[dx + 1] = (float)input[dx + 1] * numeric_limits<short>::max(); // convert frame of the second channel
}
// add this to the libav wrapper.
av.addAudioFrame((unsigned char*)buf, bufferSize, nChannels);
delete[] buf;
}
Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my wrapper av.addAudioFrame() function. In this function I create a buffer before I encode the audio. From what I read, the AVCodecContext of the audio encoder sets the frame_size. This frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). I think this because of what is documented here.
In particular, this line:
If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last. (Please correct me here if I'm wrong about this.)
After encoding I call av_interleaved_write_frame() to actually write the frame.
When I use mp3 as the codec, my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (flv, tcp), disconnects with the message "Frame too large: 14485504". This message is generated because the rtmp server is getting a frame which is way too big, and this is probably because I'm not interleaving correctly with libav.
Questions:
There are quite a few bits I'm not sure of, even after going through the source code of libav, so I hope someone has a working example of encoding audio that comes from a buffer outside libav (i.e. from your own application). How do you create a buffer which is large enough for the encoder? How do you make the "realtime" streaming work when you need to wait for this buffer to fill up?
As I wrote above, I need to keep track of a buffer before I can encode. Does someone else have some code which does this? I'm using AVAudioFifo now. The functions which encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be (a rough sketch of the pattern follows below).
I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any debug information. How can I make libav more verbose?
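To illustrate the buffering question, this is roughly the pattern I'm aiming for with AVAudioFifo (a rough sketch, not working code; fifo and enc_ctx are placeholders, the fifo is assumed to have been created with av_audio_fifo_alloc() for the encoder's interleaved S16 format and channel count, and it assumes a reasonably recent libav/FFmpeg that has av_frame_alloc()):
#include <libavcodec/avcodec.h>
#include <libavutil/audio_fifo.h>

// Push nb_samples interleaved S16 samples into the fifo, then encode as
// many full frames of enc_ctx->frame_size samples as are available.
static int queue_and_encode(AVAudioFifo *fifo, AVCodecContext *enc_ctx,
                            short *samples, int nb_samples)
{
    void *data[1] = { samples };
    if (av_audio_fifo_write(fifo, data, nb_samples) < nb_samples)
        return -1;

    while (av_audio_fifo_size(fifo) >= enc_ctx->frame_size) {
        AVFrame *frame = av_frame_alloc();
        frame->nb_samples     = enc_ctx->frame_size;
        frame->format         = enc_ctx->sample_fmt;
        frame->channel_layout = enc_ctx->channel_layout;
        if (av_frame_get_buffer(frame, 0) < 0) {
            av_frame_free(&frame);
            return -1;
        }
        av_audio_fifo_read(fifo, (void **)frame->data, enc_ctx->frame_size);

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;   // let the encoder allocate the packet data
        pkt.size = 0;
        int got_packet = 0;
        if (avcodec_encode_audio2(enc_ctx, &pkt, frame, &got_packet) == 0 && got_packet) {
            // ...rescale pkt.pts/dts to the stream time base and call
            // av_interleaved_write_frame() here...
            av_packet_unref(&pkt);
        }
        av_frame_free(&frame);
    }
    return 0;
}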
Thanks!
