I ran cat /proc/iomem and found this line:
000a0000-000bffff : Video RAM area
Here the size is 20000. How is this calculated?
The calculation is
000bffff
-
000a0000
--------
1ffff
which is 131071 in decimal (and one less than hex 20000).
Did you mean 20000 to be a hex number? Since the range is inclusive of both endpoints, its size is 0xbffff - 0xa0000 + 1 = 0x20000 = 131072 bytes, i.e. 128 KiB.
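As a sketch, the same inclusive-range arithmetic can be checked with bash arithmetic (the +1 accounts for both endpoints belonging to the range):

```shell
# Size of an inclusive address range; +1 because both endpoints count.
start=0x000a0000
end=0x000bffff
printf 'size = %#x = %d bytes (%d KiB)\n' \
    $(( end - start + 1 )) $(( end - start + 1 )) $(( (end - start + 1) / 1024 ))
# prints: size = 0x20000 = 131072 bytes (128 KiB)
```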
I have a bunch of audio files and need to split each file on silence using SoX. However, some files have a very noisy background and some don't, so I can't use a single set of parameters for all of them. I'm trying to figure out how to separate out the noisy ones first. Here is what I got from sox input1.flac -n stat and sox input2.flac -n stat:
Samples read: 18207744
Length (seconds): 568.992000
Scaled by: 2147483647.0
Maximum amplitude: 0.999969
Minimum amplitude: -1.000000
Midline amplitude: -0.000015
Mean norm: 0.031888
Mean amplitude: -0.000361
RMS amplitude: 0.053763
Maximum delta: 0.858917
Minimum delta: 0.000000
Mean delta: 0.018609
RMS delta: 0.039249
Rough frequency: 1859
Volume adjustment: 1.000
and
Samples read: 198976896
Length (seconds): 6218.028000
Scaled by: 2147483647.0
Maximum amplitude: 0.999969
Minimum amplitude: -1.000000
Midline amplitude: -0.000015
Mean norm: 0.156168
Mean amplitude: -0.000010
RMS amplitude: 0.211787
Maximum delta: 1.999969
Minimum delta: 0.000000
Mean delta: 0.091605
RMS delta: 0.123462
Rough frequency: 1484
Volume adjustment: 1.000
The former has no noisy background and the latter does. I suspect I can use the mean and maximum delta values as a decider, because of the big gap between the two files.
Can anyone explain the meaning of those stats, or at least show me where I can look them up myself? (I tried the official documentation, but it doesn't explain them.) Many thanks.
I don't know how I've managed to miss stat in the SoX docs all this time, it's right there.
Length
length of the audio file in seconds
Scaled by
what the input is scaled by. By default 2^31-1, to go from 32-bit signed integer to [-1, 1]
Maximum amplitude
maximum sample value
Minimum amplitude
minimum sample value
Midline amplitude
aka mid-range, midpoint between the max and minimum values.
Mean norm
arithmetic mean of samples' absolute values
Mean amplitude
arithmetic mean of samples' values
RMS amplitude
root mean square, i.e. the square root of the mean of the squared sample values
Maximum delta
maximum difference between two successive samples
Minimum delta
minimum difference between two successive samples
Mean delta
arithmetic mean of differences between successive samples
RMS delta
root mean square of differences between successive samples
Rough frequency
estimate of the input's dominant frequency, in hertz (the method used is not documented)
Volume adjustment
the value that should be passed to sox's -v option so that the peak absolute amplitude becomes 1
Personally I'd rather use the stats effect, whose output I find much more practically useful.
As a measure to differentiate the more and less noisy audio, I'd try the difference between the highest and lowest sound levels. The quietest parts can never be quieter than the background noise alone, so if there is little difference, the audio is either noisy or just loud all the time, like a heavily compressed pop song. You could take the difference between the maximum and minimum RMS values, or between the peak level and the minimum RMS. The RMS window length should be kept fairly short, say between 10 ms and 200 ms, and if the audio has fade-in or fade-out sections, those should be trimmed away first (not handled in the code below).
audio="input1.flac"
width=0.01   # RMS window length in seconds (10 ms)
# Mix multi-channel files down to mono, then pull the peak level and the
# maximum/minimum windowed RMS levels out of the stats output (on stderr).
stats=$(sox "$audio" -n channels 1 stats -w "$width" 2>&1 |
    grep "Pk lev dB\|RMS Pk dB\|RMS Tr dB" |
    sed 's/[^0-9.-]*//g')
peak=$(head -n 1 <<< "$stats")
rmsmax=$(head -n 2 <<< "$stats" | tail -n 1)
rmsmin=$(tail -n 1 <<< "$stats")
rmsdif=$(bc <<< "scale=3; $rmsmax - $rmsmin")
pkmindif=$(bc <<< "scale=3; $peak - $rmsmin")
echo "
max RMS:  $rmsmax
min RMS:  $rmsmin
diff RMS: $rmsdif
peak-min: $pkmindif
"
The documentation is found in sox.pdf in the install directory.
For example, if you install the Windows 32-bit version of SoX 14.4.2, the PDF is found at C:\Program Files (x86)\sox-14-4-2\sox.pdf and the documentation for stat is on pages 35 - 36.
I also found a webpage version here.
I'd use the "mean norm" value as a decider. It works for me, especially if you get pops or clicks on the line. If the line is clean however, then Maximum Amplitude might be a better value to use (I notice your Maximum Amplitude is the same on both, so therefore do not use this in your case).
I want to find out which block the word "word" is stored in. I know blocks start at 0, so I thought that adding 1 to 2114 would be my answer, but it's not...
user#host:~$ strings -td dump.dd|grep "word"
2114 __strtsuper your word is stored here
I know -td brings me back the offset in decimal, but how can I calculate the block number? What do I need to do with the 2114 number?
strings offsets are in bytes.
A disk block is a group of bytes, commonly 512 (and on bigger disks 4096). Note that block sizes are already expressed in bytes, not bits, so no unit conversion is needed.
So, find out the block size of your filesystem and divide the strings offset by it; the integer quotient is the block number (blocks start at 0).
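For the offset in the question, assuming a 512-byte block size (an assumption; query your filesystem to be sure), integer division gives the block:

```shell
# Block number = byte offset / block size (integer division; blocks start at 0).
offset=2114        # byte offset reported by `strings -td`
blocksize=512      # assumed; check your filesystem (e.g. tune2fs -l on ext4)
echo $(( offset / blocksize ))   # prints 4
```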
I'm looking at the file /proc/net/dev and wondering about unit conversion of the receive bytes value.
Here's the part of the file I'm contemplating:
Inter-| Receive
face |bytes
eth0: 7060880392
ifconfig uses /proc/net/dev to produce the following:
eth0 Link encap:Ethernet
...
RX bytes:7060880392 (7.0 GB)
That's what I don't understand. Given that the unit of the value is bytes (rather than bits), I would have expected the conversion to GB to use divisions by 1024: 7060880392/1024/1024/1024 ≈ 6.6. But clearly ifconfig has used divisions by 1000 to convert from B to GB.
Can someone explain why they did this? I know bandwidth is generally expressed in bits; perhaps the labeling in /proc/net/dev is incorrect in referring to the unit of the value as bytes? I checked the manpage for proc, but there's not a lot of detail on this file.
The term GB uses base-10 prefixes (1 GB = 10^9 bytes), while GiB uses base-2 (1 GiB = 2^30 = 1073741824 bytes). Read more on Wikipedia: Binary prefix.
I'd make an educated guess that the implementer chose GB over GiB because the relevant information is simply how many bytes were sent or received, rather than how they divide into power-of-two units.
I am aware of possibly duplicate questions already on Stack Overflow, but those questions do not address mine directly. My understanding is that a 32-bit machine uses 32 bits to store memory addresses; therefore, the maximum memory it can have is 2^32 bits. However, 2^32 bits = 2^29 bytes = 2^29/10^9 GB ≈ 0.5 gigabyte.
I know that the answer should be 4 gigabytes, but I simply cannot figure out where my mistake is. Help!
I believe that 2^32 refers to the number of addressable bytes, not the total number of bits in memory. You can address 4 billion bytes (32 billion bits), i.e. 4 GB of memory. For instance:
Address 0 | Address 1 | ... | Address 2^32 - 1
----------------------------------------------
  8 bits  |  8 bits   | ... |     8 bits
EDIT:
"32-bit machine" usually refers to the width of the CPU's registers (not the amount of RAM). One register thus holds 32 bits, which can address 2^32 bytes of RAM.
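The arithmetic, checked with 64-bit shell arithmetic:

```shell
# 2^32 addressable bytes, expressed in GiB:
echo $(( (1 << 32) / 1024 / 1024 / 1024 ))   # prints 4
```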
EDIT:
Here is a good explanation on superuser:
https://superuser.com/questions/56540/32-bit-vs-64-bit-systems
I am wondering on the relationship between a block of samples and its time equivalent. Given my rough idea so far:
Number of samples played per second = total filesize / duration.
So say, I have a 1.02MB file and a duration of 12 sec (avg), I will have about 89,300 samples played per second. Is this right?
Are there other ways to compute this? For example, how can I know how much time a byte[1024] array corresponds to?
Generally speaking, for PCM samples you can divide the total length (in bytes) by the duration (in seconds) to get the number of bytes per second (for WAV files there will be some inaccuracy to account for the header). How these translate into samples depends on:
1) the sample rate
2) the bits used per sample, commonly 16 bits = 2 bytes
3) the number of channels, e.g. 2 for stereo
If you know 2) and 3) you can determine 1).
In your example, 89,300 bytes/second, assuming stereo and 16 bits per sample, would be 89300 / 4 ≈ 22 kHz sample rate.
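That last step as a one-liner (16-bit stereo means 4 bytes per sample frame):

```shell
# Sample rate ≈ bytes per second / bytes per frame (16-bit stereo = 4 bytes):
echo $(( 89300 / 4 ))   # prints 22325, i.e. roughly 22 kHz
```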
In addition to @BrokenGlass's very good answer, I'll just add that for uncompressed audio with a fixed sample rate, number of channels and bits per sample, the arithmetic is fairly straightforward. E.g. for "CD quality" audio we have a 44.1 kHz sample rate, 16 bits per sample, 2 channels (stereo), therefore the data rate is:
44100 * 16 * 2
= 1,411,200 bits / sec
= 176,400 bytes / sec
= 10 MB / minute (approx)