How to convert raw YUV image to jpg - linux

I have a raw image that was taken with v4l2-ctl after the camera had been setup like:
# media-ctl -d /dev/media0 -l "'rzg2l_csi2 10830400.csi2':1 -> 'CRU output':0 [1]"
# media-ctl -d /dev/media0 -V "'rzg2l_csi2 10830400.csi2':1 [fmt:UYVY8_2X8/1280x960 field:none]"
# media-ctl -d /dev/media0 -V "'ov5645 0-003c':0 [fmt:UYVY8_2X8/1280x960 field:none]"
and then the picture got snapped with:
# v4l2-ctl --device /dev/video0 --stream-mmap --stream-to=frame.raw --stream-count=1
Now I've tried multiple methods to convert this into a JPEG, but nothing seems to yield the expected output.
the raw file can be downloaded here: https://drive.google.com/file/d/1VqXnrJDYbzdtSsWfTlm2mX9rl1-Rl_7F/view?usp=sharing
I tried out the following command:
convert -verbose -size 1280x960 UYVY:frame.raw frame.bmp
which I found on Converting from YUV(UYVY) to RGB using imagemagick
but it doesn't do the trick

Your frame is 2457600 bytes and your pixel dimensions are 1280x960, so you have:
bits per pixel = 2457600 * 8 / (1280 * 960) = 16
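You can sanity-check that arithmetic from the shell (this assumes GNU stat, whose -c%s option prints the file size in bytes):
echo $(( $(stat -c%s frame.raw) * 8 / (1280 * 960) ))
which should print 16.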
You can get a list of the pixel formats that ffmpeg supports using:
ffmpeg -pix_fmts 2> /dev/null
Sample Output
FLAGS NAME NB_COMPONENTS BITS_PER_PIXEL
-----
IO... yuv420p 3 12
IO... yuyv422 3 16
IO... rgb24 3 24
IO... bgr24 3 24
IO... yuv422p 3 16
IO... yuv444p 3 24
IO... yuv410p 3 9
...
...
That means you can get a list of pixel formats that contain Y, U and V with 16 bits per pixel like this:
ffmpeg -pix_fmts 2> /dev/null | awk '/y/ && /u/ && /16$/ {print}'
IO... yuyv422 3 16
IO... yuv422p 3 16
IO... yuvj422p 3 16
IO... uyvy422 3 16
IO... yuv440p 3 16
IO... yuvj440p 3 16
IO... yvyu422 3 16
Now you can run a loop, iterating over all the 16-bit per pixel YUV formats and see what ffmpeg makes of your image - naming each result after the format so you can identify which is which:
ffmpeg -pix_fmts 2> /dev/null |
    awk '/y/ && /u/ && /16$/ {print $2}' |
    while read f; do
        # -nostdin stops ffmpeg swallowing the list of formats the loop is reading
        ffmpeg -nostdin -y -s:v 1280x960 -pix_fmt $f -i frame.raw $f.jpg
    done
That gives you these files:
-rw-r--r-- 1 mark staff 304916 3 Feb 09:38 yuv440p.jpg
-rw-r--r-- 1 mark staff 227123 3 Feb 09:38 yuvj422p.jpg
-rw-r--r-- 1 mark staff 39543 3 Feb 09:38 yuyv422.jpg
-rw-r--r-- 1 mark staff 39545 3 Feb 09:38 yvyu422.jpg
And I guess that yuyv422.jpg is your image, so that means you can extract it with:
ffmpeg -y -s:v 1280x960 -pix_fmt yuyv422 -i frame.raw result.jpg
If you wanted to do that with ImageMagick, you could do something like this:
#!/bin/bash
python3 <<EOF
import numpy as np
h, w = 960, 1280
# Load raw file into Numpy array
raw = np.fromfile('frame.raw', np.uint8)
raw[0::2].tofile('Y') # Starting at the 1st byte, write every 2nd byte to file "Y"
raw[1::4].tofile('U') # Starting at the 2nd byte, write every 4th byte to file "U"
raw[3::4].tofile('V') # Starting at the 3rd byte, write every 4th byte to file "V"
EOF
# Load the Y channel, then the U and V channels forcibly resizing them, then combine and go to sRGB
magick -depth 8 -size 1280x960 gray:Y \
\( -size 640x960 gray:U gray:V -resize 1280x960\! \) \
-set colorspace YUV -combine -colorspace sRGB result.jpg
If you don't like/have Python, that part can be replaced with some basic C as follows:
#include <stdint.h>
#include <stdio.h>
// Split YUYV file called "frame.raw" into separate channels with filenames "Y", "U" and "V"
// Compile with: clang -O3 splitter.c -o splitter
int main(void){
    FILE *in, *Y, *U, *V;
    uint8_t buffer[4];
    size_t bytesRead;
    // Open input file and 1 output file per channel
    in = fopen("frame.raw", "rb");
    Y = fopen("Y", "wb");
    U = fopen("U", "wb");
    V = fopen("V", "wb");
    // Read whole 4-byte YUYV groups and route each byte to its channel
    while ((bytesRead = fread(buffer, 1, sizeof(buffer), in)) == sizeof(buffer))
    {
        fputc(buffer[0], Y);
        fputc(buffer[1], U);
        fputc(buffer[2], Y);
        fputc(buffer[3], V);
    }
    fclose(in); fclose(Y); fclose(U); fclose(V);
    return 0;
}
Having had so much fun doing ffmpeg, Python, and C versions, I thought I'd try doing it purely in the shell, converting bytes to lines so that I could pick alternate lines instead of alternate bytes. This works the same as the above:
#!/bin/bash
# Build JPEG image from YUYV image with packed bytes in order YUYVYUYV...
# Use "xxd" to convert bytes into lines, then extract alternate lines - which is easier than extracting bytes
H=960
W=1280
INPUT="frame.raw"
# Take the 1st byte of every 2-byte pair (the Y samples) and put them into "Y.pgm"
xxd -c1 -p "$INPUT" | sed -n 'p;n' | xxd -r -p | magick -size ${W}x${H} -depth 8 gray:- Y.pgm
# Take the 2nd byte of every other pair, starting with the 1st pair (the U samples), resize up to full width and put into "U.pgm"
xxd -c1 -p "$INPUT" | sed -n 'n;p' | sed -n 'p;n' | xxd -r -p | magick -size $((W/2))x${H} -depth 8 gray:- -resize ${W}x${H}\! U.pgm
# Take the 2nd byte of every other pair, starting with the 2nd pair (the V samples), resize up to full width and put into "V.pgm"
xxd -c1 -p "$INPUT" | sed -n 'n;p' | sed -n 'n;p' | xxd -r -p | magick -size $((W/2))x${H} -depth 8 gray:- -resize ${W}x${H}\! V.pgm
# Load the 3 channels, combine and convert to JPEG
magick {Y,U,V}.pgm -set colorspace YUV -combine -colorspace sRGB result.jpg
# Remove litter
rm {Y,U,V}.pgm
As regards colour cast removal, as I said in the comments, the "normal" way, AFAIK, is to get the average colour of the image, invert its hue, and then blend that "negated cast" back with the original image to offset the original colour cast. Here is a crude attempt - if anyone knows better, please ping me!
Step 1: Get average colour cast
magick result.jpg -resize 1x1\! cast.png
Step 2: Invert the cast
magick cast.png -modulate 100,100,0 correction.png
Step 3: Blend the original with the correction and brighten maybe
magick result.jpg correction.png -define compose:args=50,50 -compose blend -composite -auto-level result.jpg
Here are the original and corrected versions:
Obviously you can change the percentages for different degrees of "correction".
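For example, a stronger correction might look like this (the 70,30 split is just an illustrative guess - tune it to taste):
magick result.jpg correction.png -define compose:args=70,30 -compose blend -composite -auto-level result_strong.jpg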

Related

How to suppress "jpegtopnm: WRITING PPM FILE" etc. within output of jpegtopnm [Solved]

I want to see the sizes of images within a directory. For this purpose I do
$ for file in *.jpg; do jpegtopnm $file | pnmfile; done
Then I can see
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 960 by 1280 maxval 255
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 960 by 1280 maxval 255
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 1200 by 1600 maxval 255
and so on.
I would like to see
960 by 1280
960 by 1280
1200 by 1600
.............
How can one do this?
Answer
The command jpegtopnm is part of netpbm, a package of graphics manipulation programs and libraries:
$ apt-file -l find pnmfile
netpbm
Then we must read "man netpbm":
-quiet Suppress all informational messages
Thus we solve our problem:
$ for file in *.jpg; do jpegtopnm $file -quiet | pnmfile | cut -c 16-28; done
4000 by 3000
2592 by 1944
4000 by 3000
............
About "cut -c 16-28".
This is a filter that selects characters from 16 to 28 in a string
"stdin: PPM raw, 960 by 1280 maxval 255".
If you have at your directory images with different sizes such as 4000x5000, 300x400, 2x3, 40x67 etc it won't work properly. For that reason you have to use more complicated way. It is a "cut" filter by fields(-f). The field separator will be a space character(-d ' ').
$ for file in *.jpg; do jpegtopnm $file -quiet | pnmfile | cut -d ' ' -f 3-5; done
700 by 900
65 by 40
2 by 3
7000 by 9000
4000 by 3000
............
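As an alternative (not from the original answer), you can let awk pick fields relative to the end of the line, which works however wide the dimensions are, assuming pnmfile's output always ends with "maxval N":
$ for file in *.jpg; do jpegtopnm $file -quiet | pnmfile | awk '{print $(NF-4), $(NF-3), $(NF-2)}'; done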

FFMpeg merge video and audio at specific time into another video

I have a standard mp4 (audio + video)
I am trying to merge a 1.4-second mini MP4 clip into this track at a specific time, replacing the video for the length of the mini clip but mixing the two audio tracks together.
Would anyone know how to do this using ffmpeg?
I've tried quite a few different filters, but I can't seem to get what I want.
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <--->
standard.mp4 A <=========================>
Example to show miniclip.mp4 (1.4 seconds long) at timestamp 5.
ffmpeg -i main.mp4 -i miniclip.mp4 -filter_complex "[0:v]drawbox=t=fill:enable='between(t,5,6.4)'[bg];[1:v]setpts=PTS+5/TB[fg];[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;[1:a]adelay=5s:all=1[a1];[0:a][a1]amix" output.mp4
drawbox covers the main video with black. It is only needed if miniclip.mp4 has a smaller width or height than main.mp4; you can omit it if miniclip.mp4's width and height are ≥ main.mp4's. Alternatively, you could use the scale2ref filter to make miniclip.mp4 the same width and height as main.mp4.
setpts adds a 5-second offset to the miniclip.mp4 video.
overlay overlays the miniclip.mp4 video over the main.mp4 video.
adelay adds a 5-second delay to the miniclip.mp4 audio.
amix mixes the miniclip.mp4 and main.mp4 audio.
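Since every hard-coded 5 and 6.4 in the command above derives from the insert time and the clip length, here is a hedged, parameterised sketch (T and D are placeholder shell variables, not part of the original command):
T=5; D=1.4
ffmpeg -i main.mp4 -i miniclip.mp4 -filter_complex "[0:v]drawbox=t=fill:enable='between(t,$T,$T+$D)'[bg];[1:v]setpts=PTS+$T/TB[fg];[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;[1:a]adelay=${T}s:all=1[a1];[0:a][a1]amix" output.mp4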
More info
See FFmpeg Filter Documentation for info on each filter.
How to get video duration
Edited (now I understand the question):
First, get 1.4 seconds of standard.mp4 and of audio1.mp3.
-ss sets the start point for the extracted clip, and -t sets its duration (1.4 seconds in this case). In summary: starting at minute 5, cut 1.4 seconds of video.
-an means "no audio", because you want to add the new audio from audio1.mp3.
Video only:
ffmpeg -ss 00:05:00 -i standard.mp4 -t 1.4 -map 0:v -c copy -an small_only_video.mp4
Audio only:
ffmpeg -ss 00:05:00 -i audio1.mp3 -t 1.4 -c copy small_only_audio.mp3
Now you can create small_clip_audiovideo.mp4:
ffmpeg -i small_only_video.mp4 -c:a mp3 -i small_only_audio.mp3 -c copy -map 0:v -map 1:a:0 -disposition:a:0 default -disposition:a:1 default -strict -2 -sn -dn -map_metadata -1 -map_chapters -1 -movflags faststart small_clip_audiovideo.mp4
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <------->
standard.mp4 A <=============================>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
standard.mp4 is about 10 seconds long and has audio and video.
miniclip.mp4 is about 3 seconds long and has video and audio.
Check whether the two files use the same video and audio codecs:
ffmpeg -i standard.mp4
ffmpeg -i miniclip.mp4
If standard.mp4 and miniclip.mp4 do not share the same audio and video codecs, you will need to re-encode before continuing if you want a good result.
Cut seconds 0 to 4 of standard.mp4 into 01.part_project.mp4:
ffmpeg -ss 00:00:00 -i standard.mp4 -t 4 -c copy 01.part_project.mp4
and seconds 7 to 10 into 03.part_project.mp4:
ffmpeg -ss 00:00:07.000 -i standard.mp4 -t 3.0000 -c copy 03.part_project.mp4
Rename, or create a copy of, miniclip.mp4 as 02.part_project.mp4:
cp miniclip.mp4 02.part_project.mp4
(The 4-to-7-second part of standard.mp4 will be used, as santadard_part2_audio.mp4, if you choose OPTION 2, where only its audio is needed.)
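Note that none of the commands above actually creates santadard_part2_audio.mp4. A hedged guess at the intended command, cutting seconds 4 to 7 out of standard.mp4, would be:
ffmpeg -ss 00:00:04.000 -i standard.mp4 -t 3.0 -c copy santadard_part2_audio.mp4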
NOW OPTION 1: CONCATENATE the 3 video parts.
make a folder "option1" and copy 01.part_project.mp4 02.part_project.mp4 03.part_project.mp4
mkdir option1 && cp 01.part_project.mp4 02.part_project.mp4 03.part_project.mp4 ./option1 && cd ./option1
now you concat 01.part_project.mp4 + 02.part_project.mp4 + 03.part_project.mp4 into a unique file fin_option1.mp4
ffmpeg -f concat -safe 0 -i <(for f in ./*.mp4; do echo "file '$PWD/$f'"; done) -c copy fin_option1.mp4
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <------->
standard.mp4 A <============XXXXXXXXX========>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
OPTION 2: CONCATENATE the 3 video parts, but MIX the audio of miniclip.mp4 with santadard_part2_audio.mp4.
Extract the audio stream from santadard_part2_audio.mp4, and the audio only from miniclip.mp4:
ffmpeg -i santadard_part2_audio.mp4 -map 0:a -c copy -vn -strict -2 mix_audio_santadad.mp4
ffmpeg -i miniclip.mp4 -map 0:a -c copy -vn -strict -2 mix_audio_miniclip.mp4
MIX ALL THE AUDIO** INTO ONE:
ffmpeg -i mix_audio_miniclip.mp4 -i mix_audio_santadad.mp4 -filter_complex amix=inputs=2:duration=longest -strict -2 audio_mixed_miniclip.mp4
Get only the video from miniclip.mp4:
ffmpeg -i miniclip.mp4 -c copy -an miniclip_video.mp4
and rebuild the miniclip, but with the mixed audio - I think this is the solution you are looking for:
ffmpeg -i miniclip_video.mp4 -i audio_mixed_miniclip.mp4 -c copy -map 0:v -map 1:a:0 -disposition:a:0 default -disposition:a:1 default -strict -2 -sn -dn -map_metadata -1 -map_chapters -1 -movflags faststart 02.part_project_OPTION2.mp4
santadard_part2_audio.mp4 + audio_miniclip.mp4:
V <------->
miniclip.mp4 A <MMMMMMMM> (audio miniclip mixed with standard.mp4)
V <-----------> ↓ + ↓ <------->
standard.mp4 A <============ ========>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
make a folder "option2" and copy 01.part_project.mp4 02.part_project_OPTION2.mp4 03.part_project.mp4
mkdir option2 && cp 01.part_project.mp4 02.part_project_OPTION2.mp4 03.part_project.mp4 ./option2 && cd ./option2
ffmpeg -f concat -safe 0 -i <(for f in ./*.mp4; do echo "file '$PWD/$f'"; done) -c copy fin_option2.mp4
NOTES
** You can do many more audio manipulations - see https://trac.ffmpeg.org/wiki/AudioChannelManipulation

Script is not working. What am I doing wrong?

First off I found this guide, which details exactly what I need.
https://imagemagick.org/script/connected-components.php
For the life of me I cannot get this to work. Does anyone have any idea? I've tried a bunch of variations of the scripts listed in the guide.
Also, when I run
convert /var/www/mailtovoice/audrey/sean_look_grey.png -define connected-components:verbose=true -connected-components 8 /var/www/mailtovoice/audrey/sean_look4.png
I get 1000s of objects. Even when I converted it to an image with just 3 objects, I got 100s.
Mark has the right idea, but the solution is much simpler than he posted, since ImageMagick -connected-components can do the filtering and output directly.
The command below uses Unix line endings (for Windows, use ^ rather than \):
convert image.png \
-define connected-components:area-threshold=100 \
-define connected-components:mean-color=true \
-connected-components 4 \
result.png
The method suggested by Fred (#fmw42) is far simpler and preferable to that shown in this answer, so all but die-hard enthusiasts should use Fred's answer. Rather than delete mine, I will leave it showing as it could form the basis for other more demanding/involved processing.
This is a rather funny way to do it... first find all the blobs, i.e. the connected components:
convert spotty.png -define connected-components:verbose=true -connected-components 4 null:
which gives you something like this but with 2,000+ lines:
Objects (id: bounding-box centroid area mean-color):
0: 860x482+0+0 431.5,239.7 405738 gray(0)
800: 43x263+252+219 265.9,350.5 2458 gray(255)
2: 21x226+276+0 288.9,111.2 1540 gray(255)
2216: 5x16+107+445 109.3,452.9 65 gray(255)
910: 7x15+276+228 279.0,234.5 63 gray(255)
491: 7x14+651+150 654.1,156.6 54 gray(255)
1207: 7x9+735+282 737.9,285.8 53 gray(255)
2313: 6x9+147+457 149.6,460.9 48 gray(255)
985: 8x9+754+238 757.3,242.0 48 gray(255)
...
...
Now look for all the ones with a size (second-to-last field) less than 1000 using awk and print the region:
convert spotty.png \
-define connected-components:verbose=true \
-connected-components 4 null: |
awk -v thresh=1000 'NR>1 && $(NF-1)<thresh{print " -region " $2 " -colorize 100%"}'
Output
-region 5x16+107+445 -colorize 100%
-region 7x15+276+228 -colorize 100%
-region 7x14+651+150 -colorize 100%
-region 7x9+735+282 -colorize 100%
...
...
Now reload the original image, set the fill colour for colorised regions to red and regenerate the list of regions to be filled exactly as above:
convert spotty.png -fill red $(convert spotty.png -define connected-components:verbose=true -connected-components 4 null: | awk -v thresh=1000 'NR>1 && $(NF-1)<thresh{print " -region " $2 " -colorize 100%"}' ) result.png
The command generated boils down to:
convert spotty.png -threshold 50% -fill red \
-region 56x16+107+445 -colorize 100% \
-region 70x15+276+228 -colorize 100% \
-region ... -colorize 100% \
...
...
result.png

How to Convert a 24bit WAV file to 32bit while keeping Audio Format PCM = 1 (linear quantization)

Refer Here for more context to my question: https://gamedev.stackexchange.com/questions/136817/how-to-get-sdl2-to-play-32bit-wav-files
I have a 24bit WAV file that has an Audio Format PCM of 1, refer here: http://soundfile.sapp.org/doc/WaveFormat/ to AudioFormat
When converting my WAV file (24bit) to 16 bit using: ffmpeg -i input.wav -ar 48000 -ac 2 -acodec pcm_s16le output.wav it retains the Audio Format PCM = 0x001.
When using ffmpeg -i input.wav -ar 48000 -ac 2 -acodec pcm_s32le output.wav the Audio Format PCM = 0xfffe.
SDL2 (as seen in the parent question) only plays files with linear PCM audio format (1), and I am unsure how, using sox or ffmpeg, to convert my 24-bit WAV files up to 32-bit (as SDL2 only plays 32-bit and 16-bit).
Is what I'm asking possible? Some more information on WAV files and why ffmpeg changes the header number would be greatly appreciated.
FFmpeg uses the following code to set the codec tag
...
waveformatextensible = (par->channels > 2 && par->channel_layout) ||
par->channels == 1 && par->channel_layout && par->channel_layout != AV_CH_LAYOUT_MONO ||
par->channels == 2 && par->channel_layout && par->channel_layout != AV_CH_LAYOUT_STEREO ||
par->sample_rate > 48000 ||
par->codec_id == AV_CODEC_ID_EAC3 ||
av_get_bits_per_sample(par->codec_id) > 16;
if (waveformatextensible)
avio_wl16(pb, 0xfffe);
...
Your 32-bit file trips the last condition (av_get_bits_per_sample returns 32, which is > 16), so FFmpeg writes the WAVEFORMATEXTENSIBLE tag 0xfffe. A crude attempt would be to just replace the 2 bytes of the AudioFormat field, at offset 0x14 in a canonical WAV header, with 01 00 and try. If that doesn't work, and this behaviour is out-of-spec, then file a bug report.
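For example, a minimal sketch (assuming a canonical header layout where the AudioFormat field sits at byte offset 20):
printf '\x01\x00' | dd of=output.wav bs=1 seek=20 conv=notrunc
The rest of the extensible header is left in place, so treat this as a crude experiment rather than a proper fix.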

How do multiple amplitude fades work on ecasound?

I want to fade a track in and out at specific time codes. For example, I would like to take an audio file, and:
Start it at 100% Volume
Fade it to 20% at 2 seconds
Fade it to 100% at 4 seconds
Fade it to 20% at 6 seconds
Fade it to 100% at 8 seconds
Fade it to 20% at 10 seconds
Fade it to 100% at 12 seconds
Fade it to 0 at 14 seconds
I've been testing this with a constant tone generated by ecasound so that I can open the resulting file in Audacity and see the results visually. As far as I can tell, increasing the amplitude is relative, while decreasing it is not. It seems that if I fade the amplitude up, it affects the relative volume of the whole track and not just at the specific time I set the fade, which is where I'm getting lost.
Example commands
# generate the tone
ecasound -i tone,sine,880,20 -o:tone.wav
# Just a test to see that I can start it at 100 and fade it to 20.
ecasound -a:1 -i tone.wav -ea:100 -kl2:1,100,20,2,1 -a:all -o:test_1.mp3
# Fade it out and in
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,100,20,2,1 \
-ea:100 -kl2:1,20,100,4,1 \
-a:all -o:test_2.mp3
# Fade it out and in with a peak of 500
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,100,20,2,1 \
-ea:100 -kl2:1,20,500,4,1 \
-a:all -o:test_3.mp3
# Fade it out from 500, out, and then back to 500
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,500,20,2,1 \
-ea:100 -kl2:1,20,500,4,1 \
-a:all -o:test_4.mp3
# Fade it out from 500, out to a low of 10, and then back to 500
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,500,10,2,1 \
-ea:100 -kl2:1,10,500,4,1 \
-a:all -o:test_5.mp3
# Fade it out from 1000, out to a low of 10, and then back to 1000
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,1000,10,2,1 \
-ea:100 -kl2:1,10,1000,4,1 \
-a:all -o:test_6.mp3
# The eventual result I'm looking for
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,500,20,2,1 \
-ea:100 -kl2:1,20,500,4,1 \
-ea:100 -kl2:1,500,20,6,1 \
-ea:100 -kl2:1,20,500,8,1 \
-ea:100 -kl2:1,500,20,10,1 \
-ea:100 -kl2:1,20,500,12,1 \
-ea:100 -kl2:1,500,0,14,4 \
-a:all -o:test_7.mp3
The Results
The best I can tell from these results is that the amplitude of the whole track is relative to the difference between the low and the peak of all the fading effects. I'm not sure if this result is expected, but it's very confusing.
Also, in the last result (second to last in the image), the fades are no longer taking a full second each. In order to figure out why that may be, I took the final fade-to-zero off and the durations were back to normal. This does not seem like expected behavior.
# "Fixing" the fade durations
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,500,20,2,1 \
-ea:100 -kl2:1,20,500,4,1 \
-ea:100 -kl2:1,500,20,6,1 \
-ea:100 -kl2:1,20,500,8,1 \
-ea:100 -kl2:1,500,20,10,1 \
-ea:100 -kl2:1,20,500,12,1 \
-a:all -o:test_8.mp3
As a side note, I've also tried changing the -ea values to the "current" amplitude on every line. It didn't make any difference, no matter what I set -ea to.
I have the very latest installed from git (2.8.1+dev). I had these same issues with 2.7.0, which is why I upgraded and eventually found myself here.
Am I doing this wrong?
-kl2
After a few hours of head scratching, I finally think I have it figured out. The "From" amplitude on every fade needs to be 100. If you are increasing the amplitude, the "To" amplitude is maximum / from * to.
So if you're trying to go from 20 to 100, it's 100 / 20 * 100 or 500. If you're trying to get to 120: 100 / 20 * 120 or 600. I assume this all makes perfect sense to someone, but I was perfectly stumped.
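If it helps, here's a tiny shell helper (my own sketch, not part of ecasound) that computes the compensated "To" value:
kl2_to() { awk -v cur="$1" -v tgt="$2" 'BEGIN { printf "%.2f\n", 100 / cur * tgt }'; }
kl2_to 20 100   # 500.00, i.e. from 20%, to get back to 100%
kl2_to 20 120   # 600.00, i.e. from 20%, to get to 120%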
The working example (with a slightly higher bottom range in the middle to demonstrate):
ecasound -a:1 -i tone.wav \
-ea:100 -kl2:1,100,20,2,1 \
-ea:100 -kl2:1,100,500,4,1 \
-ea:100 -kl2:1,100,40,6,1 \
-ea:100 -kl2:1,100,250,8,1 \
-ea:100 -kl2:1,100,20,10,1 \
-ea:100 -kl2:1,100,500,12,1 \
-ea:100 -kl2:1,100,0,14,1 \
-a:all -o:test_7.mp3
And the output:
Keep in mind that these amplitudes are still relative. If you're going from 45% to 90%: 100 / 45 * 90 = 200, and if you then drop to 20% of the current amplitude, you're actually at 18% (0.20 * 90), so going back to 100 would be 100 / 18 * 100 = 555.56.
-klg
Just as I figured this out and came here to post, I received a response from the ecasound mailing list. It's not a direct answer to the kl2 issue, but it offers an alternative, easier-on-the-brain approach: the klg parameter.
-klg:fx-param,low-value,high-value,point_count,pos1,value1,...,posN,valueN
Generic linear envelope. This controller source can be used to map
custom envelopes to chain operator parameters. Number of envelope
points is specified in 'point_count'. Each envelope point consists of
a position and a matching value. Number of pairs must match
'point_count' (i.e. 'N==point_count'). The 'posX' parameters are given
as seconds (from start of the stream). The envelope points are
specified as float values in range '[0,1]'. Before envelope values are
mapped to operator parameters, they are mapped to the target range of
'[low-value,high-value]'. E.g. a value of '0' will set operator
parameter to 'low-value' and a value of '1' will set it to
'high-value'. For the initial segment '[0,pos1]', the envelope will
output value of 'value1' (e.g. 'low-value').
Here's the command to do what I need using klg instead of kl2:
ecasound -a:1 -i:tone.wav -ea:100 \
-klg:1,0,100,14,2,1,3,0.20,4,0.20,5,1,6,1,7,0.40,8,0.40,9,1,10,1,11,0.20,12,0.20,13,1,14,1,15,0 \
-o:test.mp3
The output is exactly the same as the 2nd track on the image.
This resulting command line is definitely a bit harder to read, and hence debug, but may actually be easier to generate dynamically. Regardless, I now have 2 working options to solve this problem.
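As a hedged sketch of what "generate dynamically" might look like in the shell (the points variable simply re-encodes the envelope from the command above as position,value pairs):
points="2,1 3,0.20 4,0.20 5,1 6,1 7,0.40 8,0.40 9,1 10,1 11,0.20 12,0.20 13,1 14,1 15,0"
klg="-klg:1,0,100,$(echo $points | wc -w),$(echo $points | tr ' ' ',')"
echo "$klg"    # -klg:1,0,100,14,2,1,3,0.20,...,15,0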
And finally, here are my notes for how I figured out the coordinates of the klg command. The asterisks are the "points" which are listed in the klg parameter, the numbers at the top are seconds:
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2
1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0
1.0 --* *-* *-* *-*
~ \ / \._./ \ / \
0.2 *-* *-* \
0.0 *----------
I hope this helps someone save at least some of the hair that I've lost scratching my head.
