Convert video from Hi8-Lp format - audio

Some video was recorded with a camera in Hi8 LP format and was then transferred/encoded to the mpeg2video codec. I have this decoded video, but it does not play at the correct speed (both video and audio are too fast, as if on fast playback), and it has longitudinal lines in the picture (you can see the sample).
sample video
How can I convert the video so that it plays at the correct speed?
Thanks for any help.

You can use the application "DVDSanta":
http://www.topvideopro.com/burn-dvd/8mm-to-dvd.htm
I hope this answer helps you...

Use WinDV if you are on Windows, then convert the capture with ffmpeg (or StaxRip, MeGUI, Handbrake) to your preferred format; a sketch of an ffmpeg command is shown below.
On Mac you could use iMovie.
On Linux you could use xawtv (I haven't tried that one).
Do not encode the video while transferring it from the camera; do that afterwards.
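
For the ffmpeg step, a minimal sketch might look like the following. The input file name, codecs and especially the speed factors are assumptions; a capture that plays back too fast needs the factors tuned to your own material:

    # Slow the video down 2x and the audio to half tempo, then re-encode.
    # 2.0 / 0.5 are placeholders - measure your actual speed error first.
    ffmpeg -i capture.avi \
      -filter:v "setpts=2.0*PTS" \
      -filter:a "atempo=0.5" \
      -c:v libx264 -crf 18 -c:a aac \
      output.mp4

The setpts and atempo filters change the video timestamps and the audio tempo respectively, which is what corrects a constant playback-speed error.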

Related

How to read audio and video packets from mp4 file

I am trying to write code in C/C++ (Objective-C) to parse the audio and video data from an MP4 file.
I know that the data in an MP4 file is contained inside MP4 atoms, but I am not sure how I can parse out the audio and video data separately.
Thanks in advance for any help.
The MP4 format is fairly complicated, so I suggest you use a library. But if you can't use a library, or just want to learn the format, then you must parse about a dozen boxes (atoms) under the root moov box. The information found there can be used to locate the frames inside the mdat atom. The full specification is ISO/IEC 14496-12; you should be able to find a copy online. A sketch of walking the top-level boxes is shown below.
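
To give an idea of the box structure (a 4-byte big-endian size followed by a 4-byte type, where size == 1 means a 64-bit size follows and size == 0 means the box runs to end of file), here is a minimal standalone sketch that only lists the top-level boxes; a real parser would recurse into moov, trak, stbl and so on:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint64_t read_be32(const unsigned char *p)
    {
        return ((uint64_t)p[0] << 24) | ((uint64_t)p[1] << 16) |
               ((uint64_t)p[2] << 8)  |  (uint64_t)p[3];
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.mp4\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char hdr[8];
        while (fread(hdr, 1, 8, f) == 8) {
            uint64_t size = read_be32(hdr);
            char type[5] = {0};
            memcpy(type, hdr + 4, 4);

            if (size == 1) {                     /* 64-bit "largesize" follows */
                unsigned char big[8];
                if (fread(big, 1, 8, f) != 8) break;
                size = (read_be32(big) << 32) | read_be32(big + 4);
                printf("%-4s %llu bytes\n", type, (unsigned long long)size);
                if (fseek(f, (long)(size - 16), SEEK_CUR) != 0) break;
            } else if (size == 0) {              /* box extends to end of file */
                printf("%-4s (to end of file)\n", type);
                break;
            } else {
                printf("%-4s %llu bytes\n", type, (unsigned long long)size);
                if (fseek(f, (long)(size - 8), SEEK_CUR) != 0) break;
            }
        }
        fclose(f);
        return 0;
    }

On a typical file this prints something like ftyp, moov, free and mdat; the audio and video sample offsets you are after live in the tables under moov/trak/mdia/minf/stbl.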

ffmpeg - Can I draw an audio channel as an image?

I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines):
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams, and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of that document) and obtain the raw audio data. That is the waveform itself, the analogue "bitmap" of the audio.
You could also study this sample. The second example shows how to write a video/audio file. You don't want to write any video, but the sample makes it easy to understand how the raw audio data packets work; look at the functions get_audio_frame() and write_audio_frame().
You need to have some knowledge about creating a bitmap. Any platform has an easy way to do that.
So, the answer for you: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
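
To make the bitmap part concrete, here is a minimal sketch (nothing here comes from the linked samples; the image size and file names are arbitrary assumptions) that draws a mono 16-bit PCM buffer as a waveform into a grayscale PGM image. It generates a synthetic sine wave so it runs standalone, but in practice you would feed it the samples you decoded:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <math.h>

    #define IMG_W 1024
    #define IMG_H 200

    /* Draw one sample per image column as a vertical line from the centre. */
    static void draw_waveform(const int16_t *samples, size_t n, const char *path)
    {
        unsigned char *img = malloc((size_t)IMG_W * IMG_H);
        memset(img, 255, (size_t)IMG_W * IMG_H);        /* white background */

        for (int x = 0; x < IMG_W; x++) {
            int16_t s = samples[(size_t)x * n / IMG_W]; /* crude downsampling */
            int y = IMG_H / 2 - (s * (IMG_H / 2)) / 32768;
            if (y < 0) y = 0;
            if (y > IMG_H - 1) y = IMG_H - 1;

            int y0 = y < IMG_H / 2 ? y : IMG_H / 2;
            int y1 = y < IMG_H / 2 ? IMG_H / 2 : y;
            for (int yy = y0; yy <= y1; yy++)
                img[yy * IMG_W + x] = 0;                /* black waveform */
        }

        FILE *f = fopen(path, "wb");
        fprintf(f, "P5\n%d %d\n255\n", IMG_W, IMG_H);   /* binary PGM header */
        fwrite(img, 1, (size_t)IMG_W * IMG_H, f);
        fclose(f);
        free(img);
    }

    int main(void)
    {
        enum { N = 48000 };                             /* one second at 48 kHz */
        static int16_t samples[N];
        const double PI = 3.14159265358979323846;
        for (int i = 0; i < N; i++)
            samples[i] = (int16_t)(20000.0 * sin(2.0 * PI * 440.0 * i / 48000.0));
        draw_waveform(samples, N, "waveform.pgm");
        return 0;
    }

(Compile with -lm for the sin() call.) The part that matters for your case is only draw_waveform(); swapping the sine wave for decoded FFmpeg samples is the exercise described above.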
UPDATE:
Sorry, there are ALSO built-in filters for this. You could use one of these:
showspectrum, showwaves, avectorscope
Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
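
For instance, assuming a reasonably recent ffmpeg build (the filter names are real, but the sizes and file names below are only placeholders), the related showwavespic filter renders a single waveform image, while showwaves produces a video of the waveform:

    # one static waveform image
    ffmpeg -i input.mp4 -filter_complex "showwavespic=s=1024x200" -frames:v 1 waveform.png

    # a video of the scrolling waveform
    ffmpeg -i input.mp4 -filter_complex "showwaves=s=1024x200:mode=line" -c:v libx264 waves.mp4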

FFMPEG and MP3. How to decode

How do I decode MP3 audio using ffmpeg (using the API)? If it is not too complicated, could you give example code?
P.S. I managed to open the file and find the audio streams; after that I do not understand what to do ...
It's worth going all the way through the tutorial, but this part of the tutorial deals with decoding audio (although I am currently having a problem with it, since avcodec_decode_audio3() is the updated version of avcodec_decode_audio2()).
Hope it helps,
Infinitifizz
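
Note that avcodec_decode_audio2()/avcodec_decode_audio3() from that tutorial have since been removed from FFmpeg; current versions use the send-packet/receive-frame API instead. A minimal sketch against recent FFmpeg headers (error handling trimmed, so treat it as an outline rather than production code):

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;

        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) return 1;
        avformat_find_stream_info(fmt, NULL);

        /* pick the best audio stream and open a decoder for it */
        int stream = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
        const AVCodec *codec =
            avcodec_find_decoder(fmt->streams[stream]->codecpar->codec_id);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(ctx, fmt->streams[stream]->codecpar);
        avcodec_open2(ctx, codec, NULL);

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        /* demux packets, feed them to the decoder, pull out PCM frames */
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == stream) {
                avcodec_send_packet(ctx, pkt);
                while (avcodec_receive_frame(ctx, frame) == 0) {
                    /* frame->data[] now holds decoded samples:
                       frame->nb_samples per channel, format ctx->sample_fmt */
                    printf("decoded %d samples\n", frame->nb_samples);
                }
            }
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }

Link against libavformat, libavcodec and libavutil. For MP3 the decoded frames are typically planar float (AV_SAMPLE_FMT_FLTP), so convert with libswresample if your output path needs interleaved 16-bit PCM.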

Decode G711(PCM u-law)

Please bear with me as my understanding of audio codecs is limited.
I have this audio source from an IP camera (through an http://... CGI interface).
I am trying to write several client programs to play this audio source on Windows, Mac, and Android phones. The audio is encoded as G.711 (PCM u-law).
Do I need to decode the PCM audio data to raw audio data before I can pass it to the audio engine for playback? If so, is there some sample code showing how to decode it?
I am confused, because I thought PCM was already raw. Could I just feed it directly to the audio engine on Android, for example?
Thanks much in advance.
It depends on what API you are using to play sound, but most require linear PCM and you have µ-law PCM, so unless your API supports µ-law playback you will need to convert the µ-law sample values to linear.
With G.711 the compressed µ-law samples are 8 bits and these will be converted to 14 bit linear values which you will store in a buffer as 2 bytes per sample. There is a brief description of the µ-law encoding on the G.711 Wikipedia page.
You may find this useful:
u-Law companding algorithm in C
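
For reference, the widely used table-free expansion looks roughly like this; it is a sketch of the standard G.711 u-law decode step, and the function names are only illustrative:

    #include <stddef.h>
    #include <stdint.h>

    /* Expand one 8-bit u-law sample to a 16-bit linear PCM sample. */
    static int16_t ulaw_to_linear(uint8_t u_val)
    {
        u_val = ~u_val;                          /* u-law bytes are stored inverted */
        int t = ((u_val & 0x0F) << 3) + 0x84;    /* mantissa plus bias (132) */
        t <<= (u_val & 0x70) >> 4;               /* apply the 3-bit exponent */
        return (int16_t)((u_val & 0x80) ? (0x84 - t) : (t - 0x84));
    }

    /* Expand a whole buffer, one output sample per input byte. */
    static void ulaw_expand_buffer(const uint8_t *in, int16_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = ulaw_to_linear(in[i]);
    }

The results fit comfortably in 16 bits (roughly +/-32124), so you can hand the expanded buffer to any API that accepts 16-bit signed linear PCM, e.g. AudioTrack on Android.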

Saving V4L2 video camera output

What video format would be easiest to save the output of a camera to, using V4L2, if I capture it in bitmap form? Getting MPEG directly would of course be nice, but unfortunately I can't count on that.
I have managed to capture the frames; now I need to view the video somehow. Can I simply convert those frames using some Linux tool, or could I easily save the video straight from my app?
To keep things simple (as in a Proof-of-Concept demo), you can go ahead and directly store the YUV frames captured from the device into a file.
There are a bunch of viewers that support playback of single/multiple frame(s) of YUV data from a file.
One such YUV viewer is freecode.com/projects/yay
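
If you later want to turn such a raw dump into an ordinary video file, an ffmpeg invocation along these lines could work; the pixel format, frame size and frame rate below are assumptions you would have to match to your actual capture settings:

    ffmpeg -f rawvideo -pixel_format yuyv422 -video_size 640x480 \
           -framerate 25 -i capture.yuv -c:v libx264 output.mp4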
You could use practically any format/codec if you used mencoder or ffmpeg
Btw, this question really should be on superuser.com
If you are already capturing frames, you could save them as PPM images and then convert them to JPEG. I did this using v4l2 and ImageMagick. Maybe you could push the JPEGs into a Motion JPEG stream. It might not be as high-tech as MPEG, but you might get it working quickly. PPM files are a cinch to create (a minimal writer is sketched below); if I remember correctly, the v4l2 example code shows you how to do that part.
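
For completeness, writing a binary PPM really is only a header plus raw pixels; this sketch assumes you already have (or have converted to) packed RGB24 data:

    #include <stdio.h>

    /* Write one RGB24 frame (3 bytes per pixel, row-major) as a binary PPM. */
    static int write_ppm(const char *path, const unsigned char *rgb,
                         int width, int height)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        fprintf(f, "P6\n%d %d\n255\n", width, height);
        fwrite(rgb, 3, (size_t)width * height, f);
        fclose(f);
        return 0;
    }

A directory of such frames can then be fed to ImageMagick, or to ffmpeg as an image sequence.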
