I am able to upload the AMR file to SIM800C successfully.
When I play the uploaded audio file during a call using the command below:
#if CALL_RECORDED_AUDIO
  // "Command Media PLAY" -> play an audio file recorded during a call (stored under C:\REC\)
  Serial1.print("AT+CMEDPLAY=1,C:\\REC\\");
#else
  // "Command Media PLAY" -> play an uploaded audio file (stored under C:\User\)
  Serial1.print("AT+CMEDPLAY=1,C:\\User\\");
#endif
Audio played from C:\User\ always has noise. However, if I record the audio during a call, save it, and play that recording during the next call, there is no noise (by defining CALL_RECORDED_AUDIO in the snippet above).
According to the SIM800 documentation, audio played during a call must be a WAV file:
Note
- Mode 2 and 3 are not supported when playing an audio file during a call.
- The audio file can not be played during an incoming or outgoing call.
- Only WAV, PCM, AMR and MP3 formats are supported.
- Only WAV format with 8 kHz, 16-bit is supported during a call.
(page 201/202 of the SIM800 AT command guide)
Personally I could not test this, as I do not have a SIM800 at hand.
I think the file you play during a call has to be in .WAV format (8 kHz, 16-bit), which would explain why the call recording plays back cleanly while the uploaded AMR is noisy. Try converting the file you upload, for instance as sketched below, and let me know if it works.
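For example, something like this ffmpeg conversion (ffmpeg assumed to be available; input.amr and upload.wav are placeholder names, and mono is my assumption, since the manual only states 8 kHz, 16-bit):
ffmpeg -i input.amr -ar 8000 -ac 1 -acodec pcm_s16le upload.wav
Then upload upload.wav to C:\User\ and play that instead of the AMR file.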
Update:
I have a video player in the browser that plays MP4 video over a WebSocket. The player only supports MP4 files. When I checked, normal MP4 files do not play in it; only an MP4 file created with "-movflags faststart" plays in that player. For an already stored file this works properly.
But in the case of a livestream (RTSP), using ffmpeg only works once the RTSP connection has terminated, since "-movflags faststart" only takes effect after the connection has terminated properly.
I hope the above statements make more sense.
Because of this behavior, I am checking whether there is any way to get the moov atom at the start of the file, or something similar.
I have an RTSP live source and I need to convert the RTSP stream into an MP4 file that has the moov atom at the beginning of the file.
I have tried openRTSP to take an MP4 dump of the RTSP stream, but it only adds the moov atom and other info to the footer of the MP4 (and only once openRTSP has closed the RTSP stream).
ffmpeg has "-movflags faststart" to move the footer info to the header of the MP4 container.
Since I have an RTSP live source, the video data keeps coming back to back and there is no termination, and the above ffmpeg option only works once the RTSP stream has terminated.
Is there any way to make an MP4 container that has the footer info present in the header itself, so that I can use it for a live source?
EDIT #1
I have a video player which plays MP4 video files; it only supports playback of a recorded MP4 file created using "-movflags faststart". Normal MP4 files do not play in it.
This is the player
https://github.com/sonysuqin/WasmVideoPlayer.
Since I am trying to stream live video to the player, it is not possible to use "-movflags faststart".
The MP4 header cannot be added to the file before the file is complete. It is not possible because of how MP4 files are structured: the header needs to know the frame type, timestamp, size, and file offset of every frame in the file, and that cannot be known until the file is complete. You cannot stream an MP4 while it is being created. You need to use a protocol such as HLS or DASH to accomplish this.
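As a rough sketch of the HLS route (ffmpeg assumed; the rtsp:// URL and the segment settings are only illustrative):
ffmpeg -rtsp_transport tcp -i rtsp://camera/stream -c copy -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments live.m3u8
A player that understands HLS then loads live.m3u8 instead of a single growing MP4 file.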
I am trying to stream my video file using VLC. I choose the HTTP transfer protocol and the MP4 encoder (H.264 + MP3 (MP4)), and VLC automatically generates the following command-line preset:
:sout=#transcode{vcodec=h264,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ffmpeg{mux=flv},dst=:8080/} :sout-all :sout-keep
Streaming works great, but there is no audio.
I launched it on my PC on localhost and on the local network on Windows, and got no results. If I change the encoder to H.264 + MP3 (TS):
:sout=#transcode{vcodec=h264,vb=800,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:9000/}
If I change the transfer protocol to RTSP (or RTP), the sound plays with any type of encoder. For example:
:sout=#transcode{vcodec=h264,scale=auto,acodec=mpga,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:9000/test} :sout-all :sout-keep
Why doesn't the sound play with the H.264 + MP3 (MP4) encoder?
Streaming works great but no audio sound...
Try using acodec=mp3 or acodec=aac since they're supported formats for FLV containers.
example:
:sout=#transcode{vcodec=h264,acodec=mp3,ab=128,channels=2,samplerate=44100}:http{mux=ffmpeg{mux=flv},dst=:8080/} :sout-all :sout-keep
I'm trying to convert DVD iso files to mp4 using HandbrakeCLI. I use the following line in a batch file:
D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width "720" --height "480" --preset "HQ 480p30 Surround" --encoder "mpeg2" --audio-lang-list "eng"
When I do this, I must then extract the audio from the file, using the following line:
D:\eac3to\eac3to.exe "D:\output.mp4" "D:\output.wavs" -down16
However, when I attempt to extract the audio, I get the error message
The format of the source file could not be detected.
Is there anything wrong with my former line of code that's causing the mp4 to get screwed up?
Minor side question: I'm also trying to get HandBrake to remove subtitles and keep only English audio. Do you know what options could be used for that? I started there with --audio-lang-list "eng", but I'm not sure what to do from there.
Thanks a lot in advance!
You need to use a valid audio format; .wavs is not valid. For --aencoder you have to use one of the available audio codecs listed below. The default output audio codec for MP4 is AAC.
av_aac
copy:aac
ac3
copy:ac3
eac3
copy:eac3
copy:truehd
copy:dts
copy:dtshd
mp3
copy:mp3
vorbis
flac16
flac24
copy:flac
opus
copy
Defaults for audio
av_mp4 = av_aac
av_mkv = mp3
You need to pass none for no subtitles:
-s none
And keep only the eng audio track, like you were already doing:
--audio-lang-list eng
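Putting that together with the command from your question, a sketch could look like this (av_aac is just one valid choice from the list above):
D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width "720" --height "480" --preset "HQ 480p30 Surround" --encoder "mpeg2" --aencoder av_aac --audio-lang-list eng -s none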
Check out the HandBrake CLI documentation for the command-line options:
https://handbrake.fr/docs/en/latest/cli/cli-guide.html
Once you have extracted the audio, you can also try a different program, such as XMedia Recode. It can remux audio and video and convert other audio formats to WAV:
https://www.videohelp.com/software/XMedia-Recode
I am new to FFmpeg. I have spent more than 10 days looking for a way to mux audio and video into MP4 format in a stream buffer rather than in a file.
What I want is to mux MP4 audio & video into a stream.
I am able to mux MP4 format into a file, but not into a stream buffer.
So far I have tried this:
avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size, 1, &bd, NULL, &write_packet, NULL);
By calling avio_alloc_context and passing a reference to the write_packet function, I do get write_packet called. But when I write the data arriving in write_packet to a file and build an MP4 file from it, the resulting MP4 does not work: MediaInfo shows no video or audio information for the file.
The header is written, then the packet loop, and finally the trailer, but the final file still does not play.
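For reference, the wiring I describe above looks roughly like this (buffer_data, its fields and the callback body are only illustrative; error handling is omitted):
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

struct buffer_data {
    uint8_t *buf;   /* growable buffer that collects the muxed MP4 bytes */
    size_t   size;  /* number of bytes collected so far */
};

/* Called by libavformat whenever the muxer has bytes to output.
   (Recent FFmpeg versions declare buf as const uint8_t *.) */
static int write_packet(void *opaque, uint8_t *buf, int buf_size)
{
    struct buffer_data *bd = (struct buffer_data *)opaque;
    uint8_t *tmp = av_realloc(bd->buf, bd->size + buf_size);
    if (!tmp)
        return AVERROR(ENOMEM);
    bd->buf = tmp;
    memcpy(bd->buf + bd->size, buf, buf_size);
    bd->size += buf_size;
    return buf_size;
}

static AVIOContext *make_stream_avio(struct buffer_data *bd)
{
    size_t avio_ctx_buffer_size = 4096;
    uint8_t *avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
    /* write_flag = 1, opaque = bd, no read callback, no seek callback */
    return avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
                              1, bd, NULL, write_packet, NULL);
}
/* The returned context is assigned to the muxer's AVFormatContext->pb
   before avformat_write_header(). */
I suspect the plain MP4 muxer wants a seekable output, so with the seek callback left as NULL it may be necessary to pass fragmented-MP4 options (e.g. movflags = frag_keyframe+empty_moov in the dictionary given to avformat_write_header), but I am not sure.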
Is there a good way to mux into MP4 format in a stream? If so, please tell me.
Kindly help me to do this.
Thanks in advance.
I have an application for iPad.
This application records voice from the microphone.
The audio formats required are PCM, MP3 and WAV. I get the MP3 file by starting from the original raw file and then converting it with LAME.
Unfortunately I have not found any example that allows me to convert a PCM file to a WAV file.
I just noticed that if I simply set the file extension to .wav, the app saves the raw data without problems, so I think there is no real type conversion from PCM to WAV. Correct?
PS: Sorry for my english ... I use Google Translate
WAV is some kind of a box. PCM is in the box. There are many container formats like MP4. MP4 can contain audio, video or both. It can also contain multiple video or audio streams. Or zip files. Zip files can contain text files. But zip files can also contain images, pdfs,... But you can't say "how can I convert a zip file to the text file inside the zip".
If you want to convert PCM data to a WAVE file, you should not have many problems, because WAV files are quite simple. Take a look at this:
(See also WAVE PCM soundfile format.)
You first need that header, and after it you can just append all your PCM data (see the data field).
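For instance, a minimal sketch in C (the sample rate, channel count and bit depth are parameters you have to fill in from your recording settings; this assumes 16-bit little-endian PCM):
#include <stdio.h>
#include <stdint.h>

static void write_le32(FILE *f, uint32_t v) { fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f); fputc((v >> 16) & 0xff, f); fputc((v >> 24) & 0xff, f); }
static void write_le16(FILE *f, uint16_t v) { fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f); }

/* Writes the canonical 44-byte RIFF/WAVE header for uncompressed PCM. */
void write_wav_header(FILE *f, uint32_t data_size, uint32_t sample_rate,
                      uint16_t channels, uint16_t bits_per_sample)
{
    uint16_t block_align = channels * bits_per_sample / 8;
    uint32_t byte_rate   = sample_rate * block_align;

    fwrite("RIFF", 1, 4, f);
    write_le32(f, 36 + data_size);   /* RIFF chunk size = file size - 8 */
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f);
    write_le32(f, 16);               /* fmt chunk size for plain PCM */
    write_le16(f, 1);                /* audio format 1 = linear PCM */
    write_le16(f, channels);
    write_le32(f, sample_rate);
    write_le32(f, byte_rate);
    write_le16(f, block_align);
    write_le16(f, bits_per_sample);
    fwrite("data", 1, 4, f);
    write_le32(f, data_size);        /* the raw PCM bytes follow immediately */
}
After calling write_wav_header(f, pcm_size, 44100, 1, 16) (or whatever your settings are), just fwrite the unchanged PCM bytes to the same file.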
Converting PCM to WAV isn't too hard. Both formats contain raw PCM data; the only difference is the header (WAV has a header, raw PCM doesn't). So just adding a WAV header does the trick: take the PCM data and put the WAV header in front of it. To add a WAV header to PCM data, check this link.
I was working on a system that accepts only WAV files, but the audio I was receiving from Amazon Polly was PCM, so I finally did this and got my issue resolved. Hope it helps someone. This is a Node.js example.
// https://github.com/TooTallNate/node-wav
const FileWriter = require('wav').FileWriter;
const Stream = require('stream'); // needed by bufferToStream below

// res.AudioStream is the PCM buffer returned by Polly;
// outputFileFolder and outputFileName come from the surrounding code.
const audioStream = bufferToStream(res.AudioStream);
const outputFileStream = new FileWriter(`${outputFileFolder}/wav/${outputFileName}.wav`, {
  sampleRate: 8000,
  channels: 1
});
audioStream.pipe(outputFileStream);

// Wrap a Buffer in a readable stream so it can be piped into the WAV FileWriter.
function bufferToStream(binary) {
  const readableInstanceStream = new Stream.Readable({
    read() {
      this.push(binary);
      this.push(null);
    }
  });
  return readableInstanceStream;
}