I want to merge two audio files and produce one final file. For example, if file1 is 5 minutes long and file2 is 4 minutes long, I want the result to be a single 5-minute file, because both files will start from 0:00 and run together (i.e., overlapping).
You can use the APIs in the Windows.Media.Audio namespace to create audio graphs for audio routing, mixing, and processing scenarios. For how to create audio graphs, please see this article.
An audio graph is a set of interconnected audio nodes. The two audio files you want to merge feed two audio input nodes, and an audio output node is the destination for the audio processed by the graph.
Scenario 4 (Submix) of the official AudioCreation sample provides exactly the feature you want: give it two files and it outputs the mixed audio. Just change the output node to an AudioFileOutputNode to save to a new file, since the sample creates an AudioDeviceOutputNode for playback.
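If you are not tied to the UWP APIs, the same start-at-0:00 overlay mix can also be sketched with FFmpeg's amix filter driven from Python. A minimal sketch, assuming ffmpeg is on PATH; the file names are placeholders:

import subprocess

# Both inputs start at 0:00 and are mixed together.
# duration=longest keeps the full 5 minutes when inputs are 5 and 4 minutes long.
subprocess.run([
    "ffmpeg",
    "-i", "file1.mp3",
    "-i", "file2.mp3",
    "-filter_complex", "amix=inputs=2:duration=longest",
    "merged.mp3",
], check=True)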
I'm saving sensor data at 64 samples per second into a CSV file. The file is about 150 MB at the end of 24 hours. It takes a bit longer than I'd like to process, and I need to do some processing in real time.
value = str(milivolts)
logFile.write(str(datet) + ',' + value + "\n")
So I end up with single lines of date and millivolts, up to 150 MB. At the end of 24 hours it makes a new file and starts saving to it.
I'd like to know if there is a better way to do this. I have searched but can't find any good information on compression to use while saving sensor data. Is there a way to compress while streaming/saving? What format is best for this?
While saving the sensor data, is there an easy way to split it into x megabyte files without data gaps?
Thanks for any input.
I'd like to know if there is a better way to do this.
One of the simplest ways is to use a logging framework; it will allow you to configure which compressor to use (if any), the approximate size of a file, and when to rotate logs. You could start with this question. Try experimenting with several different compressors to see if the speed/size is OK for your app.
While saving the sensor data, is there an easy way to split it into x megabyte files without data gaps?
A logging framework would do this for you based on the configuration. You could combine several different options: have fixed-size logs and rotate at least once a day, for example.
Generally, the size limit is accurate up to the length of a logged line, so if the data is split into lines of reasonable size, this makes life super easy: one line ends in one file, and the next is written into a new file.
Files also rotate, so the order of the data can be encoded in the file names:
raw_data_<date>.gz
raw_data_<date>.gz.1
raw_data_<date>.gz.2
In the meta code it will look like this:
# Parse where to save data, should we compress data,
# what's the log pattern, how to rotate logs etc
loadLogConfig(...)
# any compression, rotation, flushing etc happens here
# but we don't care, and just write to file
logger.trace(data)
# on shutdown, save any temporary buffer to the files
logger.flush()
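As a concrete illustration with Python's standard logging module (file names, sizes, and the sample values are made up; the rotator/namer hooks are the stock way to gzip each rotated file):

import gzip
import logging
import logging.handlers
import os
import shutil

def gzip_rotator(source, dest):
    # Compress the just-rotated file and remove the uncompressed original.
    with open(source, "rb") as sf, gzip.open(dest, "wb") as df:
        shutil.copyfileobj(sf, df)
    os.remove(source)

handler = logging.handlers.RotatingFileHandler(
    "raw_data.log", maxBytes=50 * 2**20, backupCount=100)  # rotate every ~50 MB
handler.rotator = gzip_rotator
handler.namer = lambda name: name + ".gz"  # raw_data.log.1.gz, raw_data.log.2.gz, ...
handler.setFormatter(logging.Formatter("%(message)s"))

logger = logging.getLogger("sensor")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

datet, millivolts = "2021-01-01 00:00:00.000", 1.234  # stand-ins for one sample
logger.info("%s,%s", datet, millivolts)  # one CSV-style line per sample, as before
logging.shutdown()  # flushes buffers and closes files on shutdown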
I am working with AWS MediaConvert, trying to create a Node.js API which converts .mp4 files to .wav format.
The API is working correctly; however, it creates a new job for each individual .mp4 file.
Is it possible to have one MediaConvert job and use it for every file in the input bucket, instead of creating a new job for every file?
I have tried going through the AWS MediaConvert documentation and various online articles, but I am not able to find any answer to my question.
I have tried to implement my API in the following steps:
Create an object of class AWS.MediaConvert().
Create a job template using MediaConvert.createJobTemplate.
Create a job using MediaConvert.createJob.
There is generally a 1:1 relationship between inputs and jobs in MediaConvert.
A MediaConvert job reads an input video from S3 (or an HTTP server) and converts it into output groups, which in turn can have multiple outputs. A single MediaConvert job can create multiple versions of the input video in different codecs and packages.
The exception to this is when you want to join more than one input file into a single asset (input stitching).
In this case you can have up to 150 inputs in your job. AWS Elemental MediaConvert subsequently creates outputs by concatenating the inputs in the order that you specify them in the job.
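For illustration, a stitching job simply lists several entries in the job settings' Inputs array; the bucket and key names below are placeholders:

# Hypothetical fragment of MediaConvert job settings: two inputs stitched
# into one output asset, concatenated in the order listed.
settings = {
    "Inputs": [
        {"FileInput": "s3://input-bucket/part1.mp4"},
        {"FileInput": "s3://input-bucket/part2.mp4"},
    ],
    # "OutputGroups": [...]  # omitted for brevity
}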
Your question does, however, suggest that input stitching is not what you are looking to achieve. Rather, you are looking to transcode multiple inputs from the source bucket.
If so, you would need to create a job for each input.
Job Templates (as well as Output Presets) work to speed up your job setup by providing groups of recommended transcoding settings. Job templates apply to an entire transcoding job whereas output presets apply to a single output of a transcoding job.
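A rough sketch of that per-file loop in Python with boto3 (the Node.js SDK calls are analogous; the bucket, role ARN, and template name are placeholders):

import boto3

s3 = boto3.client("s3")
# MediaConvert requires an account-specific endpoint, discovered once.
endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", endpoint_url=endpoint)

# One job per .mp4 object in the input bucket, all sharing one job template.
for obj in s3.list_objects_v2(Bucket="input-bucket").get("Contents", []):
    if not obj["Key"].endswith(".mp4"):
        continue
    mc.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # placeholder ARN
        JobTemplate="mp4-to-wav-template",  # created once via createJobTemplate
        Settings={"Inputs": [{"FileInput": f"s3://input-bucket/{obj['Key']}"}]},
    )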
References:
Step 1: Specify your input files: https://docs.aws.amazon.com/mediaconvert/latest/ug/specify-input-settings.html
Assembling multiple inputs and input clips with AWS Elemental MediaConvert: https://docs.aws.amazon.com/mediaconvert/latest/ug/assembling-multiple-inputs-and-input-clips.html
Working with AWS Elemental MediaConvert job templates: https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-job-templates.html
Working with AWS Elemental MediaConvert output presets: https://docs.aws.amazon.com/mediaconvert/latest/ug/working-with-presets.html
What is the maximum number of channels we can create in an audio file with the FFmpeg amerge filter?
We have a requirement to merge multiple single-channel audio files into a single multi-channel audio file.
Each channel represents a speaker in the audio file.
I tried the amerge filter and could do it for up to 8 files. I get a blank audio file when I try it with 10 audio files, and the FFmpeg amerge command doesn't produce any error either.
Can I create an N-channel audio file from N files, where N may be 100+? Is it possible?
I am new to these audio APIs etc., so any guidance is appreciated.
What is the maximum number of channels we can create in an audio file with the FFmpeg amerge filter? We have a requirement to merge multiple single-channel audio files into a single multi-channel audio file.
Max inputs is 64. According to ffmpeg -h filter=amerge:
inputs <int> ..F.A...... specify the number of inputs (from 1 to 64) (default 2)
Or look at the source code at libavfilter/af_amerge.c and refer to SWR_CH_MAX.
Can I create an N-channel audio file from N files, where N may be 100+? Is it possible?
Chain multiple amerge filters, with a max of 64 inputs per filter, or use the amix filter, which has a max of 32767.
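As an illustration, a small Python helper that builds the FFmpeg command for N mono files (file names are placeholders; for N > 64 you would have to merge in chunks first, per the limit above):

import subprocess

files = ["spk01.wav", "spk02.wav", "spk03.wav"]  # hypothetical mono inputs, N <= 64

cmd = ["ffmpeg"]
for f in files:
    cmd += ["-i", f]
# amerge keeps one channel per mono input, so speaker i ends up on channel i.
cmd += ["-filter_complex", f"amerge=inputs={len(files)}", "out.wav"]
subprocess.run(cmd, check=True)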
I have OSGB models created with one of these tools: AgiSoft, Bentley, SkylineSoft, or Pix4D. Currently each tile in the output folder is divided into at least two files: one .osgb file and one or several texture files (.jpg).
I have a deployment problem with the number of output files: in large models it can reach millions of files, and copying them to the target computer takes a long time. Is it possible to export from the above software to an OSGB format where one file can contain several tiles/textures?
Thank you!
I'm wondering if it's possible to concatenate a series of mp3 files in real time to form a live stream.
For example, in some directory I have file1.mp3, file2.mp3, file3.mp3 - each file is 1 minute in duration.
I want to serve an mp3 stream that I could open in a web browser or on a phone, etc., which joins all these files together to form a 3-minute stream. However, say I'm 2 minutes into the stream and upload another file, file4.mp3, also 1 minute in duration, to that directory. I would want it to be added automatically to the end of my live stream, so that when file3.mp3 finishes, file4.mp3 starts straight away.
I hope I explained myself well. I am just keen to know:
1) Is there a name for what I am trying to achieve?
2) Is what I am doing possible with current technologies?
I think HTTP Live Streaming is what you're looking for. http://en.m.wikipedia.org/wiki/HTTP_Live_Streaming
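The gist of HLS is that the client repeatedly re-fetches a small text playlist while the server appends segments to it. A toy sketch of writing such a live playlist in Python (directory, segment names, and durations are made up; a real deployment would cut the mp3s into proper segments with a tool like ffmpeg):

import os

SEGMENT_DIR = "stream"  # hypothetical directory of 60-second audio segments

def write_playlist(directory):
    segments = sorted(f for f in os.listdir(directory) if f.endswith(".mp3"))
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:60",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for seg in segments:
        lines += ["#EXTINF:60.0,", seg]
    # No #EXT-X-ENDLIST tag: its absence marks the playlist as live,
    # so players keep polling for newly appended segments (e.g. file4.mp3).
    with open(os.path.join(directory, "live.m3u8"), "w") as f:
        f.write("\n".join(lines) + "\n")

write_playlist(SEGMENT_DIR)  # re-run whenever a new file lands in the directory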