avisynth "transparent" overlaying audio - audio

How do I overlay audio onto other audio?
I want to build a FadeIO()-style effect for the audio channels of clips in AviSynth.
I have this script:
# fps_count, audio_sample_rate, logo_timeout, res_width, res_height and blank_logo
# are defined earlier in the script.
video = DirectShowSource("Z:\video\vvv.mp4", fps=fps_count, audio=true, convertfps=true).AssumeFPS(fps_count, 1).ConvertToYV12()
audio = BlankClip(audio_rate=audio_sample_rate, channels=2, length=logo_timeout).ResampleAudio(audio_sample_rate)
blank = BlankClip(logo_timeout, res_width, res_height, pixel_type="RGB32", fps=fps_count).ConvertToYV12()
blank = Overlay(blank, blank_logo, mode="blend", x=0, y=0)
blank = AudioDub(blank, audio).AssumeFPS(fps_count, 1)  # attach the silent audio track to the blank video
Smooth overlaying of parts of videos works like this:
blank.Trim(0 * fps_count, transparent_overlay_latency * fps_count).Overlay(video.Trim(0 * fps_count, transparent_overlay_latency * fps_count), mode="blend", mask=logo.ShowAlpha(), x=0, y=0)
But this works only for the video. The audio starts only once the second clip (video.Trim()) has fully faded in; I want the sound of that video to grow louder as the clip appears.

As @SeedManc said in a comment under the question:
avisynth.nl/index.php/Dissolve — it allows for a gradual transition between two clips, both in terms of video and audio.
This method is very simple and answers my question.
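For reference, a minimal sketch of the Dissolve approach, reusing the clips from the script above (the overlap length in frames is an assumption based on the transparent_overlay_latency variable):

# Dissolve cross-fades the last N frames of the first clip with the
# first N frames of the second clip, for both video and audio.
overlap = transparent_overlay_latency * fps_count
Dissolve(blank, video, overlap)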

Related

How to play sound in Odoo with a single device running

I have designed a method that runs when a product is not found during barcode scanning; I put this code in the product-not-found handler:
import math
import pyaudio

@api.multi  # method on an Odoo model; `api` comes from `from odoo import api`
def _product_sound(self):
    bitrate = 8000   # playback rate in samples per second
    frq = 500        # tone frequency in Hz
    LENGTH = 2       # duration in seconds
    if frq > bitrate:
        bitrate = frq + 100
    numberofframe = int(bitrate * LENGTH)
    restframe = numberofframe % bitrate
    # Build one 8-bit unsigned sample per frame: a sine wave centred on 128.
    # (Python 2 style; on Python 3, accumulate bytes instead of a str.)
    wave = ''
    for x in range(numberofframe):
        wave = wave + chr(int(math.sin(x / ((bitrate / frq) / math.pi)) * 124 + 128))
    for x in range(restframe):
        wave = wave + chr(128)  # pad the remainder with silence
    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(1), channels=1, rate=bitrate, output=True)
    stream.write(wave)
    stream.stop_stream()
    stream.close()
    p.terminate()
When I try this code on a single system it works perfectly, but when I use it from a different device, no sound is generated.
So how can I play sound in Odoo on a different system, not just the current one?

Pyueye image saving with wrong resolution

I am pretty new to programming and I am trying to save a high-megapixel image from an IDS camera using the pyueye module with Python.
My code saves the image, but the problem is that it saves a 1280x720 image inside a 4192x3104 file.
I have no idea why it saves the small image inside the larger file. Does anyone know what I am doing wrong and how I can fix it so the image fills the whole 4192x3104 frame?
from pyueye import ueye
import ctypes

hcam = ueye.HIDS(0)
pccmem = ueye.c_mem_p()
memID = ueye.c_int()
hWnd = ctypes.c_voidp()
ueye.is_InitCamera(hcam, hWnd)
ueye.is_SetDisplayMode(hcam, 0)
sensorinfo = ueye.SENSORINFO()
ueye.is_GetSensorInfo(hcam, sensorinfo)
# Allocate an image buffer at the sensor's full resolution, 24 bits per pixel
ueye.is_AllocImageMem(hcam, sensorinfo.nMaxWidth, sensorinfo.nMaxHeight, 24, pccmem, memID)
ueye.is_SetImageMem(hcam, pccmem, memID)
ueye.is_SetDisplayPos(hcam, 100, 100)
# Capture a single frame, blocking until it is ready
nret = ueye.is_FreezeVideo(hcam, ueye.IS_WAIT)
print(nret)
FileParams = ueye.IMAGE_FILE_PARAMS()
FileParams.pwchFileName = "python-test-image.bmp"
FileParams.nFileType = ueye.IS_IMG_BMP
FileParams.ppcImageMem = None
FileParams.pnImageID = None
# Save whatever is in the active image memory to disk
nret = ueye.is_ImageFile(hcam, ueye.IS_IMAGE_FILE_CMD_SAVE, FileParams, ueye.sizeof(FileParams))
print(nret)
ueye.is_FreeImageMem(hcam, pccmem, memID)
ueye.is_ExitCamera(hcam)
The size of the image depends on the sensor size of the camera. By printing sensorinfo.nMaxWidth and sensorinfo.nMaxHeight you will get the maximum size of the image the camera captures. I think it depends on the model of the camera; for me it is 2056x1542.
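For example, a quick check based on the question's code (nothing new beyond printing the two fields; pyueye's integer types expose their value via .value):

# Print the sensor's native resolution as reported by the driver
print(sensorinfo.nMaxWidth.value, sensorinfo.nMaxHeight.value)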
Could you please elaborate on the last sentence of the question?

How to concatenate two WAV audio files with 30 seconds of white noise using NAudio

I need to concatenate two WAV audio files with 30 seconds of white noise between them.
I want to use the NAudio library, or any other way that works.
How do I do it?
(The difference from other questions is that I don't just need to make one audio file from two different audio files; I also need to add the noise between them.)
Assuming your WAV files have the same sample rate and channel count, you can concatenate them with FollowedBy and use SignalGenerator combined with Take to produce the white noise.
var f1 = new AudioFileReader("ex1.wav");
// 30 seconds of white noise at the same sample rate and channel count as the first file
var f2 = new SignalGenerator(f1.WaveFormat.SampleRate, f1.WaveFormat.Channels) { Type = SignalGeneratorType.White, Gain = 0.2f }.Take(TimeSpan.FromSeconds(30));
var f3 = new AudioFileReader("ex3.wav");
using (var wo = new WaveOutEvent())
{
    wo.Init(f1.FollowedBy(f2).FollowedBy(f3));
    wo.Play();
    while (wo.PlaybackState == PlaybackState.Playing) Thread.Sleep(500);
}
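The snippet above only plays the result. To actually write the concatenation to a WAV file, the same chain can be rendered with NAudio's WaveFileWriter; a minimal sketch (the output file name is hypothetical, and fresh readers are needed if the chain has already been played):

// Render the chain to a 16-bit PCM WAV file instead of playing it
WaveFileWriter.CreateWaveFile16("combined.wav", f1.FollowedBy(f2).FollowedBy(f3));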

How to play a Spotify music stream

First of all, I am new to audio programming, so bear with me.
I am trying to play Spotify music with NAudio or BASS.NET or any other .NET audio library.
As far as I know, libspotify delivers music as raw PCM data. What is the sample rate of a Spotify stream (libspotify)?
From the Spotify docs:
Samples are delivered as integers, see sp_audioformat. One frame consists of the same number of samples as there are channels. I.e. interleaving is on the sample level.
When I try to play a song, libspotify fires a callback with an 8192-byte buffer and:
channels = 2
sample_rate = 44100
num_frames = 2048
That is consistent with 16-bit samples: 2048 frames × 2 channels × 2 bytes per sample = 8192 bytes. I need a little help translating this information to NAudio terms.
I have also tried a Spotify-to-BASS.NET sample (BASSPlayer.cs), but I haven't heard a single note from my speakers yet.
I have tried playing an MP3 song with both NAudio and BASS.NET and that works fine, so the speaker volume is OK.
https://github.com/Alxandr/SpotiFire/blob/master/SpotiFire.Server/BASSPlayer.cs
There has been a breakthrough with NAudio. This is what I came up with by trial and error; I'm not sure it is the right way to derive the parameters from sampleRate/channels...
But the song is playing :-)
IWavePlayer waveOutDevice = new WaveOut();
using (var pcmStream = new FileStream(PcmFile, FileMode.Open))
{
    const int songDuration = 264000;   // milliseconds
    const int sampleRate = 44100;
    const int channels = 2;
    var waveFormat = WaveFormat.CreateCustomFormat(WaveFormatEncoding.Pcm, sampleRate * channels, 1, sampleRate * 2 * channels, channels, 16);
    var waveStream = new RawSourceWaveStream(pcmStream, waveFormat);
    waveOutDevice.Init(waveStream);
    waveOutDevice.Play();
    Thread.Sleep(songDuration);
    waveOutDevice.Stop();
    waveStream.Close();
    waveOutDevice.Dispose();
}
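A note on the format: the CreateCustomFormat call above happens to declare 88200 Hz mono with 2-byte blocks, which yields the same byte rate as 44100 Hz 16-bit stereo, so it plays correctly. NAudio's standard constructor expresses the libspotify format directly; a minimal sketch under the same 44.1 kHz / 16-bit / stereo assumption:

// 44100 Hz, 16 bits per sample, 2 channels - the format libspotify reports
var waveFormat = new WaveFormat(44100, 16, 2);
var waveStream = new RawSourceWaveStream(pcmStream, waveFormat);

For live playback straight from the music_delivery callback (rather than a PCM file dumped to disk), a BufferedWaveProvider created with the same WaveFormat can be filled with each callback buffer via AddSamples.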

Media Foundation: MPEG-4 stream from camera gets distorted when GOV length is greater than 1

I am using Media Foundation on the client side to display a live MPEG-4 stream from an AXIS camera through an RTSP server.
The client-side video works fine if I set the GOV length on the camera to 1, i.e. the camera sends only I-frames. But if the GOV length is increased and the camera starts sending P-frames as well, the video gets distorted at regular intervals. I cannot always set GOV = 1 because it consumes a lot of bandwidth.
Following is the code for the RequestSample method where I supply samples to Media Foundation:
RTPFrame frame = null;
byte[] frameBytes = null;
frame = _VideoJitter.GetNextFrame();
frameBytes = frame.GetFrameAsBytes();
frame.FrameType = RTPFrame.PredictFrameType(frameBytes);
_videoEncapsulatedSample.ReadSampleFrom(frameBytes);
videoSample = _videoEncapsulatedSample.MfSample;
// Stamp the sample with the running presentation time and its duration
long timestamp = nextSampleTimestamp ?? 0;
videoSample.SetSampleTime(timestamp);
duration_video = (long)GetPresentationTime(frame);
videoSample.SetSampleDuration(duration_video);
nextSampleTimestamp = timestamp + duration_video;
// Mark I-frames as clean points (positions where decoding can safely begin)
if (frame.FrameType == FrameType.IFrame)
{
    videoSample.SetUINT32(MFAttributesClsid.MFSampleExtension_CleanPoint, 1);
}
return videoSample;
Do I need to set any attribute for processing P-frames?
Any help would be highly appreciated.
Update (2012/02/22):
I ran some statistics and found that I-frames sometimes never reach the client: with GOV = 15 every 15th frame should be an I-frame, but at irregular intervals the client receives an I-frame only after 28, 30 or 59 P-frames.
Any pointers?
Thanks,
Prateek
