Wavesurfer: peaks not synchronized when using audio files with a start time

I have an audio file (mka) that has a start time: if I call the getCurrentTime method after loading, I can read the start position, for instance 3 seconds.
But when I load the file and the peaks are generated, they are out of sync by those 3 seconds. How can I draw the peaks with this displacement (3 seconds)?
I am using MediaElementWebAudio, and it fails whether I use the internal peaks generator or load a JSON peaks file.
Thanks in advance,
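One workaround that comes to mind (an untested sketch, not an official wavesurfer.js feature: `shiftPeaks` is a hypothetical helper, and it assumes wavesurfer's documented `load(url, peaks)` signature and that peaks are evenly spaced over the media duration) is to left-pad the peaks array with zeros for the start-time offset before handing it to `load`:

```javascript
// Sketch: shift externally generated peaks right by the media's start time,
// so the drawn waveform lines up with getCurrentTime().
function shiftPeaks(peaks, startTime, duration) {
  const peaksPerSecond = peaks.length / duration; // assumes even spacing
  const pad = Math.round(startTime * peaksPerSecond);
  return new Array(pad).fill(0).concat(peaks);
}

// Hypothetical usage with a peaks JSON file and a 3 s start time:
// wavesurfer.load('audio.mka', shiftPeaks(json.data, 3, json.duration));
```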

Why in Databricks does the last part of a run take a lot of time?

I am using Databricks to run an algorithm on big data, and I am wondering why the last 1% of my run takes so long.
I am writing the results to S3: 111,991 records (out of 116,367) are done in 5 minutes, but the last ~5,000 alone take more than an hour!
Can I fix this issue?
In the following picture it takes an hour for 119 to become 120, even though it reached 119 in a few minutes.
Please check whether you are writing the file in one shot or in chunks.
Writing in one shot can sometimes be slow when the output is switched/committed. Also check whether you are printing logs, as that can add time too.

Spark Structured Streaming metrics: why can the process rate be greater than the input rate?

How come the process rate can be greater than the input rate?
From my understanding, the process rate is the rate at which Spark can process arriving data, i.e. the processing capacity. If so, the process rate should on average be lower than or equal to the input rate. If it is lower, we know we need more processing power, or need to rethink the trigger time.
I am basing my understanding on this blog post and common sense, but I might be wrong. I was also looking for the formal formula in the source code while writing this question.
This is an example where the process rate is constantly greater than the input rate:
You can see that on average we have 200-300 records being processed per second, whereas we have 80-120 records arriving per second.
Setup background: Spark 3.x reading from Kafka and writing to Delta.
Thank you all.
A process rate higher than the input rate just means Spark is processing data faster than it arrives, i.e. it could process 300-400 records per second even though the event rate is only 100 per second. For example, say the input rate is ~100 records per second and Spark processes those 100 records within half a second; it could then process 100 more in the remaining half second, which on average gives a process rate of ~200.
The attached screenshot can be interpreted as follows:
with ~15 s of processing time per batch (based on the ~15,000 ms seen in the latency chart), Spark could process ~3,000 records per batch (~200 × ~15 s), but it is only processing around ~1,000 records per batch.
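The arithmetic above can be sketched directly; the field names mirror Spark's StreamingQueryProgress metrics, but the formulas are my reading of them, not the actual source code:

```javascript
// inputRowsPerSecond: rows that arrived per second of wall-clock time.
// processedRowsPerSecond: rows handled per second of actual processing time.
function rates(numInputRows, wallClockMs, processingMs) {
  return {
    inputRowsPerSecond: numInputRows / (wallClockMs / 1000),
    processedRowsPerSecond: numInputRows / (processingMs / 1000),
  };
}

// 100 rows arrive over one second, but Spark processes them in 0.5 s:
// rates(100, 1000, 500)
//   -> inputRowsPerSecond = 100, processedRowsPerSecond = 200
```

So whenever a batch finishes faster than the trigger interval, the process rate ends up above the input rate, which is the healthy case.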

HLS protocol: get absolute elapsed time during a live streaming

I have a very basic question, and I can't tell whether I googled the wrong thing or the answer is so simple that I missed it.
I'm implementing a web app using hls.js as the JavaScript library, and I need a way to get the absolute elapsed time of a live stream: e.g. if a user joins the stream after 10 minutes, I need to detect that the user's first second is the 601st second of the stream.
Inspecting the stream fragments, I found information such as startPTS and endPTS, but it is always relative to the retrieved chunks rather than to the whole stream. E.g. if a user joins the stream after 10 minutes and the chunk duration is 2 seconds, the first chunk I get has startPTS = 0 and endPTS = 2, the second has startPTS = 2 and endPTS = 4, and so on (rounding the values to the nearest integer).
Is there a way to extract the absolute elapsed time from an HLS live stream?
I had the exact same need on iOS (AVPlayer) and came up with the following solution:
Read the m3u8 manifest; for me it looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:410
#EXT-X-TARGETDURATION:8
#EXTINF:8.333,
410.ts
#EXTINF:8.333,
411.ts
#EXTINF:8.334,
412.ts
#EXTINF:8.333,
413.ts
#EXTINF:8.333,
414.ts
#EXTINF:8.334,
415.ts
Observe that the first 409 segments are not part of the manifest.
Multiply EXT-X-MEDIA-SEQUENCE by EXT-X-TARGETDURATION and you get an approximation of the clock time of the first available segment.
Note also that each segment is not exactly 8 s long, so by using the target duration I accumulate an error of about 333 ms per segment:
410 * 8 = 3280 seconds = 54.6666 minutes
In this case my segments are always 8.333 or 8.334 s long, so multiplying by the EXTINF duration instead, I get:
410 * 8.333 = 3416.53 seconds = 56.9421 minutes
These ~56.9421 minutes are still an approximation (since we don't know exactly how much of the remaining 0.001 s error accumulated), but it is much, much closer to the real clock time.
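The same calculation can be sketched in code, averaging the EXTINF durations from the manifest instead of using the target duration (`elapsedBeforeManifest` is a hypothetical helper, and it relies on the same assumption as above: that the stream started at media sequence 0):

```javascript
// Estimate how many seconds of the live stream elapsed before the first
// segment still listed in the manifest.
function elapsedBeforeManifest(manifest) {
  const seq = Number(manifest.match(/#EXT-X-MEDIA-SEQUENCE:(\d+)/)[1]);
  const durations = [...manifest.matchAll(/#EXTINF:([\d.]+),/g)]
    .map((m) => Number(m[1]));
  const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
  return seq * avg;
}
```

Adding the player's current position within the available window to this value then gives the absolute elapsed time for the joining user.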

Thumbnail image generation taking too long

I am generating thumbnails with Azure Media Services.
Sometimes it takes 2 minutes to generate a thumbnail for a 50-second time frame, and sometimes it takes 10 minutes for the same 50-second time frame, even when there is no other job in the queue.
Please suggest a solution for this issue.
Thanks,
Do you have any Reserved Units on your account?
By default, if you have no Media Reserved Units enabled, your job is submitted to the general shared pool, where there is no SLA and it may stay queued until a resource is available (depending on how busy the region is).
See - https://learn.microsoft.com/en-us/azure/media-services/media-services-scale-media-processing-overview

Finding the duration of testing data in SphinxTrain

I am training data via pocketsphinx and SphinxTrain. The training-data duration is shown in the log file; e.g. my current training data is reported as:
Phase 5: Determine amount of training data, see if n_tied_states seems reasonable.
Estimated Total Hours Training: 1.00766111111111
After training, testing is done. For testing I have added 20 files, but I don't know the total length of these files, and finding it manually is hard since I am going to increase the amount of testing data.
So is there a log file, or any other non-manual way, to check the duration of my testing data?
I just found it; I am posting my own answer so it may help others.
You can find it under logdir/decode/dbname-1-1.log,
where dbname is your main folder name; in my case, logdir/decode/tester-1-1.log.
Open this file and there will be a line:
INFO: batch.c(778): TOTAL 81.24 seconds speech, 30.43 seconds CPU, 37.54 seconds wall
Here TOTAL 81.24 seconds speech is the duration of my testing audio data.
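If the testing set keeps growing, that value can also be pulled out programmatically; a small sketch (`speechSeconds` is a hypothetical helper, and the regex just matches the batch.c log format shown above):

```javascript
// Extract the speech duration (in seconds) from a pocketsphinx decode log.
function speechSeconds(logText) {
  const m = logText.match(/TOTAL\s+([\d.]+)\s+seconds speech/);
  return m ? Number(m[1]) : null;
}

// speechSeconds("INFO: batch.c(778): TOTAL 81.24 seconds speech, ...")
// returns 81.24
```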
