Mediainfo.dll duration parameter doesn't return seconds - visual-c++

I am working on a Visual C++ project, and I need to get the duration of a movie from a chosen file. I use MediaInfo.dll to retrieve this information (movieFile->General->DurationString;). The problem is that when the duration is more than one hour, I don't get the seconds, i.e. the seconds are always displayed as 00. When the duration is less than one hour, everything is fine. I had also tried movieFile->General->DurationMillis;, which returns the duration in milliseconds, but there, too, the seconds come out as 00. Does anyone know what might be the problem?

I don't know which intermediate layer you use, but from MediaInfo, MediaInfo::Get(Stream_General, 0, "Duration") returns a value in milliseconds for sure.
MediaInfo::Get(Stream_General, 0, "Duration/String3") will return duration in "HH:MM:SS.mmm" format.
Jérôme, developer of MediaInfo
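
For illustration, here is a minimal sketch in Python (it assumes the pymediainfo wrapper rather than the C++ DLL interface the question uses, and a hypothetical input file): the general track's duration is reported in milliseconds, so the seconds only need to be formatted out of that value.

from pymediainfo import MediaInfo  # assumption: pymediainfo wrapper around MediaInfo is installed

media_info = MediaInfo.parse("movie.mkv")   # hypothetical input file
general = next(t for t in media_info.tracks if t.track_type == "General")
duration_ms = float(general.duration)       # MediaInfo reports the duration in milliseconds

hours, rest = divmod(duration_ms / 1000.0, 3600)
minutes, seconds = divmod(rest, 60)
print("%02d:%02d:%06.3f" % (hours, minutes, seconds))  # e.g. 01:23:45.678, seconds included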

Related

OpenTripPlanner script slower on days other than today

I'm making use of OpenTripPlanner via the Jython scripting method explained here: http://docs.opentripplanner.org/en/latest/Scripting/
(specifically 'Using OTP as a library') and am using a script very similar to their example script.
For testing purposes I have two CSV files containing 40 locations each. The locations are inside the Netherlands and I have loaded both the Dutch GTFS feed and the map. The strange thing is that the code that calculates the public transport trip times (line 32 in the example script: res = spt.eval(colleges), using modes WALK,TRANSIT) takes longer when I specify a day other than today.
An example:
req.setDateTime(2018, 12, 8, 16, 00, 00) # today
spt.eval(my_data) # -> takes ~7 - 10 seconds
req.setDateTime(2018, 12, 7, 16, 00, 00) # yesterday
spt.eval(my_data) # -> takes ~30 - 40 seconds
When not setting req.setDateTime(), spt.eval() is even faster. Note that I ran the script on the 6th, for the 6th, as well, and it was fast then too, so it's certainly related to "today" and not specifically the 8th.
Of course my primary question is, how do I make it fast for days other than today? (my main interest is actually tomorrow)
Is it related to when the OTP instance is started or is it some internal optimization? I don't think it's related to the building of the graph because that was built a couple of days ago. I was looking into providing a day or datetime setting when initializing OTP but am unable to find that in the docs.
(I haven't tried messing with my system time yet, but that's also an option I'm not very fond of). Any ideas or comments are welcome. If necessary I will provide a reproducible sample tomorrow.
This problem was actually caused by how I used req.setDateTime() in combination with req.setMaxTimeSec().
Basically, setMaxTimeSec() uses the date set by setDateTime() as a starting point and defines a worstTime (i.e. the latest possible time) as that datetime + maxTimeSec. However, if setDateTime() has not yet been called when you call setMaxTimeSec(), the current datetime is used instead. This consequently causes problems when you happen to call setDateTime() AFTERWARDS. Example:
setMaxTimeSec(60*60) # Sets worst time to now + 1 hour
setDateTime(yesterday) # Sets departure time to yesterday
This example leaves a very long time window to search for solutions! Instead of looking within a one-hour window, we are now looking in a window of roughly 25 hours!
Anyway, a simple solution is to first call setDateTime(), and then setMaxTimeSec():
setDateTime(yesterday) # Sets departure time to yesterday
setMaxTimeSec(60*60) # Sets worst time to yesterday + 1 hour
Alternatively, if for some reason you can't switch these calls, you can always correct the setMaxTimeSec() value by the time difference between now and your setDateTime() value:
from datetime import datetime
import time

date = datetime.strptime('2019-01-08 21:00', '%Y-%m-%d %H:%M')
date_seconds = time.mktime(date.timetuple())
now_seconds = time.mktime(datetime.now().timetuple())
date_diff_seconds = int(round(date_seconds - now_seconds))
req.setMaxTimeSec(60*60 + date_diff_seconds)  # one-hour window, shifted to the requested departure time
req.setDateTime(date.year, date.month, date.day, date.hour, date.minute, 00)

Is it possible to get milliseconds from Python 3's native time library?

I have been trying to find a nice way to format a timestamp of the current date & time with milliseconds using Python 3's native time library.
However, there's no directive for milliseconds in the standard documentation: https://docs.python.org/3/library/time.html#time.strftime.
There are undocumented directives, though, like %s, which gives the Unix timestamp. Are there any other directives like this?
Code example:
>>> import time
>>> time.strftime('%Y-%m-%d %H:%M:%S %s')
'2017-08-28 09:27:04 1503912424'
Ideally I would just want to change the trailing %s to some directive for milliseconds.
I'm using Python 3.x.
I'm fully aware that it's quite simple to get milliseconds using the native datetime library; however, I'm looking for a solution that uses the native time library alone.
If you insist on using time only:
milliseconds = time.time() % 1 * 1000
time.time() accurately returns the time since the epoch as a float. Since you already have the date down to the second, you don't really care that this is a fraction of a second: that remaining fraction is exactly what you need to add to what you already have to get the accurate timestamp. % 1 extracts the fraction, and multiplying by 1000 converts it to milliseconds.
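Putting that together with strftime, a minimal sketch using only the time module might look like this:

import time

t = time.time()                                    # float seconds since the epoch
millis = int(t % 1 * 1000)                         # fractional part -> milliseconds
stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t))
print('%s.%03d' % (stamp, millis))                 # e.g. '2017-08-28 09:27:04.123'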
Note, however, what the documentation says about time.time() (https://docs.python.org/3/library/time.html#time.time): even though the time is always returned as a floating-point number, not all systems provide time with a better precision than 1 second, and while this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls. This means there is no guaranteed way to do what you want. You may be able to do something more robust with process_time, but that would have to be elaborate.

create local movie out of `CVPixelBufferRef` which gets trimmed at the beginning after some time

I have lots of CVPixelBufferRefs that I would like to append to a movie in "real time", i.e. I get 50-60 CVPixelBufferRefs per second (they are frames) and would like to create a local video out of them.
Even better would be if I could have that video be "floating", meaning it should always be two minutes long: as soon as I have a two-minute video, it should start to get trimmed at the beginning.
(How) is this possible?
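
Not an AVFoundation answer, but as a purely conceptual sketch of the "floating" window described above (plain Python with hypothetical frame objects): keep the incoming frames in a queue and drop everything older than two minutes from the front as new frames arrive.

from collections import deque

WINDOW_SECONDS = 120.0      # keep only the most recent two minutes
frames = deque()            # (timestamp, frame) pairs, oldest first

def append_frame(timestamp, frame):
    frames.append((timestamp, frame))
    # trim at the beginning once the window exceeds two minutes
    while frames and timestamp - frames[0][0] > WINDOW_SECONDS:
        frames.popleft()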

Mediafilesegmenter inserts timed metadata ID3 tags in HLS stream but at the wrong point in time

I am inserting timed metadata in an HLS (HTTP Live Streaming) stream using id3taggenerator and mediafilesegmenter. I have followed the instructions from Jake's Blog.
First, I create the id3tag using id3taggenerator:
id3taggenerator -o text.id3 -t "video"
Then add the tag to the id3macro file:
0 id3 /path/to/file/text.id3
And segment the video and insert the id3 tags with mediafilesegmenter:
mediafilesegmenter -M /path/to/id3macro -I -B "my_video" video.mp4
However, the timed metadata is inserted at the wrong point in time. Instead of showing up at the beginning of the video (time 0), it is added with a delay of 10 s (give or take 0.05 seconds, sometimes more, sometimes less).
I've written a simple iOS player app that logs whenever it is notified of an ID3 tag in the video. The app is only notified of the ID3 tag after playing the video for around 10 seconds. I've also tried another id3macro file, with multiple timed metadata entries in the video (around 0 s, 5 s and 7 s), all showing up with the same approximate delay. I have also changed the segment duration to 5 s, but each time the result is the same.
The mediafilesegmenter I am using is Beta Version 1.1(140602).
Can anyone else confirm this problem, or pinpoint what I am doing wrong here?
Cheers!
I can confirm that I experience the same issue, using the same version of mediafilesegmenter:
mediafilesegmenter: Beta Version 1.1(140602)
Moreover, I can see that the packet with the ID3 is inserted at the right moment in the stream. E.g. if I specify a 10-second delay, I can see that my ID3 is inserted at the end of the first 10-second segment.
However, it appears 10 seconds later in iOS notifications.
I can see the following possible reasons:
mediafilesegmenter inserts the metadata packet in the right place, but its timestamp is delayed by 10 seconds for some reason. Therefore, clients (e.g. the iOS player) show the tag 10 seconds later. Apple's tools are not well documented, so this is hard to verify.
Maybe the iOS player receives the metadata in time (I know the tag was included in the previous segment file) but issues the notification with a 10-second delay, for whatever reason.
I cannot dig further because I don't have any Flash/desktop HLS players that support in-stream ID3 tags. If I had one, I would check whether a desktop player displays/processes the ID3 in time, without delay. If it did, that would mean the problem is iOS, not mediafilesegmenter.
Another useful thing to do would be extracting the MPEG-TS frame with the ID3 tag from the segment file and checking its headers, looking for anything strange there (e.g. a wrong timestamp).
Update:
I did some more research including reverse engineering of TS segments created with Apple tools, and it seems:
mediafilesegmenter starts PTS (presentation time stamps) from 10 seconds while, for example, ffmpeg starts from 0.
mediafilesegmenter adds ID3 frame at the correct place in TS file but with wrong PTS that is 10 seconds ahead of what was specified in meta file.
While the first point doesn't seem to affect playback (as far as I understand, it's more important that the PTS increases continuously, not where it starts), the second is definitely an issue and the reason you/we are experiencing the problem.
Therefore, the iOS player receives the ID3 frame in time, but since its PTS is 10 seconds ahead, it waits 10 seconds before issuing the notification. As far as I can tell, some other players simply ignore this ID3 frame because it's in the wrong place.
As a workaround, you can shift all the ID3 entries in your meta file 10 seconds earlier, but obviously you won't be able to put anything in the first 10 seconds.
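As a rough sketch of that workaround (plain Python, assuming the simple "time id3 /path" macro format shown above and hypothetical file names): rewrite the meta file with every timestamp shifted 10 seconds earlier, dropping entries that would become negative.

OFFSET = 10.0  # observed PTS offset in seconds

with open("id3macro") as src, open("id3macro.shifted", "w") as dst:
    for line in src:
        parts = line.strip().split(None, 2)   # "<time> id3 <path>"
        if len(parts) != 3:
            continue
        shifted = float(parts[0]) - OFFSET    # move the tag 10 s earlier
        if shifted < 0:
            continue                          # nothing can be placed in the first 10 s
        dst.write("%g %s %s\n" % (shifted, parts[1], parts[2]))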

Explain to me the difference in how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed to its input.
In its per-target options, MRTG has 'gauge', 'absolute' and the default (no option) behaviour for what to do with the incoming data, or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from the network interface statistics, i.e. 'how many packets were received by the interface'.
We take it from /proc/net/dev or look at the ifconfig output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So, as far as I can imagine, there could be two types of statistics:
1. How fast this value changes over the time interval. In other words: activity.
2. A simple, as-is growing graph that just plots every new value each minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' plots values as-is, without any differentiation (good for measuring how much memory or CPU is used every 5 minutes), and that the default or 'absolute' behaviour tries to calculate the rate between neighbouring measurements, but what's the difference between the last two?
Can you explain in a simple manner which behaviour corresponds to which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, possibly wrapping around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for gauge types, where the values are small, than for counter types, where the values are large.
For more information on this, see Alex van der Bogaerdt's excellent tutorial.
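
To make the distinction concrete, here is a minimal sketch (ordinary Python, not MRTG code) of how each data-source type turns a new reading into the rate that gets stored, ignoring Data Normalisation and the finer points of wrap detection:

def to_rate(dstype, value, prev_value, seconds_since_update):
    # Sketch of how MRTG/RRD data-source types interpret a reading.
    if dstype == "gauge":
        return value                                # already a rate; stored as-is
    if dstype == "absolute":
        return value / seconds_since_update         # count since last update -> per second
    if dstype == "counter":
        delta = value - prev_value                  # ever-growing count
        if delta < 0:                               # counter wrapped around
            delta += 2**32 if prev_value < 2**32 else 2**64
        return delta / seconds_since_update
    if dstype == "derive":
        return (value - prev_value) / seconds_since_update  # negative rates allowed
    raise ValueError(dstype)

# e.g. an interface counter that went from 1000000 to 1600000 bytes in 300 s:
print(to_rate("counter", 1600000, 1000000, 300))    # -> 2000.0 bytes per second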
