Convert GPGSV to GPRMC or to Google map location? - nmea

Currently I am using a U-Blox 7M GPS module with an Arduino; at run time it prints values like this on the Serial Monitor:
$GPRMC,204843.00,V,,,,,,,151019,,,N*79
$GPVTG,,,,,,,,,N*30
$GPGGA,204843.00,,,,,0,03,11.41,,,,,,*69
$GPGSA,A,1,28,30,07,,,,,,,,,,11.46,11.41,1.00*08
$GPGSV,3,1,12,01,16,047,,02,02,199,,04,01,168,,06,31,170,*74
$GPGSV,3,2,12,07,20,136,24,11,11,042,,13,41,260,,15,18,290,09*76
$GPGSV,3,3,12,17,75,340,,19,69,256,,28,46,029,20,30,53,130,22*79
$GPGLL,,,,,204843.00,V,N*43
It has been running for about an hour and keeps printing these values. How do I convert the $GPGSV values into a Google Maps location?

$GPRMC will give you the data you want. A typical sentence looks like this:
$GPRMC,225446,A,4916.45,N,12311.12,W,000.5,054.7,191194,020.3,E*68
In that example, the coordinates are 49 deg 16.45 min North, 123 deg 11.12 min West. (Example from: http://aprs.gids.nl/nmea/#rmc)
In your data, however, you don't have a good enough signal to get a fix (the RMC status field is V and the position fields are empty). That's why you're not seeing coordinates in your data.
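Once the receiver does get a fix, converting the RMC latitude/longitude fields (ddmm.mmmm / dddmm.mmmm plus a hemisphere letter) into the decimal degrees that Google Maps expects is only a few lines. A minimal Python sketch using the example sentence above (not Arduino code, just to illustrate the arithmetic):
def nmea_to_decimal(value, hemisphere):
    """Convert an NMEA ddmm.mmmm (or dddmm.mmmm) field to decimal degrees."""
    if not value:
        return None                         # empty field: no fix yet
    degrees = int(float(value) // 100)      # degrees part (dd or ddd)
    minutes = float(value) - degrees * 100  # minutes part (mm.mmmm)
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

sentence = "$GPRMC,225446,A,4916.45,N,12311.12,W,000.5,054.7,191194,020.3,E*68"
fields = sentence.split(",")
status = fields[2]                          # 'A' = valid fix, 'V' = void (no fix)
lat = nmea_to_decimal(fields[3], fields[4])
lon = nmea_to_decimal(fields[5], fields[6])

if status == "A":
    # Decimal degrees are what a Google Maps URL accepts
    print(f"https://www.google.com/maps?q={lat:.6f},{lon:.6f}")
else:
    print("No fix yet")
In the sentences you posted the status is still V and the position fields are empty, so this sketch would print "No fix yet".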

Related

How to get a timestamp (for Time Series analysis) when data is in seconds, using python?

I have a dataset (see image 1) that I want to analyze for anomalies.
The story of the dataset is the following:
I measured temperature (TPU, CPU), memory (MemUsed) and time (Time) for each inference (an image classification task).
I also made a cumulative sum of the 'Time' column in order to get the time that the whole process of classifying 70000 images will take. This is around 7000 seconds.
The following code is what I have so far for getting a Timestamp (aka '_time'); Image 2 shows what the result looks like.
# Return the index with frequency (RangeIndex to DatetimeIndex)
df1_MAX['_time'] = pd.to_datetime(df1_MAX['TimeTotal'])  # string to DatetimeIndex
df1_MAX = df1_MAX.set_index('_time')
df1_MAX
As I am working with data in seconds, how can I get a proper Timestamp? What format do I need to use?
Thank you
Image 1
Image 2
EDIT: using timedelta64.
EDIT 2: I changed 'TimeTotal' to milliseconds and used 'timedelta64[ms]' accordingly:
df1_MAX['_time'] = pd.to_datetime("2020-07-06 10:53:00") + df1_MAX['TimeTotal'].astype('timedelta64[ms]')
df1_MAX=df1_MAX.set_index('_time')
df1_MAX
You can make it a timedelta:
df['TIME'] = df['Time'].astype('timedelta64[s]')
If you want a datetime stamp instead, say you began at 2021-07-06 10:53:02, just add the timedelta to that start datetime.
Data
df = pd.DataFrame({"Time": [121.83,101.22],"score": [1,2],"Label": ["trimaran", "trimaran"]})
Solution
df['DateTime'] = pd.to_datetime("2021-07-06 10:53:02") + df['Time'].astype('timedelta64[s]')
Outcome
Time score Label DateTime
0 121.83 1 trimaran 2021-07-06 10:55:03
1 101.22 2 trimaran 2021-07-06 10:54:43
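One caveat, visible in the outcome above: .astype('timedelta64[s]') keeps only whole seconds (121.83 s became 121 s), and very recent pandas versions may not accept that cast from a float column at all. If you want to keep the fractional seconds, pd.to_timedelta does the same job; a small sketch with the same data:
import pandas as pd

df = pd.DataFrame({"Time": [121.83, 101.22],
                   "score": [1, 2],
                   "Label": ["trimaran", "trimaran"]})

# pd.to_timedelta keeps the fractional part of the seconds
df["DateTime"] = pd.to_datetime("2021-07-06 10:53:02") + pd.to_timedelta(df["Time"], unit="s")
print(df)   # DateTime now ends in 10:55:03.830 and 10:54:43.220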

How to aggregate data by period in a rrdtool graph

I have an RRD file with average ping times to a server (a GAUGE, one sample per minute). When the server is offline (which is very frequent, for reasons that don't matter now) it stores NaN/unknown.
I'd like to create a graph with the percentage of time the server is offline each hour, which I think can be achieved by counting the NaNs within each 60 samples and dividing by 60.
So far I have got to the point where I define a variable that is 1 when the server is offline and 0 otherwise, but I have already read the docs and don't know how to aggregate this:
DEF:avg=server.rrd:rtt:AVERAGE CDEF:offline=avg,UN,1,0,IF
Is it possible to do this when creating a graph, or will I have to store that info in another RRD?
I don't think you can do exactly what you want, but you have a couple of options.
You can define a sliding-window average that shows the percentage of the previous hour that was unknown, and graph that using TREND (or TRENDNAN).
DEF:avg=server.rrd:rtt:AVERAGE:step=60
CDEF:offline=avg,UN,100,0,IF
CDEF:pcoffline=offline,3600,TREND
LINE:pcoffline#ff0000:Offline
This defines avg as the 1-minute time series of ping data. Note that we use step=60 to ensure we get the best resolution of data even in a smaller graph. Then we define offline as 100 when the server is unreachable (the ping value is unknown) and 0 when it responds. Finally, pcoffline is a 1-hour sliding-window average of this, which is in effect the percentage of time during the previous hour that the server was offline.
However, there's a problem: RRDTool will silently consolidate the source data before you get your hands on it if there are many data points per pixel in the graph (this won't happen when doing a fetch, of course). To get around that, you would need to have the offline calculation done at store time, i.e. have a COMPUTE-type DS that is 100 or 0 depending on whether the rtt DS is unknown. Then any averaging will preserve the data (normal averaging omits the unknowns, or the xff setting makes the whole CDP unknown).
rrdtool create ...
DS:rtt:GAUGE:120:0:9999
DS:offline:COMPUTE:rtt,UN,100,0,IF
rrdtool graph ...
DEF:offline=server.rrd:offline:AVERAGE:step=3600
LINE:offline#ff0000:Availability
If you are able to modify your RRD, and do not need historical data, then use of a COMPUTE in this way will allow you to display your data in a 1-hour stepped graph as you wanted.
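If you would rather do the hourly aggregation outside RRDTool (for example on the output of rrdtool fetch), the count-the-NaNs idea from the question is only a few lines. A Python sketch with made-up per-minute values, where NaN stands for a minute without a ping reply:
import math

# One value per minute, e.g. parsed from `rrdtool fetch server.rrd AVERAGE`;
# NaN means the server did not answer that minute (made-up sample data).
samples = [12.3, 11.8, float("nan"), float("nan"), 13.1] + [12.0] * 55

def offline_percentage(minute_samples):
    """Percentage of unknown (NaN) samples in a block of per-minute data."""
    nan_count = sum(1 for v in minute_samples if math.isnan(v))
    return 100.0 * nan_count / len(minute_samples)

# Split the series into 60-sample (one hour) chunks and aggregate each one
hours = [samples[i:i + 60] for i in range(0, len(samples), 60)]
for n, hour in enumerate(hours):
    print(f"hour {n}: {offline_percentage(hour):.1f}% offline")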

HLS protocol: get absolute elapsed time during a live streaming

I have a very basic question; I don't know whether I searched for the wrong thing or the answer is so simple that I just haven't seen it.
I'm implementing a web app using the hls.js JavaScript library and I need a way to get the absolute elapsed time of a live stream, e.g. if a user joins the live stream after 10 minutes, I need a way to detect that the user's 1st second is the 601st second of the stream.
Inspecting the stream fragments I found information like startPTS and endPTS, but this information is always relative to the chunks I retrieve rather than to the whole stream, e.g. if a user joins the live stream after 10 minutes and the chunk duration is 2 seconds, the first chunk I get will have startPTS = 0 and endPTS = 2, the second chunk startPTS = 2 and endPTS = 4, and so on (rounding the values to the nearest integer).
Is there a way to extract the absolute elapsed time I need from an HLS live stream?
I had the exact same need on iOS (AVPlayer) and came up with the following solution:
Read the m3u8 manifest; for me it looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:410
#EXT-X-TARGETDURATION:8
#EXTINF:8.333,
410.ts
#EXTINF:8.333,
411.ts
#EXTINF:8.334,
412.ts
#EXTINF:8.333,
413.ts
#EXTINF:8.333,
414.ts
#EXTINF:8.334,
415.ts
Observe that the earlier segments (sequence numbers below 410) are no longer part of the manifest.
Multiply EXT-X-MEDIA-SEQUENCE by EXT-X-TARGETDURATION and you have an approximation of the clock time for the first available segment.
Note also that each segment is not exactly 8 s long, so by using the target duration I accumulate an error of about 333 ms per segment:
410 * 8 = 3280 seconds = 54.6666 minutes
In this case the segments are always 8.333 or 8.334 s long, so using the EXTINF durations instead I get:
410 * 8.333 = 3416.53 seconds = 56.9421 minutes
This value of roughly 56.94 minutes is still an approximation (we don't know exactly how many times the remaining 0.001 s error accumulated), but it is much closer to the real clock time.
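The same bookkeeping is easy to script. The sketch below (plain Python, run against the manifest quoted above; in the browser you would read the same tags from the playlist hls.js has already loaded) takes EXT-X-MEDIA-SEQUENCE and the average EXTINF duration of the visible segments to estimate how far into the stream the first available segment starts. It remains an approximation, for the reason just described:
import re

manifest = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:410
#EXT-X-TARGETDURATION:8
#EXTINF:8.333,
410.ts
#EXTINF:8.333,
411.ts
#EXTINF:8.334,
412.ts
#EXTINF:8.333,
413.ts
#EXTINF:8.333,
414.ts
#EXTINF:8.334,
415.ts"""

media_sequence = int(re.search(r"#EXT-X-MEDIA-SEQUENCE:(\d+)", manifest).group(1))
durations = [float(d) for d in re.findall(r"#EXTINF:([\d.]+)", manifest)]

# Use the average duration of the segments still visible in the playlist to
# estimate how much stream time the segments that already dropped out covered.
avg_duration = sum(durations) / len(durations)
elapsed_at_first_segment = media_sequence * avg_duration

print(f"first available segment starts ~{elapsed_at_first_segment:.1f} s "
      f"({elapsed_at_first_segment / 60:.2f} min) into the stream")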

Strategy for data handling/custom colors in ShaderMaterial

I need strategic help to get going.
I need to animate a large network of THREE.LineSegments (about 150k segments, static) with custom (vertex) colors (RGBA). For each segment I have 24 measured values per day over 7 days [with two vertices per segment that is 5.04 × 10^7 values, i.e. a vec4 color buffer of 2.016 × 10^8 floats, about 770 MB as a Float32Array].
The animation steps through each hour of each day in 2.5-second steps and needs to apply an interpolated color to each segment on a per-frame basis (via a time delta). To be able to apply an alpha value to the vertex colors I need to use a THREE.ShaderMaterial with a vec4 color attribute.
What I can't get my head around is how best to handle that amount of data per vertex. Some ideas are to:
calculate the RGBA values in the render loop (interpolating between the current hour's color array and the next hour's via a time delta) and update the color buffer attribute [I expect a massive drop in frame rate];
have a currentColor and a nextColor buffer attribute (current hour and next hour), upload both anew to the GPU at every step (2.5 s) and do the interpolation in the shader (with an additional time-delta uniform) [seems the best option to me];
upload all the data to the GPU initially, either as multiple custom attributes or as one very large buffer array, and do the iteration and interpolation in the shader [might be even better if possible; I know I can set the offset into the buffer array to read from for each vertex, but I'm not sure it works the way I think it does...];
do something in between, like uploading the data for one day in a chunk instead of either all the data or hourly data.
Do the scenario and the ideas make sense? If so:
What would be the most suitable way of doing this?
I appreciate any additional advice.
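For what it's worth, the interpolation itself (whether done on the CPU as in the first idea or in the vertex shader as in the second, e.g. mix(currentColor, nextColor, t) in GLSL) is just a linear blend of the two hourly RGBA buffers. A small NumPy sketch of that math with made-up buffer sizes, only to illustrate the idea outside of JavaScript/GLSL:
import numpy as np

n_vertices = 300_000   # ~150k line segments * 2 vertices each (illustrative)
current_color = np.random.rand(n_vertices, 4).astype(np.float32)  # RGBA, hour h
next_color = np.random.rand(n_vertices, 4).astype(np.float32)     # RGBA, hour h+1

def interpolated_colors(t):
    """Linear blend for t in [0, 1], the fraction of the 2.5 s step elapsed."""
    return current_color * (1.0 - t) + next_color * t

frame_colors = interpolated_colors(1.0 / 2.5)  # e.g. 1.0 s into the 2.5 s step
print(frame_colors.shape)                      # (300000, 4)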

Understanding audio file spectrogram values

I am currently struggling to understand how the power spectrum is stored in the Kaldi framework.
I seem to have successfully created some data files using
$cmd JOB=1:$nj $logdir/spect_${name}.JOB.log \
compute-spectrogram-feats --verbose=2 \
scp,p:$logdir/wav_spect_${name}.JOB.scp ark:- \| \
copy-feats --compress=$compress $write_num_frames_opt ark:- \
ark,scp:$specto_dir/raw_spectogram_$name.JOB.ark,$specto_dir/raw_spectogram_$name.JOB.scp
This gives me a large file with data points for different audio files, like this.
The problem is that I am not sure how to interpret this data set. I know that an FFT is performed at some point, which I guess is a good thing.
The output example given above is from a file which is 1 second long.
All the defaults were used for computing the spectrogram, so the sample frequency should be 16 kHz, the frame length 25 ms and the frame shift 10 ms.
The number of data points in the first set is 25186.
Given this information, can I interpret the output in some way?
Usually when one performs an FFT, the frequency bin size can be extracted as F_s/N = bin_size, where F_s is the sample frequency and N is the FFT length. So is this the same case, i.e. 16000/25186 = 0.6... Hz/bin?
Or am I interpreting it incorrectly?
Usually when one performs an FFT, the frequency bin size can be extracted as F_s/N = bin_size, where F_s is the sample frequency and N is the FFT length.
So is this the same case, i.e. 16000/25186 = 0.6... Hz/bin?
The formula F_s/N is indeed what you would use to compute the frequency bin size. However, as you mention, N is the FFT length, not the total number of samples. Based on the approximately 25 ms frame length, the 10 ms hop size, and the fact that your generated output data file has 98 lines of 257 values for some presumably real-valued input, it would seem that the FFT length used was 512. This would give you a frequency bin size of 16000/512 = 31.25 Hz/bin.
Based on this scaling, plotting your raw data with the following Matlab script (with the data previously loaded in the Z matrix):
fs = 16000; % 16 kHz sampling rate
hop_size = 0.010; % 10 millisecond
[X,Y]=meshgrid([0:size(Z,1)-1]*hop_size, [0:size(Z,2)-1]*fs/512);
surf(X,Y,transpose(Z),'EdgeColor','None','facecolor','interp');
view(2);
xlabel('Time (seconds)');
ylabel('Frequency (Hz)');
gives this graph (the dark red regions are the areas of highest intensity):
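For anyone who prefers to sanity-check those numbers in Python rather than Matlab, the frame and bin bookkeeping can be reproduced in a few lines (same assumptions as above: 16 kHz sampling, 25 ms window, 10 ms shift, 512-point FFT, Kaldi's default snip-edges framing):
fs = 16000           # sample rate (Hz)
frame_len = 0.025    # 25 ms analysis window -> 400 samples
hop = 0.010          # 10 ms frame shift -> 160 samples
n_fft = 512          # smallest power of two >= 400 samples

n_bins = n_fft // 2 + 1   # one-sided spectrum of a real signal -> 257 bins
bin_hz = fs / n_fft       # 31.25 Hz per bin

# Frames in a 1-second file with snip-edges framing (only full windows kept)
n_samples = fs * 1
n_frames = 1 + (n_samples - int(frame_len * fs)) // int(hop * fs)   # 98 frames

print(n_frames, n_bins, n_frames * n_bins, bin_hz)   # 98 257 25186 31.25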
