Convert NTP timestamp to UTC - c#-4.0

What's the easiest way to convert an NTP timestamp to UTC? Once it's in UTC, I can convert it into any other format.
Thanks.
Bob.

As rene pointed out, the NTP timestamp is made up of an integer part and a fractional part. The integer part is the number of seconds since the base time, 1st Jan 1900. The fractional part counts units of 2^-32 of a second (dividing by UInt32.MaxValue, as the code below does, gives the same result at millisecond precision).
Also, the time representation is UTC.
An NTP timestamp is a 64-bit unsigned fixed-point number. So suppose you have an NTP timestamp of, say, 14236589681638796952. We can write:
UInt64 ntpTimestamp = 14236589681638796952;
The high 32 bits are given by:
UInt32 seconds = (UInt32)((ntpTimestamp >> 32) & 0xFFFFFFFF);
And the low 32 bits are given by:
UInt32 fraction = (UInt32)(ntpTimestamp & 0xFFFFFFFF);
The number of seconds is the most significant word, in this case:
seconds == 3314714339
The number of milliseconds can be calculated from the fraction like this:
Int32 milliseconds = (Int32)(((Double)fraction / UInt32.MaxValue) * 1000);
Which is 12 in this case.
Thus the DateTime value is yielded by:
DateTime BaseDate = new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
DateTime dt = BaseDate.AddSeconds(seconds).AddMilliseconds(milliseconds);
So the NTP timestamp 14236589681638796952 corresponds to 14th Jan 2005 at 17:58:59 and 12 milliseconds, UTC.
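Putting the steps together, here is a minimal consolidated sketch (the method name is my own, not from the original post):
// Converts a raw 64-bit NTP timestamp (seconds.fraction since 1900-01-01 UTC) to a UTC DateTime.
static DateTime NtpTimestampToUtc(ulong ntpTimestamp)
{
    uint seconds  = (uint)(ntpTimestamp >> 32);                // whole seconds since 1900-01-01
    uint fraction = (uint)(ntpTimestamp & 0xFFFFFFFF);         // units of 2^-32 of a second
    double milliseconds = (fraction / 4294967296.0) * 1000.0;  // 4294967296 = 2^32
    DateTime baseDate = new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
    return baseDate.AddSeconds(seconds).AddMilliseconds(milliseconds);
}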

This works reliably for me:
#define NTP_TIMESTAMP_DIFF    (2208988800)   // 1900 to 1970 in seconds
#define NTP_MAX_INT_AS_DOUBLE (4294967295.0) // Max value of frac

// take care of the endianness
reply_pkt.tx_time_sec  = ntohl(reply_pkt.tx_time_sec);
reply_pkt.tx_time_frac = ntohl(reply_pkt.tx_time_frac);

// parse
time_t tx_time = (time_t)(reply_pkt.tx_time_sec - NTP_TIMESTAMP_DIFF);
double frac = ((double)reply_pkt.tx_time_frac) / NTP_MAX_INT_AS_DOUBLE; // 2^32 - 1

struct tm *tm = gmtime(&tx_time);
char ts[49];
strftime(ts, 48, "[%Y-%m-%d %H:%M:%S]", tm);
printf("NTP query: reply was %s\n", ts);

ntp_time_seconds = ((double)tx_time) + frac;

Try something like this? I'm not sure of the exact format of that 'seconds since Jan 1 1900' value, but you can modify it as you see fit.
long ntp = 3490905600;
// Mark the epoch as UTC so the conversion below is unambiguous.
DateTime start = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc);
DateTime dt = start.AddSeconds(ntp);
Console.WriteLine(dt.ToString());               // UTC
Console.WriteLine(dt.ToLocalTime().ToString()); // local time

Related

Groovy not converting seconds to hours properly

I have an integer that is:
19045800
I tried different code:
def c = Calendar.instance
c.clear()
c.set(Calendar.SECOND, 19045800)
echo c.format('HH:mm:ss').toString()
String timestamp = new GregorianCalendar( 0, 0, 0, 0, 0, 19045800, 0 ).time.format( 'HH:mm:ss' )
echo timestamp
Both return 10:30:00
19045800 seconds is supposed to be over 5000 hours. What am I doing wrong?
I'm not sure what you are looking for, but if your requirement is to calculate the number of hours, minutes, and remaining seconds for a given number of seconds, the following code will work.
def timeInSeconds = 19045800
int hours = timeInSeconds/3600
int minutes = (timeInSeconds%3600)/60
int seconds = ((timeInSeconds%3600)%60)
println("Hours: " + hours)
println("Minutes: " + minutes)
println("Seconds: " + seconds)
You're attempting to use a Calendar, but what you're actually discussing is what Java calls a Duration – a length of time in a particular measurement unit.
import java.time.*
def dur = Duration.ofSeconds(19045800)
def hours = dur.toHours()
def minutes = dur.minusHours(hours).toMinutes()
def seconds = dur.minusHours(hours).minusMinutes(minutes).toSeconds()
println "hours = ${hours}"
println "minutes = ${minutes}"
println "seconds = ${seconds}"
prints:
hours = 5290
minutes = 30
seconds = 0

How to trim Full Year and Seconds in DateTime

I am getting a date as '2/12/2020 4:30:29 PM', but I need it with a trimmed (two-digit) year and no seconds, in a date-time format like
'12/02/20 04:30 PM'
What is the equivalent function in MEL for getting the above date-time format?
Thanks
I'm afraid there isn't an equivalent function in MEL to do this. BUT the good news is that you can create it! (Yes, MEL needs more time functions...)
I've created this global function that you can use (I assume 2 is the day and 12 is the month; if not, you can change the order):
separator = "/";
space = " ";
hourSeparator = ":";

$global:stringDatetimeToArray = function(datetime)
{
    array['year'] = substring(datetime, 6, 10);
    array['month'] = substring(datetime, 3, 5);
    array['day'] = substring(datetime, 0, 2);
    array['hour'] = substring(datetime, 11, 13);
    array['minute'] = substring(datetime, 14, 16);
    array['seconds'] = substring(datetime, 17, 19);
    array['meridian'] = substring(datetime, 20, 22);
    return array;
};

concat(array['month'], separator, array['day'], separator, array['year'], space, array['hour'], hourSeparator, array['minute'], space, array['meridian']);
My recommendation is to write a method that converts a timestamp into an array of these values; then you can handle these kinds of issues much more easily. You can see an example in this GitHub script.
import re

date_str = '2/12/2020 4:30:29 PM'
sub_str = re.search(r':\d+(.*?)\s', date_str).group(1)  # captures the ':29' (seconds) part
date_str = date_str.replace(sub_str, '')
print(date_str)
Output: 2/12/2020 4:30 PM

Convert binary (integer and fraction) from VHDL to decimal, negative value in C code

I have 14-bit data that is fed from an FPGA written in VHDL. The Nios II processor reads the 14-bit data from the FPGA and does some processing tasks; the Nios II system is programmed in C.
The 14-bit data can be positive, zero, or negative. In the Altera compiler I can only define the data as 8, 16, or 32 bits wide, so I define it as 16-bit data.
First, I need to check whether the data is negative; if it is, I need to pad the two MSBs with '1' so the system treats it as a negative value instead of a positive one.
Second, I need to compute the real value of this binary representation as a decimal value with BOTH integer and fractional parts.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I can convert a binary number (consisting of both an integer and a fractional part) to a decimal value.
To be specific, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]){
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;

    /* find the dot (if any) and the string length */
    for(bi = 0; binary[bi] != '\0'; bi++){
        if(binary[bi] == '.'){
            dot = bi;
        }
        len++;
    }
    if(dot == -1)
        dot = len;

    /* integer part */
    for(i = dot; i >= 0; i--){
        if (binary[i] == '1'){
            result += (double) pow(2, (dot-i-1));
        }
    }
    /* fractional part */
    for(i = dot; binary[i] != '\0'; i++){
        if (binary[i] == '1'){
            result += 1.0/(double) pow(2.0, (double)(i-dot));
        }
    }
    return result;
}

int main()
{
    char bin[]  = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";
    printf("%s -> %f\n", bin,  convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering whether this code can be used to check for a negative value. I tried it with the binary string 11111101.11 and it gives an output of 253.75...
I have two questions:
What modifications do I need to make in order to read a negative value?
I know that I can test the bit (as below) to check whether the MSB is 1; if it is, the value is negative...
if (data_14bit & 0x2000) // if true, it is a negative value
The issue is that, since a fractional part is involved (not only an integer part), I am a bit confused about whether the method still works...
If the binary number is not originally in string format, is there any way I can convert it to a string? The binary number is fed from an FPGA block written in VHDL: 14 bits, with the MSB as the sign bit, the following 6 bits as the integer magnitude, and the last 6 bits as the fractional magnitude. I need the decimal value in C code for the Altera Nios II processor.
OK, so I'm focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume the binary representation of your signed number is two's complement. I'm not really sure, based on your comments, that your input is in the same form as the one used by that algorithm.
First, pad the two MSBs to get a 16-bit representation:
data_16bit = (data_14bit & 0x2000) ? (data_14bit | 0xC000) : data_14bit;
If the value is positive it remains unchanged; if it is negative, this gives the correct two's complement representation on 16 bits.
For the fractional part, everything is the same as in the algorithm you mentioned in your question.
For the integer part, everything is the same except the treatment of the MSB.
For an unsigned number the MSB (i.e. bit[15]) represents pow(2, 15 - 6) (6 is the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2, 15 - 6), meaning the algorithm becomes:
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p   = (int)(p / 10);
    /* t == 9 is the sign bit of the 10-bit integer field, so its weight is negative */
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
Or, said differently, if you don't want the * operator:
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p   = (int)(p / 10);
    if (9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention, considering your format, when dot == 10 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes:
for (i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if (10 != dot || i)
        {
            result += (double)pow(2, (dot - i - 1));
        }
        else
        {
            // result -= (double)pow(2, (dot - i - 1));
            // Due to your number format, i == 0 and dot == 10, so:
            result -= 512;
        }
    }
}
WARNING: in Brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I'm not sure that algorithm is suited to your case.
I think this should work:
#include <stdint.h>

float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend 'in', since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;
    /* Convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware it may significantly increase the size of your application. If you don't need the float and want to keep the application small, you should write your own conversion function.

What is the format of the timestamp in InstallShield's ISString table?

I was recently trying to determine the answer to this question. The only post I was able to find on the topic was this old unanswered post on Flexera's website.
I wanted to know the answer to this question to incorporate in a tool for managing string translations. I already discovered the answer (my coworker and I spent the better half of our day trying to figure it out) but I thought I'd post the question/answer on Stack Overflow just in case someone else searches for it.
The answer is that the timestamp is a 32-bit integer with different bits representing different parts of the date.
Here's how it breaks down
Bits 1-5 : The Day of the Month [1-31] (end range could be 28-31 depending on month)
Bits 6-9 : The Month [1-12]
Bits 10-16: The Year after 1980 (only goes to year 2107) [0-127]
Bits 17-21: Seconds divided by 2 (only 5 bits, so only even seconds 0-58 can be represented)
Bits 22-27: Minutes [0-59]
Bits 28-32: Hours from 12 AM [0-23]
If the 32-bit integer is an invalid date it's evaluated to a default date Dec/30/1899 12:00 AM
Here is an example:
----------BINARY 32-bit Integer----------- | Decimal   | Date String
DOM   Month Year    Seconds*2 Min    Hour  |           |
00111 0111  0010000 00001     010000 00000 | 999295488 | Jul/07/1996 12:16 AM
  7     7     16       1        16     0   |           |
Here is some C# code written to convert between DateTime and the string representation of the ISString timestamp (Small Disclaimer: this code doesn't currently handle invalid timestamp input):
private static int bitsPerDOM = 5;
private static int bitsPerMonth = 4;
private static int bitsPerYear = 7;
private static int bitsPerEvenSecond = 5;
private static int bitsPerMinute = 6;
private static int bitsPerHour = 5;
private static int startYear = 1980;

public static string getISTimestamp(DateTime date)
{
    int[] shiftValues = { bitsPerDOM, bitsPerMonth, bitsPerYear, bitsPerEvenSecond, bitsPerMinute, bitsPerHour };
    int[] dateValues = { date.Day, date.Month, date.Year - startYear, date.Second / 2, date.Minute, date.Hour };
    int shift = 32;
    int dateInt = 0;
    for (int i = 0; i < dateValues.Length; i++)
    {
        shift -= shiftValues[i];
        dateInt |= (dateValues[i] << shift);
    }
    return dateInt.ToString();
}

public static DateTime getTimeFromISTimestampStr(string ISTimestampStr)
{
    int timestampInt = Int32.Parse(ISTimestampStr);
    int dom = getBits(timestampInt, 0, 4);
    int month = getBits(timestampInt, 5, 8);
    int year = startYear + getBits(timestampInt, 9, 15);
    int seconds = getBits(timestampInt, 16, 20) * 2;
    int minutes = getBits(timestampInt, 21, 26);
    int hours = getBits(timestampInt, 27, 31);
    return new DateTime(year, month, dom, hours, minutes, seconds);
}

private static int getBits(int n, int start, int end)
{
    // Clear the bits to the left of 'start' by shifting them out, then shift right.
    // The shifts are done on a uint so the sign bit is not propagated (an arithmetic
    // right shift on int would corrupt fields whose leading bit happens to be 1).
    uint bits = (uint)n << start;
    return (int)(bits >> (31 + start - end));
}
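For reference, here is a quick round trip of the example value from the table above (this usage snippet is mine, not part of the original code):
// Encode 7 Jul 1996, 00:16:02 and decode it again.
string ts = getISTimestamp(new DateTime(1996, 7, 7, 0, 16, 2));   // "999295488"
DateTime dt = getTimeFromISTimestampStr(ts);                      // 7 Jul 1996 12:16:02 AM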

What exactly does a Sample Rate of 44100 sample?

I'm using the FMOD library to extract PCM from an MP3. I get the whole 2-channel, 16-bit thing, and I also get that a sample rate of 44,100 Hz means 44,100 samples of "sound" per second. What I don't get is what the 16-bit value represents. I know how to plot coordinates on an x-y axis, but what am I plotting? The y axis represents time; what does the x axis represent? Sound level? Is that the same as amplitude? And how do I determine the different sounds that compose this value? I mean, how do I get a spectrum from a 16-bit number?
This may be a separate question, but it's actually what I really need answered: how do I get the amplitude at every 25 milliseconds? Do I take 44,100 values and divide by 40 (40 * 0.025 seconds = 1 sec)? That gives 1102.5 samples; so would I feed 1102 values into a black box that gives me the amplitude for that moment in time?
Edited the original post to add code I plan to test soon (note: I changed the frame length from 25 ms to 40 ms):
// 44100 / 25 frames = 1764 samples per frame -> 1764 * 2 channels * 2 bytes [16-bit sample] = 7056 bytes
private const int CHUNKSIZE = 7056;
uint bytesread = 0;
var squares = new double[CHUNKSIZE / 4];
const double scale = 1.0d / 32768.0d;
do
{
    result = sound.readData(data, CHUNKSIZE, ref read);
    Marshal.Copy(data, buffer, 0, CHUNKSIZE);

    // PCM samples are 16-bit little endian
    Array.Reverse(buffer);

    for (var i = 0; i < buffer.Length; i += 4)
    {
        var avg = scale * (Math.Abs((double)BitConverter.ToInt16(buffer, i)) + Math.Abs((double)BitConverter.ToInt16(buffer, i + 2))) / 2.0d;
        squares[i >> 2] = avg * avg;
    }
    var rmsAmplitude = ((int)(Math.Floor(Math.Sqrt(squares.Average()) * 32768.0d))).ToString("X2");

    fs.Write(buffer, 0, (int)read);
    bytesread += read;
    statusBar.Text = "writing " + bytesread + " bytes of " + length + " to output.raw";
} while (result == FMOD.RESULT.OK && read == CHUNKSIZE);
After loading the MP3, it seems my rmsAmplitude is in the range 3C00 to 4900. Have I done something wrong? I was expecting a wider spread.
Yes, a sample represents amplitude (at that point in time).
To get a spectrum, you typically convert it from the time domain to the frequency domain.
For your last question: multiple approaches are used; you may want the RMS.
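As a minimal sketch of the RMS idea (my own illustration, not from the question's code), assuming the samples have already been decoded into a short[] frame, e.g. 1764 samples for a 40 ms mono frame at 44.1 kHz:
// RMS amplitude of one frame of 16-bit PCM samples, normalized to [0, 1].
static double RmsAmplitude(short[] frame)
{
    double sumOfSquares = 0.0;
    foreach (short s in frame)
    {
        double v = s / 32768.0;   // scale the sample to [-1, 1)
        sumOfSquares += v * v;
    }
    return Math.Sqrt(sumOfSquares / frame.Length);
}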
Generally, the x axis is the time value and the y axis is the amplitude. To get the frequency content, you need to take the Fourier transform of the data (most likely using the Fast Fourier Transform [FFT] algorithm).
To use one of the simplest "sounds", let's assume you have a single-frequency tone with frequency f. This is represented (in the amplitude/time domain) as y = sin(2 * pi * f * x), where x is time in seconds.
If you convert that into the frequency domain, you just end up with a single component at frequency = f.
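For concreteness, here is a small sketch of that idea (my own illustration, assuming 16-bit mono at 44,100 samples per second): it generates one second of such a single-frequency tone, whose spectrum is a single component at f.
// One second of a pure tone at frequency f (Hz): 44,100 samples/s, 16-bit mono.
static short[] PureTone(double f, int sampleRate = 44100)
{
    var samples = new short[sampleRate];
    for (int n = 0; n < samples.Length; n++)
    {
        double t = (double)n / sampleRate;                            // time of sample n, in seconds
        samples[n] = (short)(Math.Sin(2 * Math.PI * f * t) * 32767);  // y = sin(2*pi*f*t), scaled to 16 bits
    }
    return samples;
}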
Each sample represents the voltage of the analog signal at a given time.
