How to validate local VM clock with NTP on Windows Azure?

NTP (Network Time Protocol) is the de-facto standard for synchronizing server clocks. I have already raised a question about the expected native clock accuracy on Windows Azure. Here comes a slightly different one: how can I validate the current clock's reliability with NTP? The catch is that UDP is not available on Windows Azure (only TCP), and there seems to be no TCP implementation of NTP available (although the discussion is nearly a decade old).
Any take?

Assuming that outgoing UDP packets are still blocked by Azure (I'm surprised/disappointed this is still the case!), maybe you could drop down to a lower-resolution TCP service such as TIME or DAYTIME - see descriptions of both at http://www.nist.gov/pml/div688/grp40/its.cfm. You would obviously need to measure how long your network call takes in order to be sure the answer coming back is sufficiently accurate for you.
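A minimal sketch of that TIME-over-TCP fallback in Python (per RFC 868, the server on port 37 returns a 4-byte big-endian count of seconds since 1900; time.nist.gov is one of the NIST servers described at the link above - treat this as illustrative, not production code):

```python
import datetime
import socket
import struct

# RFC 868 epoch: seconds are counted from midnight 1900-01-01 UTC.
RFC868_EPOCH = datetime.datetime(1900, 1, 1)


def rfc868_to_datetime(raw: bytes) -> datetime.datetime:
    """Convert the 4-byte big-endian seconds-since-1900 reply to a datetime."""
    (seconds,) = struct.unpack("!I", raw)
    return RFC868_EPOCH + datetime.timedelta(seconds=seconds)


def query_time_server(host="time.nist.gov", port=37, timeout=5.0):
    """Fetch the current time from a TCP TIME (RFC 868) server."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        raw = b""
        while len(raw) < 4:  # the reply is exactly 4 bytes, then the server closes
            chunk = s.recv(4 - len(raw))
            if not chunk:
                break
            raw += chunk
    return rfc868_to_datetime(raw)
```

You would compare the returned timestamp against the local clock, subtracting half the measured round-trip time as a rough network-delay correction.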

Joannes and Stuart: You are correct that Windows Azure roles (Web, Worker, and VM Roles) do not currently support hosting UDP endpoints. However, NTP support is already included by default on Windows Azure role VMs, configured by default to sync the clock against time.windows.com once a week (evidence here - search for "time service").
You can tweak a registry setting in a Startup Task if a weekly sync is not frequent enough.
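The registry value in question is typically SpecialPollInterval (in seconds) under the W32Time NtpClient provider. A hedged Startup Task sketch - the one-hour value is an arbitrary example, and you should verify the key exists on your OS image before relying on it:

```shell
REM Startup Task sketch (cmd): raise the NTP sync frequency to hourly.
REM SpecialPollInterval is in seconds; the workgroup default is 604800 (one week).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v SpecialPollInterval /t REG_DWORD /d 3600 /f
REM Restart the time service so the new interval takes effect.
net stop w32time && net start w32time
```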
HTH!

I'm a bit surprised by your answer about UDP, since I actually connect to an NTP server from my Azure web role to serve our JS client synchronization.
This is working fine...
Note that the Azure web role time is quite different from the NTP one (currently 30s ahead!). However, the NTP time is nearly the same as my local machine's, synchronized with time.microsoft.com:
{"network":"2013-07-16T18:18:25.9558581Z","server":"2013-07-16T18:18:52.5415999Z"}
Here is the code I use:
static uint SwapEndianness(ulong x)
{
    return (uint)(((x & 0x000000ff) << 24) +
                  ((x & 0x0000ff00) << 8) +
                  ((x & 0x00ff0000) >> 8) +
                  ((x & 0xff000000) >> 24));
}

// These three members were not shown in the original post; the values below
// are the standard ones (UDP port 123, NTP epoch 1900-01-01 UTC).
const int NTPPort = 123;
static readonly long _epocBaseTicks = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
static long _networkTimeDelta;

static DateTime Update(string server)
{
    // NTP message size is 48 bytes (RFC 2030, without the optional digest)
    var ntpData = new byte[48];
    // Setting the Leap Indicator, Version Number and Mode values
    ntpData[0] = 0x1B; // LI = 0 (no warning), VN = 3, Mode = 3 (client)
    var addresses = Dns.GetHostEntry(server).AddressList;
    // The UDP port number assigned to NTP is 123
    var ipEndPoint = new IPEndPoint(addresses[0], NTPPort);
    // NTP uses UDP
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    socket.ReceiveTimeout = 3000; // avoid blocking forever if the reply is lost
    socket.Connect(ipEndPoint);
    socket.Send(ntpData);
    DateTime l_now = DateTime.UtcNow;
    socket.Receive(ntpData);
    socket.Close();
    // Offset to the "Transmit Timestamp" field (time at which the reply
    // departed the server for the client, in 64-bit timestamp format)
    const byte serverReplyTime = 40;
    // Get the seconds part
    ulong intPart = BitConverter.ToUInt32(ntpData, serverReplyTime);
    // Get the seconds fraction
    ulong fractPart = BitConverter.ToUInt32(ntpData, serverReplyTime + 4);
    // Convert from big-endian to little-endian
    intPart = SwapEndianness(intPart);
    fractPart = SwapEndianness(fractPart);
    var milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);
    // **UTC** time
    var l_networkTime = (new DateTime(_epocBaseTicks, DateTimeKind.Utc)).AddMilliseconds((long)milliseconds);
    _networkTimeDelta = l_networkTime.Ticks - l_now.Ticks;
    return l_networkTime;
}
Hope this helps.

Related

Implementing reliability in UDP (python)

I have written the code for transferring an audio file from client to server using UDP (Python).
Now I am required to introduce reliability into the UDP code. The instructions are given as:
"You will be required to implement following to make UDP reliable:
(a) Sequence and acknowledge numbers
(b) Re-transmission (selective repeat)
(c) Window size of 5-10 UDP segments (stop n wait)
(d) Re ordering on receiver side "
The sender (client) code is given below:
from socket import *
import time

# Assigning server IP and server port
serverName = "127.0.0.1"
serverPort = 5000
# Setting buffer length
buffer_length = 500
# Assigning the audio file a name
my_audio_file = r"C:\Users\mali.bee17seecs\PycharmProjects\TestProject\Aye_Rah-e-Haq_Ke_Shaheedo.mp3"
clientSocket = socket(AF_INET, SOCK_DGRAM)
# Opening the audio file
f = open(my_audio_file, "rb")
# Reading the buffer length in data
data = f.read(buffer_length)
# While loop for the transfer of file
while data:
    if clientSocket.sendto(data, (serverName, serverPort)):
        data = f.read(buffer_length)
        time.sleep(0.02)  # waiting for 0.02 seconds
clientSocket.close()
f.close()
print("File has been Transferred")
The receiver (server) code is given below:
from socket import *
import select

# Assigning server IP and server port
serverName = "127.0.0.1"
serverPort = 5000
# Setting timeout
timeout = 3
serverSocket = socket(AF_INET, SOCK_DGRAM)
serverSocket.bind((serverName, serverPort))
# While loop for the receiving of file
while True:
    data, serverAddress = serverSocket.recvfrom(1024)
    if data:
        file = open(r"C:\Users\mali.bee17seecs\PycharmProjects\TestProject\Aye_Rah-e-Haq_Ke_Shaheedo.mp3", "wb")
        while True:
            ready = select.select([serverSocket], [], [], timeout)
            if ready[0]:
                data, serverAddress = serverSocket.recvfrom(500)
                file.write(data)
            else:
                file.close()
                print("File has been Received")
                break
Before answering each requirement, note that we build a reliable UDP by prepending some control information to the real content, which you can think of as an application-layer header. We use it for control and bookkeeping, much like the TCP header does at the transport layer. It may look like this:
struct Head {
    int seq;
    int size;
};
(a) Sequence and acknowledge numbers
If you're familiar with TCP, this is not hard. You set seq, and when the other side receives a segment, its controller checks the number to decide whether (b) or (d) is needed.
(b) Re-transmission (selective repeat) & (d) Reordering on receiver side
These are straightforward to implement: use a GBN/ARQ/SACK-style algorithm for retransmission, and something simple like sorting by sequence number for reordering.
(c) Window size of 5-10 UDP segments (stop and wait)
This part needs flow control similar to what TCP does. It can be really complex or quite simple, depending on how far you want to go.
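As an illustration (not the asker's full assignment), here is a minimal Python sketch of that header idea: a sequence number and size packed in front of each payload, plus receiver-side reordering. The field layout mirrors the struct Head above; the function names are hypothetical:

```python
import struct

# Two unsigned 32-bit fields in network byte order: seq, size.
HEADER = struct.Struct("!II")


def make_segment(seq: int, payload: bytes) -> bytes:
    """Sender side: prepend the application-layer header to the payload."""
    return HEADER.pack(seq, len(payload)) + payload


def parse_segment(segment: bytes):
    """Receiver side: split a segment back into (seq, payload)."""
    seq, size = HEADER.unpack_from(segment)
    return seq, segment[HEADER.size:HEADER.size + size]


def reorder(segments):
    """Receiver side: sort out-of-order segments by sequence number."""
    parsed = [parse_segment(s) for s in segments]
    return b"".join(payload for _, payload in sorted(parsed, key=lambda p: p[0]))
```

The same header gives you the hooks for (a) and (b): acknowledgments echo seq back, and any seq the receiver reports missing gets retransmitted.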

Timing issue in windows rather than linux

I have the following function from a colleague who previously worked for the company; the comments are self-explanatory. The problem is that I'm now using Windows, and there are issues with synchronization with the device.
Would someone know a solution on Windows for syncing with a device?
def sync_time(self):
    """Sync time on SmartScan."""
    # On a SmartScan time can be set only by the precision of seconds
    # So we need to wait for the next full second until we can send
    # the packet on its way to the scanner.
    # It's not perfect, but the error should be more or less constant.
    message = Maint()
    message.state = message.OP_NO_CHANGE
    now = datetime.datetime.utcnow()
    epoch = datetime.datetime(1970, 1, 1)
    # int and datetime objects
    seconds = int((now - epoch).total_seconds()) + 1  # + sync second
    utctime = datetime.datetime.utcfromtimestamp(seconds)
    # wait until next full second
    # works only on Linux with good accuracy
    # Windows needs another approach
    time.sleep((utctime - datetime.datetime.utcnow()).total_seconds())
    command = MaintRfc()
    command.command = command.SET_CLOCK
    command.data = (seconds, )
    message.add_message(command)
    self._handler.sendto(message)
    LOG.debug("Time set to: %d = %s", seconds, utctime)
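Not an official fix, but a common workaround sketch for Windows' coarse sleep granularity (the default timer resolution is roughly 15.6 ms): sleep for most of the interval, then busy-wait the last stretch. The 20 ms margin below is an arbitrary choice:

```python
import datetime
import time


def wait_until_next_full_second():
    """Block until the next full UTC second; returns the second waited for."""
    now = datetime.datetime.utcnow()
    target = (now + datetime.timedelta(seconds=1)).replace(microsecond=0)
    remaining = (target - datetime.datetime.utcnow()).total_seconds()
    if remaining > 0.02:
        # Coarse sleep covers most of the wait; leave a margin for the
        # timer's granularity.
        time.sleep(remaining - 0.02)
    while datetime.datetime.utcnow() < target:
        pass  # short busy-wait burns CPU briefly but lands close to the tick
    return target
```

The busy-wait costs at most ~20 ms of CPU per call, which is usually acceptable for a once-per-sync operation like this.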

How to check if PySerial port.write() has written anything?

I have posted several questions regarding a similar problem, here and here, but after thinking and troubleshooting over the weekend, I'd like to explore more possible causes, but lack the knowledge.
How should I check if my current code is writing anything to the sensor I have connected via USB-RS232? Here is my code.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=20, bytesize=8, rtscts=1, dsrdtr=1)
f_w = open('/home/ryan/python_serial_output.txt', 'r+')
f_o = open('/home/ryan/python_serial_parse.txt', 'w')
port.send_break()
sys_reply = port.read(100000)
sys_reply_str = sys_reply.decode('cp437')
print(sys_reply_str)
sys_reply_str_haha = sys_reply_str.replace("\r", "")
sys_reply_str_haha = sys_reply_str_haha.replace("\n", "")
i = list(sys_reply_str_haha)
if str(i[-1]) == '>':
    port.reset_input_buffer()
    print("ip_b_reset")
    port.reset_output_buffer()
    print("op_b_reset")
    print(port.writable())
    ip = 'CR1'
    ip_en = ip.encode('cp437')
    port.write(ip_en)
    read_syscheck = port.read(1000)
    read_syscheck_str = read_syscheck.decode('cp437')
    print(read_syscheck_str)
Using the same decoding format, I tried writing to the port and should receive back a reply:
>CR1
*[Parameters set to FACTORY defaults]*
Instead, I got
Sensor 2009
All rights reserved.
Firmware Version: 34.11
>
ip_b_reset
op_b_reset
True
CR1
7F7FD30000000030C6A87F9978B9302....
and that is why I conclude I might not be writing to the sensor.
PS: The data stream 7F7FD30000000030C6A87F9978B9302.... is always present and appears when the read times out. Hence I am unsure if this is the start of an actual reading, or just a random readout.
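One direct check: pySerial's write() returns the number of bytes actually written, so you can compare that against the payload length. Many RS-232 instruments also expect a command terminator such as '\r' - the terminator here is an assumption worth testing, and build_payload / send_command are hypothetical helper names, not part of the asker's code:

```python
def build_payload(command: str, terminator: str = "\r") -> bytes:
    """Encode a command the way the asker does (cp437), plus a terminator.

    The '\r' terminator is a guess - check the sensor's manual.
    """
    return (command + terminator).encode("cp437")


def send_command(port, command: str) -> int:
    """Write a command and verify the byte count reported by write()."""
    payload = build_payload(command)
    written = port.write(payload)  # pySerial returns the number of bytes written
    port.flush()  # block until the OS transmit buffer drains
    if written != len(payload):
        raise IOError("short write to serial port")
    return written
```

If write() reports the full byte count and flush() returns, the bytes left the OS buffer; a wrong or missing reply then points at the command format (terminator, case, echo settings) rather than the write itself.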

Merged Socket->recv with perl on Linux

Sorry for the bad English; it is not my mother tongue.
I am new to Perl programming and have been facing a tedious problem for some hours now.
I have coded a simple client-server using IO::Socket::INET. It works flawlessly on Windows, but is broken on Linux.
On Linux, the first recv gets both of the server's messages, and therefore the second one waits endlessly for a communication.
Running the command "perl -version" gives me this result on Windows:
This is perl 5, version 20, subversion 2 (v5.20.2) built for
MSWin32-x64-multi-t hread (with 1 registered patch, see perl -V for
more detail)
And on Linux :
This is perl 5, version 20, subversion 2 (v5.20.2) built for
x86_64-linux-gnu-thread-multi (with 42 registered patches, see perl -V
for more detail)
Here is an example of a server:
use IO::Socket;

my $socket = IO::Socket::INET->new( Proto     => "tcp",
                                    LocalPort => 2559,
                                    Listen    => SOMAXCONN,
                                    Reuse     => 1);
while(1)
{
    print "Waiting for a client\n";
    my $client = $socket->accept();
    $client->send("Hello, please connect yourself");
    $client->send("Username:");
    $client->recv(my $username, 1024);
    $client->send("Password:");
    $client->recv(my $cipheredpassword, 1024);
    $client->send("Thank you, Goodbye.");
    $client->close();
    print "Connection closed\n";
}
And here is an example of a client:
use IO::Socket;
use Digest::MD5 qw(md5_hex);

my $username = "";
my $password = "";
my $server = IO::Socket::INET->new( Proto    => "tcp",
                                    PeerAddr => "localhost",
                                    PeerPort => 2559);
# Picks up both $firstServerMessage and
# $serverAskUsernameMessage on Linux
$server->recv(my $firstServerMessage, 1024);
print "$firstServerMessage\n";
# Hangs on Linux
$server->recv(my $serverAskUsernameMessage, 1024);
while($username eq "")
{
    print "$serverAskUsernameMessage\n";
    chomp($username = <STDIN>);
}
$server->send($username);
$server->recv(my $serverAskPasswordMessage, 1024);
while($password eq "")
{
    print "$serverAskPasswordMessage\n";
    chomp($password = <STDIN>);
}
my $hashedPassword = md5_hex($password);
$server->send($hashedPassword);
$server->recv(my $lastServerMessage, 1024);
print $lastServerMessage;
I know that the easy solution to this problem would be to avoid having multiple ->recv calls in a row, but I'm also curious to know why it is not working on Linux.
I have tried to use ->flush and ->autoflush(1), without success.
Your help and knowledge would be appreciated,
L.C.
This problem has nothing to do with the choice of operating system, or indeed the language. The problem relates to how you are using TCP.
A TCP stream is just that - a stream of bytes. It does not matter to TCP how you write those bytes - you could send fifty 1-byte chunks, one 50-byte chunk, or anything in between. The TCP stream simply represents the bytes, without message boundaries. It's much the same as file IO - a file on disk doesn't remember the distinction between write calls, only the sum total of bytes that were transferred.
So in your server when you do
$client->send("Hello, please connect yourself");
$client->send("Username:");
It could all get merged into one segment over the wire, and will arrive in one go at the other end. This is why any TCP-based protocol provides some form of framing - be it linefeeds, or message headers that declare the size of the body, or whatever. Some way that the receiver can pull the stream apart again.
For example, you might decide to use "\n" as a message boundary. You can then send using
$client->send("Hello, please connect yourself\n");
$client->send("Username:\n");
and the receiver can use the regular readline to read these lines back out again.
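The same framing idea sketched in Python, for anyone wanting to experiment outside Perl: the server's two sends may well arrive merged into one TCP segment, but newline framing lets the receiver split them back apart. This is a loopback demo with the same (hypothetical) greeting messages as above:

```python
import socket
import threading


def server(listener):
    conn, _ = listener.accept()
    # Two separate sends - TCP is free to merge them on the wire.
    conn.sendall(b"Hello, please connect yourself\n")
    conn.sendall(b"Username:\n")
    conn.close()


# Bind to an ephemeral port on loopback and serve one connection.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
reader = client.makefile("rb")
# Reading line by line recovers the message boundaries regardless of how
# the bytes were grouped into TCP segments.
messages = [line.rstrip(b"\n") for line in reader]
client.close()
```

After this runs, messages holds the two greetings as separate items even if both arrived in a single recv-sized chunk.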

Retrieving video/audio duration using WindowsAPICodePack and Shell

I have the following code to retrieve the duration of a video uploaded:
HttpPostedFileBase postedFileCopy = postedFile;
postedFileCopy.InputStream.Position = 0;
Stream stream = postedFile.InputStream;
LocalResource tempDirectory = RoleEnvironment.GetLocalResource("TempZipDirectory");
postedFile.SaveAs(tempDirectory.RootPath + @"\" + postedFile.FileName);
ShellFile so = ShellFile.FromFilePath(tempDirectory.RootPath + @"\" + postedFile.FileName);
string durationString;
double nanoseconds;
double.TryParse(so.Properties.System.Media.Duration.Value.ToString(), out nanoseconds);
if (nanoseconds > 0)
{
    // The duration property is in 100-ns units, so divide by 1e7 for seconds
    int totalSeconds = (int)Math.Round((nanoseconds / 10000000), 0);
    int seconds = totalSeconds % 60;
    int minutes = (totalSeconds / 60) % 60;
    int hour = totalSeconds / (60 * 60);
    durationString = "" + hour + ":" + minutes + ":" + seconds;
}
else
{
    System.Diagnostics.EventLog.WriteEntry("Application", "BLANK DURATION STRING", System.Diagnostics.EventLogEntryType.Error);
    durationString = "00:00:00";
}
This works as expected on localhost, but when deployed to Azure it does not seem to be able to retrieve the details of the file. The
postedFile.SaveAs(tempDirectory.RootPath + @"\" + postedFile.FileName);
saves the upload to the directory so I can grab these details, but no matter what I try I can't seem to get the nanoseconds returned when running on Azure. This is a deployed MVC application and the temp directory is stored on the C:\ drive of the server.
The code you refer to uses native (Shell) functions. Because Azure Web Sites is a high-density shared environment, your code runs in non-full-trust mode. Calls to native functions are restricted in Azure Web Sites, even if you scale to reserved-mode instances.
UPDATE
The only way to get your application executed under FULL TRUST is to use a Web Role (for web projects) or a Worker Role (for background tasks).
Read more about cloud services here.
