Why is calling times.cpuTime() slow in Nim? - nim-lang

In C++, std::chrono::steady_clock::now().time_since_epoch().count() takes about 20 ns on my machine, which is close to the rdtsc instruction. But in Nim, times.cpuTime (which calls clock_gettime on Linux) takes about 350 ns. Does anybody know why?
import std/[algorithm, strformat, times]

proc toNanos(sec: float): int64 =
  let SecInNano = 1000000000f64
  return (sec * SecInNano).int64

proc testTimes() =
  let N = 1000000
  var ts: seq[float]
  ts.setLen(N + 1)
  for i in 0..N:
    ts[i] = cpuTime()
  var dur = toNanos((ts[N] - ts[0]) / N.float)
  for i in 0..<N:
    ts[i] = ts[i+1] - ts[i]
  discard ts.pop
  ts.sort()
  echo fmt"---------- latency of invoking cpuTime (ns): nSamples: ", N, " ----------------"
  echo fmt"   Avg   Min   50%   90%   Max"
  echo fmt"   {dur}   {ts[0].toNanos}   {ts[N div 2].toNanos}   {ts[int(N.float*0.9)].toNanos}   {ts[N-1].toNanos}"

I think the reason is that std::chrono::steady_clock::now().time_since_epoch().count() returns an int, whereas times.cpuTime() converts the result to a float. Using monotimes.getMonoTime(), which returns an int, I get times of about 40 ns.
import std/[algorithm, strformat, monotimes]

proc testTimes() =
  const N = 1000000
  var ts = newSeq[int64](N + 1)
  for i in 0..N:
    ts[i] = getMonoTime().ticks
  var dur = int(float(ts[N] - ts[0]) / float(N))
  for i in 0..<N:
    ts[i] = ts[i+1] - ts[i]
  discard ts.pop
  ts.sort()
  echo fmt"---------- latency of invoking getMonoTime (ns): nSamples: ", N, " ----------------"
  echo fmt"   Avg   Min   50%   90%   Max"
  echo fmt"   {dur}   {ts[0]}   {ts[N div 2]}   {ts[int(N.float*0.9)]}   {ts[N-1]}"
testTimes()
BTW: next time please post a complete runnable example; it simplifies testing.
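For comparison, the same back-to-back-call benchmark can be sketched in Python (my own illustration, not from the thread; time.perf_counter_ns returns an integer nanosecond count, avoiding the float conversion discussed above):

```python
import time

def timer_overhead(n: int = 1_000_000) -> float:
    """Estimate the average cost of one perf_counter_ns() call
    by taking n + 1 back-to-back readings and averaging the deltas."""
    samples = [time.perf_counter_ns() for _ in range(n + 1)]
    return (samples[-1] - samples[0]) / n  # average ns per call

print(f"avg perf_counter_ns latency: {timer_overhead():.1f} ns")
```

The list-comprehension overhead inflates the figure slightly, but the technique mirrors the Nim benchmark: sample the clock repeatedly and divide the total span by the number of intervals.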

Related

Groovy not converting seconds to hours properly

I have an integer that is:
19045800
I tried different code:
def c = Calendar.instance
c.clear()
c.set(Calendar.SECOND, 19045800)
echo c.format('HH:mm:ss').toString()
String timestamp = new GregorianCalendar( 0, 0, 0, 0, 0, 19045800, 0 ).time.format( 'HH:mm:ss' )
echo timestamp
Both return 10:30:00
19045800 seconds is supposed to be over 5000 hours. What am I doing wrong?
I'm not sure what you are looking for, but if your requirement is to calculate the number of hours, minutes, and the remaining seconds for a given number of seconds, the following code will work.
def timeInSeconds = 19045800
int hours = timeInSeconds/3600
int minutes = (timeInSeconds%3600)/60
int seconds = ((timeInSeconds%3600)%60)
println("Hours: " + hours)
println("Minutes: " + minutes)
println("Seconds: " + seconds)
You're attempting to use a Calendar, but what you're actually dealing with is what Java calls a Duration: a length of time in a particular measurement unit. A Calendar normalizes the overflow into days (and months, and years), so formatting it as HH:mm:ss shows only the remainder modulo 24 hours; 5290.5 hours mod 24 is 10.5 hours, which is why you see 10:30:00.
import java.time.*
def dur = Duration.ofSeconds(19045800)
def hours = dur.toHours()
def minutes = dur.minusHours(hours).toMinutes()
def seconds = dur.minusHours(hours).minusMinutes(minutes).toSeconds()
println "hours = ${hours}"
println "minutes = ${minutes}"
println "seconds = ${seconds}"
prints:
hours = 5290
minutes = 30
seconds = 0
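The same hours/minutes/seconds split can also be sketched with integer division (my own illustration, not part of the original answer):

```python
def split_seconds(total: int) -> tuple[int, int, int]:
    """Split a duration in seconds into (hours, minutes, seconds)."""
    hours, rem = divmod(total, 3600)   # 3600 seconds per hour
    minutes, seconds = divmod(rem, 60)
    return hours, minutes, seconds

print(split_seconds(19045800))  # → (5290, 30, 0)
```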

What is the impact of a function on execution speed?

Fairly new to Python, and working on an application where speed is critical - essentially a race to book reservations online as close to 7 AM as possible. I was trying to understand the impact a function call has on execution speed, so I developed a very simple test: count to 10,000,000 and record start and end times, using both inline code and a called function. However, the result I get says the function takes about 56% of the time the inline code takes. I must really be missing something. Can anyone explain this, please?
Code:
import time

def timeit():
    temp = 0
    while temp < 10000000:
        temp += 1
    return

time1 = time.time()
temp = 0
while temp < 10000000:
    temp += 1
time2 = time.time()
timeit()
time3 = time.time()
temp = 0
while temp < 10000000:
    temp += 1
time4 = time.time()
timeit()
time5 = time.time()
print("direct took " + "{:.16f}".format(time2 - time1) + " seconds")
print("indirect took " + "{:.16f}".format(time3 - time2) + " seconds")
print("2nd direct took " + "{:.16f}".format(time4 - time3) + " seconds")
print("2nd indirect took " + "{:.16f}".format(time5 - time4) + " seconds")
results:
direct took 1.6094279289245605 seconds
indirect took 0.9220039844512939 seconds
2nd direct took 1.5781939029693604 seconds
2nd indirect took 0.9375154972076416 seconds
I apologize for whatever silly thing I must have done here.
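A likely explanation (my own, not stated in the thread): in CPython, names inside a function are compiled to fast array-indexed local lookups, while names at module level are dictionary-based lookups, so the same loop usually runs faster inside a function. A hedged sketch:

```python
import timeit

def count_local():
    # temp is a local variable here: accessed via fast LOAD_FAST/STORE_FAST
    temp = 0
    while temp < 1_000_000:
        temp += 1
    return temp

# The same loop compiled at module scope uses dictionary-based name lookups,
# so we time it via exec of a module-level snippet for comparison.
inline_src = "temp = 0\nwhile temp < 1_000_000:\n    temp += 1\n"

func_time = min(timeit.repeat(count_local, number=1, repeat=5))
inline_time = min(timeit.repeat(lambda: exec(inline_src, {}), number=1, repeat=5))
print(f"function: {func_time:.4f}s  inline-style: {inline_time:.4f}s")
```

On typical CPython builds the function version comes out ahead, consistent with the numbers in the question.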

Fortran error: Program received signal SIGSEGV: Segmentation fault - invalid memory reference

I'm trying to run an ocean temperature model for 25 years using the explicit method (parabolic differential equation).
If I run it for a year (a = 3600) or five years (a = 18000) it works fine.
However, when I run it for 25 years (a = 90000) it crashes.
a is the number of time steps used, and a year is taken to be 360 days. The time step is 4320 seconds (delta_t = 4320.).
Here is my code:
program task
  ! declare the variables
  implicit none
  ! initial conditions
  real, parameter :: initial_temp = 4.
  ! vertical resolution (delta_z) [m], vertical diffusion coefficient (av) [m^2/s], time step delta_t [s]
  real, parameter :: delta_z = 2., av = 2.0E-04, delta_t = 4320.
  ! gamma
  real, parameter :: y = (av * delta_t) / (delta_z**2)
  ! horizontal resolution (time): total points
  integer, parameter :: a = 18000
  ! vertical resolution
  integer, parameter :: k = 101
  ! pi
  real, parameter :: pi = 4.0*atan(1.0)
  ! t = time [s], temp_a = temperature at upper boundary [°C]
  real, dimension(0:a) :: t
  real, dimension(0:a) :: temp_a
  real, dimension(0:a,0:k) :: temp
  integer :: i
  integer :: n
  integer :: j

  t(0) = 0
  do i = 1,a
    t(i) = t(i-1) + delta_t
  end do

  ! temperature of upper boundary
  temp_a = 12. + 6. * sin((2. * t * pi) / 31104000.)
  temp(:,0) = temp_a(:)
  temp(0,1:k) = 4.

  ! vertical resolution
  do j = 1,a
    do n = 1,k
      temp(j,n) = temp(j-1,n) + (y * (temp(j-1,n+1) - (2. * temp(j-1,n)) + temp(j-1,n-1)))
    end do
    temp(:,101) = temp(:,100)
  end do

  print *, temp(:,:)
end program task
The variable a is on line 11 (integer,parameter :: a = 18000).
As said, a = 18000 works; a = 90000 doesn't.
At 90000 I get:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
RUN FAILED (exit value 1, total time: 15s)
I'm using Fortran on Windows 8.1 with NetBeans and Cygwin (which includes gfortran).
I'm not sure whether this problem is caused by a bad compiler or something else.
Does anybody have any ideas? It would help me a lot!
Regards
Take a look at the following lines from your code:
integer,parameter :: k = 101
real,dimension(0:a,0:k) :: temp
integer :: n
do n = 1,k
  temp(j,n) = temp(j-1,n) + (y * (temp(j-1,n+1) - (2. * temp(j-1,n)) + temp(j-1,n-1)))
end do
Your array temp has second-dimension bounds 0:101. You loop n from 1 to 101, and in the iteration n = 101 you access temp(j-1,102), which is out of bounds.
This means you are writing to whatever memory lies beyond temp. While that makes your program incorrect in every case, it only causes a crash sometimes, depending on various other things. Increasing a matters because Fortran arrays are column-major: the first index is contiguous in memory, so stepping the second index jumps a+1 elements at a time. As a grows, the out-of-bounds access to the second dimension lands further past the end of temp, changing what your invalid access overwrites.
After your loop you set temp(:,101) = temp(:,100) meaning there is no need to calculate temp(:,101) in the above loop, so you can change its loop bounds from
do n = 1,k
to
do n = 1, k-1
which will fix the out of bounds access on temp.
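The corrected interior update can be sketched in Python/NumPy (an illustrative translation with my own parameter choices, not the original Fortran): slice arithmetic makes the valid index range explicit, so the last interior point never reads past the array.

```python
import numpy as np

# Illustrative parameters mirroring the Fortran code
delta_z, av, delta_t = 2.0, 2.0e-4, 4320.0
y = av * delta_t / delta_z**2   # diffusion number (gamma), 0.216 < 0.5 so stable
k = 101                         # depth levels 0..k
steps = 100                     # a small number of time steps for the sketch

temp = np.full(k + 1, 4.0)      # initial column temperature
for j in range(steps):
    temp[0] = 12.0              # upper boundary (held constant here for brevity)
    # update interior points 1..k-1 only; index k+1 is never touched
    temp[1:k] = temp[1:k] + y * (temp[2:k+1] - 2.0 * temp[1:k] + temp[0:k-1])
    temp[k] = temp[k-1]         # zero-gradient bottom boundary

print(temp[:3])
```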

Microsoft WAV extracting frequency from sample

Okay, so I'm trying to play a WAV file in Lua. I've gotten as far as reading the header information, but I'm getting stuck on playing the actual song. The function I'm using is Speaker.start(Channel, Frequency). I'm new to doing anything like this in Lua from a raw file, and I don't know what the sample data represents. My question is, how would I get the Channel and Frequency to play it? Is it even possible?
--(( Variables ))--
local FileName = "song.wav"
local File = fs.open(FileName, "rb")
local ToHex = "%X"
local Speaker = peripheral.wrap("back")
--(( Functions ))--
-- reads Size bytes and returns them as a raw string (used for ASCII chunk IDs)
local function BigEndian(Size)
    local Str = ""
    for Count = 1, Size do
        Str = Str .. string.char(File.read())
    end
    return Str
end
-- reads Size little-endian bytes and returns the value as a HEX string
local function LittleEndian(Size)
    local T = {}
    for Count = 1, Size do
        table.insert(T, ToHex:format(File.read()))
    end
    local Str = ""
    for Count = #T, 1, -1 do
        Str = Str .. T[Count]
    end
    return Str
end
--(( Main program ))--
-- Variables
local ChunkID = ""
local ChunkSize = 0
local Format = ""
local Subchunk1ID = ""
local Subchunk1Size = 0
local AudioFormat = 0
local NumChannels = 0
local SampleRate = 0
local ByteRate = 0
local BlockAlign = 0
local BitsPerSample = 0
local Subchunk2ID = ""
local Subchunk2Size = 0
local ExtraPeramSize = 0
-- RIFF chunk
ChunkID = BigEndian(4)
ChunkSize = tonumber(LittleEndian(4), 16) + 8
Format = BigEndian(4)
-- Subchunk 1
Subchunk1ID = BigEndian(4)
Subchunk1Size = tonumber(LittleEndian(4), 16)
AudioFormat = tonumber(LittleEndian(2), 16)
NumChannels = tonumber(LittleEndian(2), 16)
SampleRate = tonumber(LittleEndian(4), 16)
ByteRate = tonumber(LittleEndian(4), 16)
BlockAlign = tonumber(LittleEndian(2), 16)
BitsPerSample = tonumber(LittleEndian(2), 16)
ExtraPeramSize = tonumber(LittleEndian(2), 16)
-- Subchunk 2
Subchunk2ID = BigEndian(4)
Subchunk2Size = tonumber(LittleEndian(4), 16)
-- Printing
print("RIFF chunk")
print("- ChunkID: " .. ChunkID)
print("- ChunkSize: " .. ChunkSize)
print("- Format: " .. Format)
print("Subchunk 1")
print("- ID: " .. Subchunk1ID)
print("- Size: " .. Subchunk1Size)
print("- Audio Format: " .. AudioFormat)
print("- NumChannels: " .. NumChannels)
print("- Sample Rate: " .. SampleRate)
print("- Byte Rate: " .. ByteRate)
print("- Block Align: " .. BlockAlign)
print("- BitsPerSample: " .. BitsPerSample)
print("Subchunk 2")
print("- ID: " .. Subchunk2ID)
print("- Size: ".. Subchunk2Size)
local Done = 0
while true do
    Done = Done + 1 -- Left Right Left Right
    --local Sample = {{tonumber(LittleEndian(1),16), tonumber(LittleEndian(1),16)}, {tonumber(LittleEndian(1),16), tonumber(LittleEndian(1),16)}}
    local Left = tonumber(LittleEndian(2),16) - 32768
    local Right = tonumber(LittleEndian(2),16)
    local Average = (Left + Right)/2
    Speaker.start(0, Average)
    sleep(0)
    -- Left channel, Right channel
    if Done == 5000 then break end
end
Speaker.stop(0)
--(( EOF ))--
WAV files store PCM sample data, which is in the time domain. Many times a second (44,100 times for CD audio), a sample is taken of the pressure level at that point in time and quantised to fit a given bit depth. When you play back these samples, they approximate the original waveform.
Image from Wikipedia PCM article
What you are asking for is a sample in the frequency domain. Samples here are taken at much larger intervals (around 5-10ms apart) and contain the spectrum information that makes up the sound. That is, you may have 2048 "buckets" measuring the amount of sound at a particular frequency for that chunk of time. This is measured by doing a Fourier transform (commonly implemented as FFT on computers) of the original time domain sampled waveform.
Basically, you can't play back WAV files using the API you're using now, as the formats are fundamentally different.
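The time-domain/frequency-domain distinction can be sketched with NumPy (my own illustration with made-up parameters): an FFT over one chunk of PCM samples yields the per-frequency "buckets" described above, from which a dominant frequency could be extracted for a tone-based API like Speaker.start.

```python
import numpy as np

sample_rate = 44100                      # CD-quality PCM, samples per second
t = np.arange(2048) / sample_rate        # one 2048-sample analysis chunk
chunk = np.sin(2 * np.pi * 440.0 * t)    # a 440 Hz test tone in the time domain

spectrum = np.abs(np.fft.rfft(chunk))    # magnitude per frequency bucket
freqs = np.fft.rfftfreq(len(chunk), d=1 / sample_rate)
dominant = freqs[np.argmax(spectrum)]    # centre of the strongest bucket

print(f"dominant frequency: {dominant:.1f} Hz")
```

The result lands within one bin width (~21.5 Hz here) of 440 Hz; real music has many simultaneous frequencies per chunk, which is why a single-tone speaker API can only ever approximate it.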

Weird behavior with process.hrtime()

If I execute this code:
//brute force
console.log('------------------');
console.log('Brute Force Method');
console.log('------------------');
var aTimer = process.hrtime();
var sum = 0;
for (var x = 3; x < 1000; x++) {
    if (x % 3 === 0 || x % 5 === 0) {
        sum += x;
    }
}
console.log('The sum of them is: '+ sum);
//console.log(aTimer);
var aTimerDiff = process.hrtime(aTimer);
console.log('Benchmark took %d nanoseconds.', aTimerDiff[0] * 1e9 + aTimerDiff[1]);
//arithmetic method
console.log('------------------');
console.log('Arithmetic Method');
console.log('------------------');
var bTimer = process.hrtime();
var term3 = parseInt(999/3);
var threes = 3 * term3 * (term3 + 1) / 2;
var term5 = parseInt(999/5);
var fives = 5 * term5 * (term5 + 1) / 2;
var term15 = parseInt(999/15);
var fifteens = 15 * term15 * (term15 + 1) / 2;
console.log('The sum of them is: '+ (threes + fives - fifteens));
//console.log(bTimer);
var bTimerDiff = process.hrtime(bTimer);
console.log('Benchmark took %d nanoseconds.', bTimerDiff[0] * 1e9 + bTimerDiff[1]);
// totals in nanoseconds for the comparison below
var aTimerNano = aTimerDiff[0] * 1e9 + aTimerDiff[1];
var bTimerNano = bTimerDiff[0] * 1e9 + bTimerDiff[1];
console.log('------------------');
console.log('Which is Faster');
console.log('------------------');
if (bTimerNano > aTimerNano) {
    console.log('A is %d nanoseconds faster than B.', bTimerNano - aTimerNano);
}
else {
    console.log('B is %d nanoseconds faster than A.', aTimerNano - bTimerNano);
}
The result is:
------------------
Brute Force Method
------------------
The sum of them is: 233168
Benchmark took 64539 nanoseconds.
------------------
Arithmetic Method
------------------
The sum of them is: 233168
Benchmark took 155719 nanoseconds.
------------------
Which is Faster
------------------
A is 91180 nanoseconds faster than B.
That can't be right... the arithmetic method should be faster. So I uncommented these lines to take a look:
console.log(aTimer);
console.log(bTimer);
And the results look accurate now.
------------------
Brute Force Method
------------------
The sum of them is: 233168
[ 1697962, 721676140 ]
Benchmark took 1642444 nanoseconds.
------------------
Arithmetic Method
------------------
The sum of them is: 233168
[ 1697962, 723573374 ]
Benchmark took 284646 nanoseconds.
------------------
Which is Faster
------------------
B is 1357798 nanoseconds faster than A.
Then I comment those lines out again and I get the same funky results.
What could cause this to happen? Am I missing something about process.hrtime()?
$ node -v
v0.10.0
Edit: I just tested this with v0.8.11 and get the same kind of behavior.
Running the test cases completely separately returns these results:
Brute Force: [ 0, 32414 ]
Arithmetic Method: [ 0, 123523 ]
Those results are pretty consistent between runs, so I think your initial assumption that the arithmetic method should be faster is actually wrong (EDIT: the reason for that seems to be the use of parseInt()).
The different results you get with a console.log in between are probably caused by the console.log call itself.
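More generally, one-shot timings like these are easily distorted by warm-up effects (JIT compilation, lazy initialization, I/O from console.log). A common remedy, sketched here in Python rather than Node (my own illustration), is to repeat each measurement and take the minimum:

```python
import timeit

def brute_force(limit: int = 1000) -> int:
    """Sum of multiples of 3 or 5 below limit (the thread's problem)."""
    return sum(x for x in range(3, limit) if x % 3 == 0 or x % 5 == 0)

def arithmetic(limit: int = 1000) -> int:
    """Closed-form version using triangular-number sums."""
    def tri(n: int) -> int:
        k = (limit - 1) // n
        return n * k * (k + 1) // 2
    return tri(3) + tri(5) - tri(15)

# both methods must agree before comparing speed
assert brute_force() == arithmetic() == 233168

# repeat=5 runs of number=1000 calls each; min() discards warm-up noise
for name, fn in [("brute force", brute_force), ("arithmetic", arithmetic)]:
    best = min(timeit.repeat(fn, number=1000, repeat=5))
    print(f"{name}: {best:.4f}s per 1000 calls")
```

Taking the minimum of several repeats is the standard way to reduce the one-off costs that skewed the first hrtime measurement in the question.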
