I can't get the hang of rounding in MicroPython:
a = round(86.86, 1)
print(a)
86.90001
Surely there must be a way to limit the result to one decimal place and have it round cleanly?
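A minimal sketch of the usual workaround, assuming the goal is a one-decimal display rather than exact arithmetic (no binary float can store 86.9 exactly, and MicroPython's single-precision floats make the error visible when printing):

a = round(86.86, 1)
# Build the output string with one decimal place instead of
# relying on the float's default repr:
print("%.1f" % a)  # -> 86.9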
I have a list of SECONDS.MICROSECONDS CLOCK_MONOTONIC timestamps like those below:
5795.944152
5795.952708
5795.952708
5795.960820
5795.960820
5795.969092
5795.969092
5795.977502
5795.977502
5795.986061
5795.986061
5795.994075
5795.994075
5796.002382
5796.002382
5796.010860
5796.010860
5796.019241
5796.019241
5796.027452
5796.027452
5796.035709
5796.035709
5796.044158
5796.044158
5796.052453
5796.052453
5796.060785
5796.060785
5796.069053
Each of them represents the moment at which a particular action should be performed.
What I need to do, preferably in Python (though the programming language doesn't really matter), is speed up the actions: something like applying a 2X, 3X, etc. speed factor to this list, so the values decrease to match the chosen ?X speedup.
I thought of dividing each timestamp by the speed factor, but it doesn't work that way: these are absolute timestamps, so plain division scales the offset from the clock's origin rather than the gaps between consecutive actions.
As described and suggested by @RobertDodier, I have managed to find a quick and simple solution to my issue:
speed = 2
t0 = timestamps[0]  # first timestamp is the reference point
speedtimestamps = [t0 + (t - t0) / speed for t in timestamps]
Just make sure to remove the first line containing the first t0 timestamp.
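For illustration, a quick sanity check (using three of the timestamps from the list above) that at speed = 2 the gaps between consecutive actions are halved while the first instant stays put:

timestamps = [5795.944152, 5795.952708, 5795.960820]

speed = 2
t0 = timestamps[0]
scaled = [t0 + (t - t0) / speed for t in timestamps]

# Gaps of ~0.008556 s and ~0.008112 s shrink to half of each,
# and scaled[0] is still exactly t0.
print([round(b - a, 6) for a, b in zip(scaled, scaled[1:])])  # [0.004278, 0.004056]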
When I measure the time of selection sort on a random array of size 10,000 (random numbers in the range of 1,000), it gives me a big time, around 14 seconds, and when the size is 1,000,000 it gives me about 1 minute. I think it is supposed to be less than 5 seconds. Can you help me with the algorithm to lower the time?
import random
import time

def selection_sort(selection_array):
    # Scan the unsorted tail for the smallest element, then swap it into position i.
    for i in range(len(selection_array) - 1):
        minimum_index = i
        for j in range(i + 1, len(selection_array)):
            if selection_array[j] < selection_array[minimum_index]:
                minimum_index = j
        selection_array[i], selection_array[minimum_index] = selection_array[minimum_index], selection_array[i]
    return selection_array

# Random array as described: size 10,000, values in the range 0-999.
selection_random_array = [random.randrange(1000) for _ in range(10000)]

print("--------selection_sort----------")
start1 = time.time()
selection_sort(selection_random_array)
end1 = time.time()
print(f"random array: {end1 - start1}")
You seem to be asking two questions: how to improve selection sort and how to time it exactly.
The short answer to both is: you can't. If you modify the sorting algorithm, it is no longer selection sort. If that's okay, the industry standard is quicksort, so take a look at that algorithm (it's much more complicated, but runs in O(n log n) time instead of selection sort's O(n^2) time).
As for your other question, "how do I time it exactly", you also can't. Computers don't handle only one thing anymore; your operating system is constantly interleaving tasks. There is a 0% chance that your CPU is dedicated entirely to this program while it runs, which means the time it takes to finish will change each time you run it. Beyond that, the time it takes to call time.time() itself would need to be taken into account.
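For reference, a minimal quicksort sketch (a readable out-of-place version, not the textbook in-place one; in practice Python's built-in sorted(), which is Timsort implemented in C, will beat any pure-Python sort):

import random

def quicksort(arr):
    # Base case: 0 or 1 elements are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    # Partition into elements below, equal to, and above the pivot.
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + quicksort(middle) + quicksort(right)

data = [random.randrange(1000) for _ in range(10000)]
assert quicksort(data) == sorted(data)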
In a Jupyter notebook, the code below just shows an asterisk (the kernel hangs and needs to be restarted). I can't find another way to compute what the problem asks for. Is there a less computation-heavy way to do it, so the kernel isn't overwhelmed?
The powers of 2 (2^0 = 1, 2^1 = 2, 2^2 = 4, etc.) arise frequently in computer
science. (For example, you may have noticed that storage on
smartphones or USBs come in powers of 2, like 16 GB, 32 GB, or 64 GB.)
Use np.arange and the exponentiation operator ** to compute the first
30 powers of 2, starting from 2^0.
import numpy as np
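# Careful: this enumerates every other integer from 1 up to 2**30,
# over half a billion elements, which is what hangs the kernel.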
powers_of_2 = np.arange(2**0,2**30,2**1)
powers_of_2
You have to compute 2 to the power of an array of exponents, using the code below (np.arange(30) gives the exponents 0 through 29, i.e. the first 30 powers):
powers_of_2 = 2 ** np.arange(30)
You can use
2**np.arange(0,30,1)
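Either answer enumerates just the 30 exponents instead of the half-billion integers the arange in the question generates, so the array stays tiny. A quick check:

import numpy as np

powers_of_2 = 2 ** np.arange(30)
print(powers_of_2[:5])   # [ 1  2  4  8 16]
print(powers_of_2[-1])   # 536870912, i.e. 2**29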
I have been trying to find a nice way to format a timestamp of the current date & time with milliseconds using Python 3's native time library.
However there's no directive for milliseconds in the standard documentation https://docs.python.org/3/library/time.html#time.strftime.
There are undocumented directives, though, like %s, which gives the Unix timestamp. Are there any other directives like this?
Code example:
>>> import time
>>> time.strftime('%Y-%m-%d %H:%M:%S %s')
'2017-08-28 09:27:04 1503912424'
Ideally I would just want to change the trailing %s to some directive for milliseconds.
I'm using Python 3.x.
I'm fully aware that it's quite simple to get milliseconds using the native datetime library, however I'm looking for a solution using the native time library solely.
If you insist on using time only:
milliseconds = time.time() % 1 * 1000
time() accurately returns the time since the epoch. Since you already have the date down to the second, you don't really care that this is a time delta: the remaining fraction is exactly what you need to add to what you already have to get the accurate date. %1 retrieves the fractional part, and multiplying by 1000 converts it to milliseconds.
Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second. While this function normally returns non-decreasing values, it
can return a lower value than a previous call if the system clock has
been set back between the two calls.
Taken from https://docs.python.org/3/library/time.html#time.time. This means there is no guaranteed way to get sub-second precision on every system. You may be able to do something more robust with process_time, but that would have to be elaborate.
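Putting the pieces together, a small sketch that stays within the time module alone (the printed millisecond digits are illustrative):

import time

t = time.time()                    # float seconds since the epoch
ms = int(t % 1 * 1000)             # fractional part -> milliseconds
stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t))
print('%s.%03d' % (stamp, ms))     # e.g. 2017-08-28 09:27:04.424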
I am interested in the correlation between points at distances of 0 to 2 km on a linear network. I am using the following statement for the empirical data; it completes in 2 minutes.
obs<-linearK(c, r=seq(0,2,by=0.20))
Now I want to check whether the pattern is consistent with randomness, so I used envelopes over the same r range.
acceptance_enve<-envelope(c, linearK, nsim=19, fix.n = TRUE, funargs = list(r=seq(0,2,by=0.20)))
But this shows an estimated time of a little less than 3 hours. Is such a large time difference normal? And is my syntax correct in the call to envelope for passing r as a sequence in the extra arguments?
Is there an efficient way to shorten this 3-hour execution time for the envelopes?
I have the road network of a whole city, so it is quite large, and I have checked that there are no disconnected subgraphs.
c
Point pattern on linear network
96 points
Linear network with 13954 vertices and 19421 lines
Enclosing window: rectangle = [559.653, 575.4999] x [4174.833, 4189.85] Km
Thank you.
EDIT AFTER COMMENT
system.time({s <- runiflpp(npoints(c), as.linnet(c));
             linearK(s, r=seq(0,2,by=0.20))})
   user  system elapsed
343.047 104.428 449.650
EDIT 2
I made some really small changes, deleting some peripheral network segments that seem to have little or no effect on the overall network. This also led to splitting some long segments into smaller ones. But now, on this network with a different point pattern, I get an even longer estimated time:
> month1envelope = envelope(months[[1]], linearK, nsim = 39, r=seq(0,2,0.2))
Generating 39 simulations of CSR ...
1, 2, [etd 12:03:43]
The new network is
> months[[1]]
Point pattern on linear network
310 points
Linear network with 13642 vertices and 18392 lines
Enclosing window: rectangle = [560.0924, 575.4999] x [4175.113, 4189.85] Km
System Config: MacOS 10.9, 2.5Ghz, 16GB, R 3.3.3, RStudio Version 1.0.143
You don't need to use funargs in this context. Arguments can be passed directly through the ... argument. So I suggest
acceptance_enve <- envelope(c, linearK, nsim=19,
fix.n = TRUE, r=seq(0,2,by=0.20))
Please try this to see if it accelerates the execution.