Taxicab geometry task - geometry

So, I've been struggling with this task for some time. It goes like this: given N points (X, Y), with X and Y integers, and M queries of the form P(A, B), find the total distance from the point P(A, B) to all N given points. The distance from A(x1, y1) to B(x2, y2) is max(|x1-x2|, |y1-y2|). Maybe it sounds weird; I'm not a native English speaker, sorry for the mistakes. I'll leave the IN/OUT here.
IN.txt (N = 4, M = 3; the first 4 coordinate pairs are the given points,
the next 3 are the points from which I have to compute the total length)
4 3
3 5
-3 -2
1 4
-4 -3
2 -4
1 4
4 2
OUT.txt
28
15
21
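(For instance, the first output line comes from the query point (2, -4): its distances to the four given points are max(1, 9) = 9, max(5, 2) = 5, max(1, 8) = 8 and max(6, 1) = 6, and 9 + 5 + 8 + 6 = 28.)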

Here's some Python that should do the trick for you. Be sure to pay attention to which directory you're in when you're writing, so you don't overwrite things.
I've tested it on the input you presented in the question, and it works, producing the formatted output file as desired.
# Assuming you're in the directory containing IN.txt -- otherwise, insert the filepath.
input_file = open("IN.txt", "r")
# Read the input file and split it by new lines
input_lines_raw = input_file.read().split('\n')
input_file.close()
# Split the input lines and eliminate the spaces/create the vector int lists
input_lines_split = []
for element in input_lines_raw:
    input_lines_split.append(element.split(' '))
input_lines = []
for sub in input_lines_split:
    inserter = []
    for elem in sub:
        if (len(elem) > 0):
            inserter.append(elem)
    input_lines.append(inserter)
input_lines = [[int(j) for j in i] for i in input_lines]
# Build the destination (given points) and origin (query points) vector arrays
origin_vectors = []
dest_vectors = []
for i in range(1, input_lines[0][0] + 1):
    dest_vectors.append(input_lines[i])
for i in range(input_lines[0][0] + 1, input_lines[0][0] + input_lines[0][1] + 1):
    origin_vectors.append(input_lines[i])
# "Distance" operations on the lists of vectors themselves/generate the results array
results_arr = []
for original in origin_vectors:
    counter = 0
    for final in dest_vectors:
        counter = counter + max(abs(original[0] - final[0]), abs(original[1] - final[1]))
    results_arr.append(counter)
print(results_arr)
for element in results_arr:
    print(str(element))
# Open the output file and write to it, creating a new one if it doesn't exist.
# NOTE: This will overwrite any existing "OUT.txt" file in the current directory.
output_file = open("OUT.txt", "w")
for element in results_arr:
    output_file.write(str(element) + '\n')
output_file.close()
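If you prefer something shorter, the same steps can be written with a with-block and comprehensions. This is only a compact sketch of the approach above (same file names, same algorithm), not a different method:
with open("IN.txt") as f:
    rows = [list(map(int, line.split())) for line in f if line.strip()]
n, m = rows[0]
given = rows[1:1 + n]            # the N fixed points
queries = rows[1 + n:1 + n + m]  # the M query points
# the max(|dx|, |dy|) distance from the question, summed for each query point
totals = [sum(max(abs(ax - bx), abs(ay - by)) for bx, by in given)
          for ax, ay in queries]
with open("OUT.txt", "w") as f:
    f.write("\n".join(map(str, totals)) + "\n")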

Related

Detecting consecutive k points in data, which are out of specification limit - Python3

I want to create an SPC chart that will detect data points that are out of specification limits, using Python.
I have a data set that contains a column [XX], which is the one I'd like to test, with DateTime data as the index.
I have already come up with an idea of how to detect points that are out of spec, and runs where more than k points in a row are out of the spec limit. However, I assume there has to be a better way to achieve the same outcome. Below you will find my code.
# first part to detect points that are out of spec
import plotly.graph_objects as go
# create an upper and lower spec limit (used to plot a line on the SPC chart)
df['USL_MarginesG'] = 5.5
df['LSL_MarginesG'] = 3
# create empty lists to contain data points that are out of spec
occ_trace_x = []
occ_trace_y = []
# for all elements in df['XX'], look for elements that are out of spec and append them to the lists
for y in range(len(df['XX'])):
    if df['XX'].iloc[y] > 5.5 or df['XX'].iloc[y] < 3:
        occ_trace_x.append(df.index[y])
        occ_trace_y.append(df['XX'].iloc[y])
The second part of the code (this part detects k points in a row that are out of spec):
# create containers for detected data points
list_k = []
list_index = []
# input for the user to choose the number k of points to detect
k = int(input("Put a number"))
# for data points in df['XX'], test whether the slice [x:x+k+1] is greater/lower than the spec
for x in range(len(df['XX'])):
    if (all(df['XX'].iloc[x:x+k+1] > 5.5) or all(df['XX'].iloc[x:x+k+1] < 3)):
        if True:
            # take a slice from df and convert it to a list, in order to append it to the containers
            s = df['XX'].iloc[x:x+k+1].to_list()
            s_index = df.index[x:x+k+1].to_list()
            list_k.append(s)
            list_index.append(s_index)
The next step is to unpack the nested list:
c = []
for x in list_k:
    c = c + x
v = []
for b in list_index:
    v = v + b
The last step is to plot the data set on a chart:
fig = go.Figure()
fig.add_trace(go.Scatter(x=df.index, y=df['XX'],
                         mode='lines',
                         name='Margines_Górny'))
fig.add_trace(go.Scatter(
    x=occ_trace_x,
    y=occ_trace_y,
    name="Out of Control",
    mode="markers",
    marker=dict(color="rgba(210, 77, 87, 0.7)", symbol="square", size=4)))
fig.add_trace(go.Scatter(
    x=v,
    y=c,
    name=f'{k}' + ' parameters in a row are out of control',
    mode="markers",
    marker=dict(color="yellow", symbol="square", size=4)))
fig.show()
As a result, I get a plot of the data where:
the blue line shows the data set
red squares mark data points that are out of spec
yellow squares mark runs of more than k data points in a row that are out of spec (here k = 2)
I am looking to optimize this code (some way to achieve the same results faster).
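One possible direction, sketched below (not tested on the real data; it assumes the same df, k and limits as above, and it treats a run as any consecutive out-of-spec points regardless of which limit they violate, so adapt it if a run must stay on one side only): pandas boolean masks can replace the first loop, and a groupby over consecutive runs can replace the second one.
out = (df['XX'] > 5.5) | (df['XX'] < 3)    # True where a point is out of spec
# replaces the first loop: out-of-spec points and their timestamps
occ_trace_x = df.index[out]
occ_trace_y = df.loc[out, 'XX']
# replaces the second loop: label each consecutive run of out-of-spec points,
# then keep only the runs with more than k points
run_id = (~out).cumsum()
long_runs = df.loc[out, 'XX'].groupby(run_id[out]).filter(lambda s: len(s) > k)
v, c = long_runs.index, long_runs.to_list()    # same plot inputs as before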

How to keep graph shape when read it by networkx

I have a file that shows different points' coordinates (first 10 rows):
1 10.381090522139 55.39134945301
2 10.37928179195319 55.38858713256631
3 10.387152479898077 55.3923338690609
4 10.380048819655258 55.393938880906745
5 10.380679138517507 55.39459444742785
6 10.382474625286 55.392132993022
7 10.383736185130601 55.39454404088371
8 10.387334283235987 55.39433237195271
9 10.388468103023115 55.39536574771765
10 10.390814951258335 55.396308397998475
Now I want to calculate their MST (minimum spanning tree), so first I convert the coordinates into a weighted graph (distance -> weight):
n = 10
data = []
for i in range(0, n):
    for j in range(i + 1, n):
        temp = []
        temp.append(i)
        temp.append(j)
        x = np.array(rawdata[i, 1:3])
        y = np.array(rawdata[j, 1:3])
        temp.append(np.linalg.norm(x - y))
        data.append(temp)
Then, using networkx to load weight data:
G = nx.read_weighted_edgelist("data.txt")
T = nx.minimum_spanning_tree(G)
nx.draw(T)
plt.show()
but I cannot see the original shape in the result.
How can I solve this problem?
I'm just answering the question about the position of the nodes. I can't tell from what you've done whether the minimum spanning tree is what you're after or not.
When you plot a network, it will assign positions based on an algorithm that is in part stochastic. If you want the nodes to go at particular positions, you will have to include that information in the call in an optional argument. So define a dictionary (it's usually called pos) such that pos[node] is a tuple (x,y) where x is the x-coordinate of node and y is the y-coordinate of node.
Then the call is nx.draw(T, pos=pos).
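A sketch of how that dictionary could be built from a coordinate file like the one in the question (the filename is assumed; read_weighted_edgelist labels nodes with strings, so the ids are kept as strings here -- if the edge list was written with the 0-based i, j from the loop above, shift the ids accordingly):
pos = {}
with open("coordinates.txt") as f:      # assumed filename for the coordinate file
    for line in f:
        node, x, y = line.split()
        pos[node] = (float(x), float(y))
nx.draw(T, pos=pos, node_size=20)
plt.show()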

How do I make my program in python faster

I have a project where I need to find optimal triplets of numbers (E, F, G) such that E and F are as close to each other as possible (their difference is smallest) and G is bigger than both E and F. I have to make n such triplets.
The way I thought about it was to sort the given list of numbers, then search for the smallest differences; those two numbers become E and F. After all n pairs are done, I search for a G for every pair of E and F such that G is bigger than both. I know this is the greedy way, but my code is very slow: it takes up to a minute when the list has around 300k numbers and I have to make 2k triplets. Any idea how to improve the code?
guests is n (the number of triplets)
sticks is the list of all the numbers
# We sort the list using the built-in function
sticks.sort()
save = guests  # Beginning to search for the best pairs of E and F
efficiency = 0
while save != 0:
    difference = 1000000  # We assign a big value to difference each time
    # Searching for the smallest difference between two elements
    for i in range(0, length - 1):
        if sticks[i+1] - sticks[i] < difference:
            temp_E = i
            temp_F = i + 1
            difference = sticks[i+1] - sticks[i]
    # Saving the two elements in the lists stick_E and stick_F
    stick_E.append(sticks[temp_E])
    stick_F.append(sticks[temp_F])
    # Calculating the efficiency
    efficiency += ((sticks[temp_F] - sticks[temp_E]) * (sticks[temp_F] - sticks[temp_E]))
    # Deleting the two elements from the main list
    sticks.pop(temp_E)
    sticks.pop(temp_E)
    length -= 2
    save -= 1
# Searching for stick_G for every pair made
for i in range(0, len(stick_F)):
    for j in range(0, length):
        if stick_F[i] < sticks[j]:
            stick_G.append(sticks[j])  # Saves the element found
            sticks.pop(j)  # Deletes the element from the main list
            length -= 1
            break
# Output the result to a local file
print_to_file(stick_E, stick_F, stick_G, efficiency, output_file)
I commented the code as best I could so it would be easier for you to understand.
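The expensive part is the O(n) rescan of sticks for every pair. One way to avoid it, keeping the same greedy rule, is to hold the adjacent gaps in a heap over a doubly linked list, so each pair costs O(log n). This is only a sketch (the name pick_pairs and its return values are made up); the search for G can then be done separately on the remaining sticks, for example with bisect:
import heapq

def pick_pairs(sticks, guests):
    # Greedy pairing: repeatedly take the two remaining sticks with the smallest gap.
    sticks = sorted(sticks)
    n = len(sticks)
    prev = [i - 1 for i in range(n)]   # doubly linked list over the sorted order
    nxt = [i + 1 for i in range(n)]
    alive = [True] * n
    heap = [(sticks[i + 1] - sticks[i], i, i + 1) for i in range(n - 1)]
    heapq.heapify(heap)
    pairs, efficiency = [], 0
    while len(pairs) < guests and heap:
        d, i, j = heapq.heappop(heap)
        if not (alive[i] and alive[j]) or nxt[i] != j:
            continue                    # stale heap entry, skip it
        pairs.append((sticks[i], sticks[j]))
        efficiency += d * d
        alive[i] = alive[j] = False
        p, q = prev[i], nxt[j]          # stitch the list back together
        if p >= 0:
            nxt[p] = q
        if q < n:
            prev[q] = p
        if p >= 0 and q < n:            # the new gap created by removing the pair
            heapq.heappush(heap, (sticks[q] - sticks[p], p, q))
    return pairs, efficiency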

Karatsuba recursive code is not working correctly

I want to implement the Karatsuba multiplication algorithm in Python, but it is not working completely.
The code does not work for values of x or y greater than 999. For inputs below 1000, the program shows the correct result. It also gives correct results on the base cases.
#Karatsuba method of multiplication.
f = int(input()) #Inputs
e = int(input())
def prod(x,y):
    r = str(x)
    t = str(y)
    lx = len(r) #Calculation of Lengths
    ly = len(t)
    #Base Case
    if(lx == 1 or ly == 1):
        return x*y
    #Other Case
    else:
        o = lx//2
        p = ly//2
        a = x//(10*o) #Calculation of a,b,c and d.
        b = x-(a*10*o) #The Calculation is done by
        c = y//(10*p) #calculating the length of x and y
        d = y-(c*10*p) #and then dividing it by half.
        #Then we just remove the half of the digits of the no.
        return (10**o)*(10**p)*prod(a,c)+(10**o)*prod(a,d)+(10**p)*prod(b,c)+prod(b,d)
print(prod(f,e))
I think there are some bugs in the calculation of a,b,c and d.
a = x//(10**o)
b = x-(a*10**o)
c = y//(10**p)
d = y-(c*10**p)
You meant 10 to the power of o (and p), but wrote 10 multiplied by o (and p).
You should train yourself to find those kinds of bugs. There are multiple ways to do that:
Do the algorithm manually on paper for specific inputs, then step through your code and see if it matches
Reduce the code down to sub-portions and see if their expected value matches the produced value. In your case, check for every call of prod() what the expected output would be and what it produced, to find minimal input values that produce erroneous results.
Step through the code with the debugger. Before every line, think about what the result should be and then see if the line produces that result.
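As a quick sanity check after applying the ** fix, the corrected prod can be compared against Python's built-in multiplication on random inputs (an illustrative snippet, not part of the original code):
import random
for _ in range(1000):
    x = random.randint(0, 10**8)
    y = random.randint(0, 10**8)
    assert prod(x, y) == x * y, (x, y)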

find primes in a certain range efficiently

This is code for an algorithm I found for the Sieve of Eratosthenes, in Python 3. What I want to do is edit it so that I can input a range (bottom and top), together with a list of primes up to bottom, and have it output a list of the primes within that range.
However, I am not quite sure how to do that.
If you can help that would be greatly appreciated.
from math import sqrt

def sieve(end):
    if end < 2: return []
    # The array doesn't need to include even numbers
    lng = ((end//2)-1+end%2)
    # Create array and assume all numbers in array are prime
    sieve = [True]*(lng+1)
    # In the following code, you're going to see some funky
    # bit shifting and stuff, this is just transforming i and j
    # so that they represent the proper elements in the array.
    # The transforming is not optimal, and the number of
    # operations involved can be reduced.
    # Only go up to square root of the end
    for i in range(int(sqrt(end)) >> 1):
        # Skip numbers that aren't marked as prime
        if not sieve[i]: continue
        # Unmark all multiples of i, starting at i**2
        for j in range((i*(i + 3) << 1) + 3, lng, (i << 1) + 3):
            sieve[j] = False
    # Don't forget 2!
    primes = [2]
    # Gather all the primes into a list, leaving out the composite numbers
    primes.extend([(i << 1) + 3 for i in range(lng) if sieve[i]])
    return primes
I think the following is working:
from math import ceil, sqrt

def extend_erathostene(A, B, prime_up_to_A):
    sieve = [True] * (B - A)
    for p in prime_up_to_A:
        # first multiple of p greater than or equal to A
        m0 = ((A + p - 1) // p) * p
        for m in range(m0, B, p):
            sieve[m - A] = False
    limit = int(ceil(sqrt(B)))
    for p in range(A, limit + 1):
        if sieve[p - A]:
            for m in range(p * 2, B, p):
                sieve[m - A] = False
    return prime_up_to_A + [A + c for (c, isprime) in enumerate(sieve) if isprime]
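For example, combined with the sieve from the question (A = 100 and B = 200 chosen arbitrarily), this should return the primes below 200:
primes_below_200 = extend_erathostene(100, 200, sieve(100))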
This problem is known as the "segmented sieve of Eratosthenes." Google gives several useful references.
You already have the primes from 2 to end, so you just need to filter the list that is returned.
One way is to run the sieve code with end = top and modify the last line to give you only numbers bigger than bottom:
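For instance, the tail of sieve() could become something like this (a sketch, assuming bottom is passed in as an extra argument; keep or drop the >= depending on whether the range should be inclusive):
    # Don't forget 2 -- but only if it falls inside the requested range
    primes = [2] if bottom <= 2 else []
    # Gather the primes, keeping only the ones at or above bottom
    primes.extend([(i << 1) + 3 for i in range(lng)
                   if sieve[i] and (i << 1) + 3 >= bottom])
    return primes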
If the range is small compared with its magnitude (i.e. top - bottom is small compared with bottom), then you are better off using a different algorithm:
Start from bottom and iterate over the odd numbers, checking whether each one is prime. You need an isprime(n) function which checks whether n is divisible by any number from 2 up to sqrt(n):
def isprime(n):
    i = 2
    while (i*i <= n):
        if n % i == 0: return False
        i += 1
    return True
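The scan itself then looks something like this (bottom and top are assumed to be given; for brevity every candidate is handed to isprime rather than only the odd ones):
primes_in_range = [n for n in range(max(bottom, 2), top + 1) if isprime(n)]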
