How many objects are eligible for the garbage collector?

Could you please check my solution to the question: "How many objects are eligible for the garbage collector on the line (// custom code)?"
class Dog {
    String name;
}

public class TestGarbage {
    public static void main(String[] args) {
        Dog d1 = new Dog();
        Dog d2 = new Dog();
        Dog d3 = new Dog();
        d1 = d3; // line 1
        d3 = d2; // line 2
        d2 = null; // line 3
        // custom code
    }
}
What I know from the documentation: "The object will not become a candidate for garbage collection until all references to it are discarded."
Objects and references (where A, B and C are the objects created):
d1 -> A
d2 -> B
d3 -> C
------- d1 = d3 ------
d1 -> C
d2 -> B
d3 -> C
------- d3 = d2 ------
d1 -> C
d2 -> B
d3 -> B
------- d2 = null ------
d1 -> C
d2 -> null
d3 -> B
A is eligible for deletion, so we can say that there is ONLY ONE object eligible for the garbage collector!
Is this approach right?

You are correct that only A, the first Dog object created, is obviously available for collection.
However, if your "custom code" does not include any references to d1, d2, or d3, then those variables could be discarded at any time by the compiler, leaving all three Dog objects as GC candidates.

Related

Making a vector of specific length with random numbers

I tried writing a program that would give me the i, j and k components of a vector of a specified magnitude.
import random
import math

while True:
    a = random.uniform(0, 1000)
    b = random.uniform(0, 1000)
    c = random.uniform(0, 1000)
    d = 69.420
    if math.sqrt(a**2 + b**2 + c**2) == d:
        print(a, b, c)
        break
But it seems that this program might take literally forever to give me an output.
What would a faster, or even feasible, solution be?
Update:
import random
import math

while True:
    a2 = random.uniform(1, 1000)
    b2 = random.uniform(1, 1000)
    c2 = random.uniform(1, 1000)
    d = 69.420
    d2 = a2 + b2 + c2
    a2 *= d/d2
    b2 *= d/d2
    c2 *= d/d2
    a = math.sqrt(a2)
    b = math.sqrt(b2)
    c = math.sqrt(c2)
    if math.sqrt(a**2 + b**2 + c**2) == d:
        print(a, b, c)
        break
As suggested, but it is still taking a very long time to compute.
Get three random numbers a2, b2, and c2. Those are your random squares. Add them up to get d2. You want the sum of the squares to be d squared, so multiply a2, b2, and c2 by d*d/d2. These are your new squares that add up to d squared. Now assign the square roots of a2, b2, and c2 to a, b, and c. Does that make sense?
Avoid dividing by zero. If d2 happens to be zero, just start over.
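Here is a minimal sketch of that recipe in code. Note two problems with the update above: it scales by d/d2 where the recipe calls for d*d/d2 (so the squares end up summing to d rather than d squared, and the check compares sqrt(d) against d, which can never pass), and exact == on floats is fragile anyway, so compare with a tolerance like math.isclose or skip the check entirely:
import math
import random

d = 69.420  # target magnitude

while True:
    # Three random squares.
    a2 = random.uniform(0, 1000)
    b2 = random.uniform(0, 1000)
    c2 = random.uniform(0, 1000)
    d2 = a2 + b2 + c2
    if d2 == 0:
        continue  # avoid dividing by zero: just start over
    # Rescale so the squares sum to d squared, then take roots.
    scale = d * d / d2
    a = math.sqrt(a2 * scale)
    b = math.sqrt(b2 * scale)
    c = math.sqrt(c2 * scale)
    break

print(a, b, c)
print(math.sqrt(a**2 + b**2 + c**2))  # ~69.420, up to float rounding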

SQLite join columns on mapping of values

I want to be able to join two tables, where there is a mapping between the column values, rather than their values matching.
So rather than:
Table A:        Table B:
m  | f          m  | f
a1 | 1          b1 | 1
a2 | 2          b2 | 3
a3 | 3          b3 | 5
SELECT A.m, A.f, B.f, B.m
FROM A
INNER JOIN B ON B.f = A.f
giving:
A.m | A.f | B.f | B.m
a1  | 1   | 1   | b1
a3  | 3   | 3   | b2
Given the mapping (1->a), (2->b), (3->c), and the tables:
Table A:        Table B:
m  | f          m  | f
a1 | 1          b1 | a
a2 | 2          b2 | c
a3 | 3          b3 | e
to give when joined on f:
A.m | A.f | B.f | B.m
a1  | 1   | a   | b1
a3  | 3   | c   | b2
The question below seems to be trying something similar, but they want to change the column values, whereas I just want the mapping to be part of the query; I don't want to change the column values themselves. Besides, it is in R and I'm working in Python.
Mapping column values
One solution is to create a temporary table of mappings AB:
CREATE TEMP TABLE AB (a TEXT, b TEXT, PRIMARY KEY(a, b));
Then insert mappings,
INSERT INTO temp.AB VALUES (1, 'a'), (2, 'b'), (3, 'c');
or use executemany with parameters.
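For example, a sketch with the sqlite3 module (the in-memory connection and the cursor name c are stand-ins, not from the question):
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in connection for this sketch
c = con.cursor()

c.execute("CREATE TEMP TABLE AB (a TEXT, b TEXT, PRIMARY KEY(a, b))")

# executemany binds one parameter tuple per mapping row.
c.executemany("INSERT INTO temp.AB VALUES (?, ?)", [(1, 'a'), (2, 'b'), (3, 'c')])
con.commit()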
Then select using the intermediary table:
SELECT A.m AS Am, A.f AS Af, B.f AS Bf, B.m AS Bm
FROM A
LEFT JOIN temp.AB ON A.f=AB.a
LEFT JOIN B ON B.f=AB.b;
If you don't want to create an intermediary table, another solution would be to build the query yourself:
mappings = ((1, 'a'), (3, 'c'))
sql = 'SELECT A.m AS Am, A.f AS Af, B.f AS Bf, B.m AS Bm FROM A, B WHERE ' \
      + ' OR '.join(['(A.f=? AND B.f=?)'] * len(mappings))
# Flatten the mapping pairs into one flat parameter sequence.
c.execute(sql, [i for m in mappings for i in m])
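With the two mappings above, this builds ... WHERE (A.f=? AND B.f=?) OR (A.f=? AND B.f=?) and executes it with the flattened parameters (1, 'a', 3, 'c').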

Elegant way to iterate & compare in Spark DataFrame

I have a Spark DataFrame with 2 columns, C1: Seq[Any] and C2: Double. I want to:
1. Sort by length of C1.
2. For each element c1 in C1, compare it with every other element cx in C1 that is longer than c1.
2.1 If c1 is contained in cx, then compare c2 with c2x.
2.2 If c2 > c2x, then filter out (cx, c2x).
Is there an elegant way to achieve this?
Sample Input:
C1  | C2
ab  | 1.0
abc | 0.5
Expected output:
C1  | C2
ab  | 1.0
Contained = subset; e.g. ab is contained in abc.
I have a Spark DataFrame with 2 columns: C1:Seq[Any] and C2:Double
val rdd = sc.parallelize(List(("ab", 1.0), ("abc", 0.5)))
Sort by length of C1.
val rddSorted = rdd.sortBy(_._1.length).collect().distinct
For each element c1 in C1, compare with every other element in C1 that is longer than c1.
2.1 If c1 is contained in another element cx, then compare c2 with c2x.
2.2 If c2 > c2x, then filter out (cx, c2x).
val result = rddSorted.filterNot { case (cx, c2x) =>
  // Drop (cx, c2x) when some shorter c1 contained in cx has a larger c2.
  rddSorted.exists { case (c1, c2) =>
    cx.contains(c1) && cx.length > c1.length && c2 > c2x
  }
}
That's all. You should get what you are looking for.
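Not part of the answer above, but if you would rather stay in DataFrames instead of collecting to the driver, here is a rough PySpark sketch of the same filtering idea (column names taken from the sample, C1 treated as strings as in the example):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("ab", 1.0), ("abc", 0.5)], ["C1", "C2"])

l, r = df.alias("l"), df.alias("r")

# Rows on the right that are longer, contain the left C1, and have a smaller C2.
dominated = l.join(
    r,
    F.col("r.C1").contains(F.col("l.C1"))
    & (F.length(F.col("r.C1")) > F.length(F.col("l.C1")))
    & (F.col("l.C2") > F.col("r.C2")),
).select(F.col("r.C1").alias("C1"), F.col("r.C2").alias("C2"))

# Keep every row that is not dominated.
result = df.join(dominated, ["C1", "C2"], "left_anti")
result.show()  # only (ab, 1.0) survives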

How do I solve two equations where one equation has a variable that takes a range of values?

Two equations:
c + w + u = 50
c - w/4 = 39
In the above, u takes integer values from 0 to 11. (Why/How the 11? 11 = 50 - 39.)
I need to output a table of values for c and w for each value of u starting with u = 0. The corresponding value of u must also appear across each row of values for c and w.
How do I write a VBA code for this?
Many thanks in advance!
Do a little algebra first:
c - w/4 = 39
c = 39 + w/4

c + w + u = 50
39 + w/4 + w + u = 50
w/4 + w + u = 50 - 39
w/4 + w = 11 - u
1.25*w = 11 - u
w = (11 - u)/1.25
Put the u values in column A. In B1 enter:
=(11-A1)/1.25
and copy down. In C1 enter:
=39+B1/4
and copy down.
The w values are in column B and the c values are in column C.
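If a quick script is easier to sanity-check than worksheet formulas, here is the same arithmetic in Python (illustration only, not the VBA you asked for):
# u in column 1, w in column 2, c in column 3 -- same formulas as above.
print(f"{'u':>3} {'w':>8} {'c':>8}")
for u in range(12):          # u = 0 .. 11
    w = (11 - u) / 1.25      # from 1.25*w = 11 - u
    c = 39 + w / 4           # from c = 39 + w/4
    print(f"{u:>3} {w:>8.2f} {c:>8.2f}")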

How to flatten dependency graph?

I am new to Apache Spark.
Can I get a snippet of how to implement 'flattening' for a dependency graph?
I.e. let's say I have:
nodes: A, B, C
edges: (A,B), (B,C)
It should result in a new graph:
nodes: A, B, C
edges: (A,B), (A,C), (B,C)
1) Presuming each node is in its own row
A
B
C
2) Do a CROSS JOIN with self as the first step.
A A
A B
A C
B A
B B
B C
C A
C B
C C
3) Next, filter out all the rows where the node name is repeated.
A B
A C
B A
B C
C A
C B
4) After that, derive another field from the two name fields that tells you the edge.
A B AB
A C AC
B A BA
B C BC
C A CA
C B CB
You would need to convert this into (Scala/Python) syntax though; a sketch follows. Hope this helps.
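A rough PySpark version of exactly those steps (variable and column names are my own, not from the answer):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# 1) Each node in its own row.
nodes = spark.createDataFrame([("A",), ("B",), ("C",)], ["name"])

# 2) CROSS JOIN the node list with itself.
pairs = nodes.alias("src").crossJoin(nodes.alias("dst"))

# 3) Filter out rows where the node name repeats.
pairs = pairs.filter(F.col("src.name") != F.col("dst.name"))

# 4) Derive an edge field from the two name fields.
edges = pairs.select(
    F.col("src.name").alias("src"),
    F.col("dst.name").alias("dst"),
    F.concat(F.col("src.name"), F.col("dst.name")).alias("edge"),
)
edges.show()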
