Multithreading in Scala

I was recently given a challenge in school to create a simple program in Scala that does some calculations on a matrix. The catch is that I have to do these calculations using 5 threads. Since I had no prior knowledge of Scala, I am stuck. I searched online but did not find how to create the exact number of threads I want. This is the code:
import scala.math

object Test {
  def main(args: Array[String]): Unit = {
    val M1: Seq[Seq[Int]] = List(
      List(1, 2, 3),
      List(4, 5, 6),
      List(7, 8, 9)
    )
    var tempData: Float = 0
    var count: Int = 1
    var finalData: Int = 0
    for (i <- 0 to M1.length - 1; j <- 0 to M1(0).length - 1) {
      count = 1
      tempData = M1(i)(j) + calc(i - 1, j) + calc(i, j - 1) + calc(i + 1, j)
      finalData = math.ceil(tempData / count).toInt
      printf("%d ", finalData)
    }
    def calc(i: Int, j: Int): Int = {
      if ((i < 0) || (j < 0) || (i > M1.length - 1))
        return 0
      else {
        count += 1
        return M1(i)(j)
      }
    }
  }
}
I tried this:
for (a <- 0 until 10) {
  val thread = new Thread {
    override def run(): Unit = {
      for (i <- 0 to M1.length - 1; j <- 0 to M1(0).length - 1) {
        count = 1
        tempData = M1(i)(j) + calc(i - 1, j) + calc(i, j - 1) + calc(i + 1, j)
        finalData = math.ceil(tempData / count).toInt
        printf("%d ", finalData)
      }
    }
  }
  thread.start()
}
but it only executed the same thing 10 times

Here's the original core of the calculation.
for (i <- 0 to M1.length - 1; j <- 0 to M1(0).length - 1) {
  count = 1
  tempData = M1(i)(j) + calc(i - 1, j) + calc(i, j - 1) + calc(i + 1, j)
  finalData = math.ceil(tempData / count).toInt
  printf("%d ", finalData)
}
Let's actually build a result array
val R = Array.ofDim[Int](M1.length, M1(0).length)
var tempData: Float = 0
var count: Int = 1
var finalData: Int = 0
for (i <- 0 to M1.length - 1; j <- 0 to M1(0).length - 1) {
  count = 1
  tempData = M1(i)(j) + calc(i - 1, j) + calc(i, j - 1) + calc(i + 1, j)
  R(i)(j) = math.ceil(tempData / count).toInt
}
Now, that mutable count modified in one function and referenced in another is a bit of a code smell. Let's remove it: change calc to return an Option, assemble a list of the values to average, and flatten to keep only the Somes:
val R = Array.ofDim[Int](M1.length, M1(0).length)
for (i <- 0 to M1.length - 1; j <- 0 to M1(0).length - 1) {
  val tempList = List(Some(M1(i)(j)), calc(i - 1, j), calc(i, j - 1), calc(i + 1, j)).flatten
  R(i)(j) = math.ceil(tempList.sum.toDouble / tempList.length).toInt
}

def calc(i: Int, j: Int): Option[Int] = {
  if ((i < 0) || (j < 0) || (i > M1.length - 1))
    None
  else
    Some(M1(i)(j))
}
Next, a side-effecting for is a bit of a code smell too. So in the inner loop let's produce each row, and in the outer loop a list of the rows...
val R = for (i <- 0 to M1.length - 1) yield {
  for (j <- 0 to M1(0).length - 1) yield {
    val tempList = List(Some(M1(i)(j)), calc(i - 1, j), calc(i, j - 1), calc(i + 1, j)).flatten
    math.ceil(tempList.sum.toDouble / tempList.length).toInt // .toDouble avoids integer division
  }
}
Now, we read the Scala API docs, notice ParSeq and Seq.par, and decide we'd like to work with map and friends. So let's de-sugar the for comprehensions:
val R = (0 until M1.length).map { i =>
  (0 until M1(0).length).map { j =>
    val tempList = List(Some(M1(i)(j)), calc(i - 1, j), calc(i, j - 1), calc(i + 1, j)).flatten
    math.ceil(tempList.sum.toDouble / tempList.length).toInt
  }
}
This is our MotionBlurSingleThread. To make it parallel, we simply do
val R = (0 until M1.length).par.map { i =>
  (0 until M1(0).length).par.map { j =>
    val tempList = List(Some(M1(i)(j)), calc(i - 1, j), calc(i, j - 1), calc(i + 1, j)).flatten
    math.ceil(tempList.sum.toDouble / tempList.length).toInt
  }.seq
}.seq
And this is our MotionBlurMultiThread. It is nicely functional too (no mutable values).
Limiting it to 5 or 10 threads isn't part of the challenge on GitHub, but if you need to do that, look up "scala parallel collections degree of parallelism" and related questions.
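For example, here is a minimal sketch of capping the computation at 5 threads (assuming Scala 2.12-era parallel collections, where ForkJoinTaskSupport takes a java.util.concurrent.ForkJoinPool):

import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val rows = (0 until M1.length).par
// replace the default task support so at most 5 worker threads are used
rows.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(5))
val R = rows.map { i =>
  // same per-row body as MotionBlurMultiThread above
  (0 until M1(0).length).map { j =>
    val tempList = List(Some(M1(i)(j)), calc(i - 1, j), calc(i, j - 1), calc(i + 1, j)).flatten
    math.ceil(tempList.sum.toDouble / tempList.length).toInt
  }
}.seq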

I am not an expert on either Scala or concurrency.
Scala's approach to concurrency is through the use of actors and messaging; you can read a little about that here: Programming in Scala, chapter 30, "Actors and Concurrency" (the first edition is free but it is outdated). As I said, that edition is outdated, and in the latest version of Scala (2.12) the actors library is no longer included; they recommend using Akka instead, which you can read about here.
So, I would not recommend learning Scala, sbt, and Akka just for a challenge, but you can download an Akka quickstart here and customize the given example to your needs; it is nicely explained in the link. Each Actor instance has its own thread. You can read about actors and threads here, specifically the section about state.
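If you do try Akka, a minimal classic-actors sketch might look like the following. The MatrixPart message and the single worker are made up for illustration; the quickstart linked above shows the real project setup:

import akka.actor.{Actor, ActorSystem, Props}

// hypothetical message type, for illustration only
case class MatrixPart(rows: Seq[Seq[Int]])

class Worker extends Actor {
  def receive = {
    case MatrixPart(rows) =>
      // the blur calculation for this slice of the matrix would go here
      println(s"processing ${rows.length} rows on ${Thread.currentThread.getName}")
  }
}

object Main extends App {
  val system = ActorSystem("blur")
  val worker = system.actorOf(Props[Worker], "worker")
  worker ! MatrixPart(List(List(1, 2, 3)))
}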

Related

Iterate every row of a spark dataframe without using collect

I want to iterate every row of a dataframe without using collect. Here is my current implementation:
val df = spark.read.csv("/tmp/s0v00fc/test_dir")

import scala.collection.mutable.Map
var m1 = Map[Int, Int]()
var m4 = Map[Int, Int]()
var j = 1

def Test(m: Int, n: Int): Unit = {
  if (!m1.contains(m)) {
    m1 += (m -> j)
    m4 += (j -> m)
    j += 1
  }
  if (!m1.contains(n)) {
    m1 += (n -> j)
    m4 += (j -> n)
    j += 1
  }
}

df.foreach { row => Test(row(0).toString.toInt, row(1).toString.toInt) }
This does not give any error, but m1 and m4 are still empty. I can get the result I am expecting if I do a df.collect, as shown below:
df.collect.foreach { row => Test(row(0).toString.toInt, row(1).toString.toInt) }
How do I execute the custom function "Test" on every row of the dataframe without using collect?
According to the Spark documentation for foreach:
"Note: modifying variables other than Accumulators outside of the foreach()may result in undefined behavior. See Understanding closures for more details."
https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions
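If you just need the driver-side maps filled, one workaround (a sketch, assuming the two integer columns from the question) is to stream the rows to the driver with toLocalIterator instead of running foreach on the executors:

import scala.collection.JavaConverters._

// rows are brought to the driver one partition at a time, so Test
// mutates driver-side state instead of serialized copies on executors
df.toLocalIterator().asScala.foreach { row =>
  Test(row(0).toString.toInt, row(1).toString.toInt)
}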

DP: print (not count) all possible paths for the classic climbing stairs problem

I came across this classic question and found many solutions to it: a plain for loop, and DP / recursion + memoization.
I also found a twisted version of the question that asks to print all possible paths instead of counting them. I am wondering whether the twisted version has a DP solution too.
Q: If there are n stairs and you can take either 1 or 2 steps at a time, how many ways can you finish the stairs? We can just use fib to calculate that. But what if you are asked to print out all the possible ways? For example, if n = 5, we have the solutions below. Pseudocode is welcome, or any language.
[1, 1, 1, 1, 1]
[1, 1, 1, 2]
[1, 1, 2, 1]
[1, 2, 1, 1]
[1, 2, 2]
[2, 1, 1, 1]
[2, 1, 2]
[2, 2, 1]
I have divided the solution into two subsections: the first uses memoization and the second uses recursion.
Hope it helps!
Memoization approach: it uses an array and builds the solution forward from the base conditions. I am using an array of string arrays to store all the possible paths. To add a new path, we perform a Cartesian product using Union.
Example:
To reach 1 we have one path: {1}
To reach 2 we have two paths: {1 1, 2}
To reach 3 we have three paths: {1 1 1, 1 2, 2 1}, which is the Cartesian product of the two sets above.
Note: I have used two arrays just to make the solution understandable. A single array would work too.
Full Program using Memoization Approach:
namespace Solutions
{
    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Total number of steps in the stairs
            var totalNumberOfSteps = 4;
            // Largest step allowed at a time
            var numberOfStepsAllowed = 2;
            dynamic result = ClimbSteps(numberOfStepsAllowed, totalNumberOfSteps);
            Console.WriteLine(result.Mem);
            Console.WriteLine(string.Join(", ", result.Print));
            Console.ReadLine();
        }

        private static dynamic ClimbSteps(int numberOfStepsAllowed, int totalNumberOfSteps)
        {
            var memList = Enumerable.Repeat(0, totalNumberOfSteps + 1).ToArray();
            var printList = new string[totalNumberOfSteps + 1][];
            if (numberOfStepsAllowed != 0)
            {
                memList[0] = 0;
                printList[0] = new[] { "" };
                memList[1] = 1;
                printList[1] = new[] { "1" };
                // 2 is reachable as "1 1", and also as "2" when 2-steps are allowed
                memList[2] = numberOfStepsAllowed > 1 ? 2 : 1;
                printList[2] = numberOfStepsAllowed > 1 ? new[] { "1 1", "2" } : new[] { "1 1" };
                for (var indexTot = 3; indexTot <= totalNumberOfSteps; indexTot++)
                {
                    for (var indexSteps = 1; indexSteps <= numberOfStepsAllowed && indexTot - indexSteps > 0; indexSteps++)
                    {
                        var indexTotalStep = indexTot;
                        var indexAllowedStep = indexSteps;
                        memList[indexTot] += memList[indexTot - indexSteps];
                        // Cartesian product of the paths to indexSteps and the paths to the remainder, in both orders
                        var cartesianValues = (from x in printList[indexSteps] from y in printList[indexTotalStep - indexAllowedStep] select x + " " + y)
                            .Union(from x in printList[indexSteps] from y in printList[indexTotalStep - indexAllowedStep] select y + " " + x).Distinct();
                        printList[indexTot] = printList[indexTot] == null
                            ? cartesianValues.ToArray()
                            : printList[indexTot].Union(cartesianValues).Distinct().ToArray();
                    }
                }
            }
            return new { Mem = memList[totalNumberOfSteps], Print = printList[totalNumberOfSteps] };
        }
    }
}
Output:
5
1 1 1 1, 1 1 2, 1 2 1, 2 1 1, 2 2
Recursive Approach
Full Program using Recursive Approach:
namespace Solutions
{
    using System;

    class Program
    {
        static void Main()
        {
            // Total number of steps in the stairs
            var totalNumberOfSteps = 4;
            // Largest step allowed at a time
            var numberOfStepsAllowed = 2;
            ClimbSteps(numberOfStepsAllowed, totalNumberOfSteps);
            Console.ReadLine();
        }

        private static void ClimbSteps(int numberOfStepsAllowed, int totalNumberOfSteps)
        {
            // Reach totalNumberOfSteps by repeatedly taking steps of size 1..numberOfStepsAllowed
            ClimbStep(stepsAllowed: numberOfStepsAllowed, totalNumberOfSteps: totalNumberOfSteps, currentStep: 0, stepsTaken: String.Empty);
        }

        private static void ClimbStep(int stepsAllowed, int totalNumberOfSteps, int currentStep, string stepsTaken)
        {
            if (currentStep == totalNumberOfSteps)
            {
                Console.WriteLine(stepsTaken);
            }
            for (int i = 1; i <= stepsAllowed && currentStep + i <= totalNumberOfSteps; i++)
            {
                ClimbStep(stepsAllowed, totalNumberOfSteps, currentStep + i, stepsTaken + i + " ");
            }
        }
    }
}
Output:
1 1 1 1
1 1 2
1 2 1
2 1 1
2 2
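For comparison, the same recursion can be sketched in Scala, accumulating the path as a list instead of a string (the output for n = 4 matches the program above):

object Stairs extends App {
  // print every way to climb n stairs taking 1 or 2 steps at a time
  def climb(remaining: Int, path: List[Int]): Unit =
    if (remaining == 0) println(path.reverse.mkString("[", ", ", "]"))
    else for (step <- 1 to 2 if step <= remaining) climb(remaining - step, step :: path)

  climb(4, Nil)
}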

Spark is taking too much time and creating thousands of jobs for some tasks

Machine config:
RAM: 16 GB
Processor: 4 cores (Xeon E3, 3.3 GHz)
Problem:
Time consuming: taking more than 18 minutes
Case scenario:
Spark mode: local
Database: Cassandra 2.1.12
I am fetching 3 tables into dataframes, each having fewer than 10 rows. Yes, fewer than 10 (ten).
After fetching them into dataframes, I perform join, count, show, and collect operations many times. When I execute my program, Spark creates 40404 jobs 4 times, which indicates that count requires those jobs to be performed. I use count 4-5 times in the program. After waiting for more than 18 minutes (approx. 18.5 to 20), it gives me the expected output.
Why is Spark creating that many jobs?
Is it normal ("ok") for it to take this much time (18 minutes) to execute this many jobs (40404 × 4, approx.)?
Thanks in advance.
Sample code 1:
def getGroups(id: Array[String], level: Int): DataFrame = {
  var lvl = level
  if (level >= 0) {
    for (iterated_id <- id) {
      val single_level_group = supportive_df.filter("id = '" + iterated_id + "' and level = " + level).select("family_id")
      //single_level_group.show()
      intermediate_df = intermediate_df.unionAll(single_level_group)
      //println("for loop portion...")
    }
    final_df = final_df.unionAll(intermediate_df)
    lvl -= 1
    val user_id_param = intermediate_df.collect().map { row => row.getString(0) }
    intermediate_df = empty_df
    //println("new method...if portion...")
    getGroups(user_id_param, lvl)
  } else {
    //println("new method...")
    final_df.distinct()
  }
}
Sample code 2:
setGetGroupsVars("u_id", user_id.toString(), sa_user_df)
var user_belong_groups: DataFrame = empty_df
val user_array = Array[String](user_id.toString())
val user_levels = sa_user_df.filter("id = '" + user_id + "'").select("level").distinct().collect().map { x => x.getInt(0) }
println(user_levels.length+"...rapak")
println(user_id.toString())
for (u_lvl <- user_levels) {
val x1 = getGroups(user_array, u_lvl)
x1.show()
empty_df.show()
user_belong_groups.show()
user_belong_groups = user_belong_groups.unionAll(x1)
x1.show()
}
setGetGroupsVars("obj_id", obj_id.toString(), obj_type_specific_df)
var obj_belong_groups: DataFrame = empty_df
val obj_array = Array[String](obj_id.toString())
val obj_levels = obj_type_specific_df.filter("id = '" + obj_id + "'").select("level").distinct().collect().map { x => x.getInt(0) }
println(obj_levels.length)
for (ob_lvl <- obj_levels) {
obj_belong_groups = obj_belong_groups.unionAll(getGroups(obj_array, ob_lvl))
}
user_belong_groups = user_belong_groups.distinct()
obj_belong_groups = obj_belong_groups.distinct()
var user_obj_joined_df = user_belong_groups.join(obj_belong_groups)
user_obj_joined_df.show()
println("vbgdivsivbfb")
var user_obj_access_df = user_obj_joined_df
.join(sa_other_access_df, user_obj_joined_df("u_id") === sa_other_access_df("user_id")
&& user_obj_joined_df("obj_id") === sa_other_access_df("object_id"))
user_obj_access_df.show()
println("KDDD..")
val user_obj_access_cond1 = user_obj_access_df.filter("u_id = '" + user_id + "' and obj_id != '" + obj_id + "'")
if (user_obj_access_cond1.count() == 0) {
val user_obj_access_cond2 = user_obj_access_df.filter("u_id != '" + user_id + "' and obj_id = '" + obj_id + "'")
if (user_obj_access_cond2.count() == 0) {
val user_obj_access_cond3 = user_obj_access_df.filter("u_id != '" + user_id + "' and obj_id != '" + obj_id + "'")
if (user_obj_access_cond3.count() == 0) {
default_df
} else {
val result_ugrp_to_objgrp = user_obj_access_cond3.select("permission").agg(max("permission"))
println("cond4")
result_ugrp_to_objgrp
}
} else {
val result_ugrp_to_ob = user_obj_access_cond2.select("permission")
println("cond3")
result_ugrp_to_ob
}
} else {
val result_u_to_obgrp = user_obj_access_cond1.select("permission")
println("cond2")
result_u_to_obgrp
}
} else {
println("cond1")
individual_access
}
These two are the major code blocks in my program where execution takes too long. It generally spends the most time on show or count operations.
First, check in the Spark UI which stage of your program is taking a long time.
Second, you are using distinct() many times, so when you use distinct() you have to look at how many partitions come out of it. I think that's the reason Spark is creating thousands of jobs.
If that is the reason, you can use coalesce() after distinct().
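For example, a sketch using a DataFrame name from the question (the partition count is a guess you would tune):

// distinct() shuffles into spark.sql.shuffle.partitions partitions (200 by default);
// for tables this small, collapsing them removes most of the per-task overhead
user_belong_groups = user_belong_groups.distinct().coalesce(1)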
Ok, so let's remember some basics!
Spark is lazy, and show and count are actions.
An action triggers transformations, which you have loads of. And in case you are pulling data from Cassandra (or any other source), this costs a lot, since you don't seem to be caching your transformations!
So, you need to consider caching when you compute intensively on a DataFrame or RDD; that will make your actions perform faster!
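As a sketch, using one of the DataFrames from the question:

// materialize once and reuse across the many count/show/filter calls;
// without this, every action replays the whole lineage back to Cassandra
val cachedAccess = user_obj_access_df.cache()
cachedAccess.count() // the first action populates the cache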
Concerning the reason why you have that many tasks (jobs): it is of course explained by Spark's parallelism mechanism performing your actions, multiplied by the number of transformations/actions you are executing, not to mention the loops!
Nevertheless, with only the information given and the quality of the code snippets posted in the question, this is as far as my answer goes.
I hope this helps !

Parallel Merge Sort in Scala

I have been trying to implement parallel merge sort in Scala. But with 8 cores, using .sorted is still about twice as fast.
edit:
I rewrote most of the code to minimize object creation. Now it runs about as fast as .sorted.
Input file with 1.2M integers:
1.333580 seconds (my implementation)
1.439293 seconds (.sorted)
How should I parallelize this?
New implementation
import scala.reflect.ClassTag

object Mergesort extends App {
  //=====================================================================================================================
  // UTILITY
  implicit object comp extends Ordering[Any] {
    def compare(a: Any, b: Any) = {
      (a, b) match {
        case (a: Int, b: Int)       => a compare b
        case (a: String, b: String) => a compare b
        case _                      => 0
      }
    }
  }

  //=====================================================================================================================
  // MERGESORT
  val THRESHOLD = 30

  def inssort[A](a: Array[A], left: Int, right: Int): Array[A] = {
    for (i <- (left + 1) until right) {
      var j = i
      val item = a(j)
      while (j > left && comp.lt(item, a(j - 1))) {
        a(j) = a(j - 1)
        j -= 1
      }
      a(j) = item
    }
    a
  }

  def mergesort_merge[A](a: Array[A], temp: Array[A], left: Int, right: Int, mid: Int): Array[A] = {
    var i = left
    var j = right
    while (i < mid) { temp(i) = a(i); i += 1 }
    while (j > mid) { temp(i) = a(j - 1); i += 1; j -= 1 }
    i = left
    j = right - 1
    var k = left
    while (k < right) {
      if (comp.lt(temp(i), temp(j))) { a(k) = temp(i); i += 1; k += 1 }
      else { a(k) = temp(j); j -= 1; k += 1 }
    }
    a
  }

  def mergesort_split[A](a: Array[A], temp: Array[A], left: Int, right: Int): Array[A] = {
    if ((right - left) > THRESHOLD) {
      val mid = (left + right) / 2
      mergesort_split(a, temp, left, mid)
      mergesort_split(a, temp, mid, right)
      mergesort_merge(a, temp, left, right, mid)
    }
    else
      inssort(a, left, right) // also handles the trivial right - left <= 1 case
  }

  def mergesort[A: ClassTag](a: Array[A]): Array[A] = {
    val temp = new Array[A](a.size)
    mergesort_split(a, temp, 0, a.size)
  }
}
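For reference, a quick usage check (a sketch; these lines would live inside the App body above):

val data = Array(5, 3, 8, 1, 9, 2)
mergesort(data) // sorts in place via the comp ordering
println(data.mkString(" ")) // 1 2 3 5 8 9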
Previous implementation
Input file with 1.2M integers:
4.269937 seconds (my implementation)
1.831767 seconds (.sorted)
What sort of tricks are there to make it faster and cleaner?
import java.io.PrintWriter
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.io.Source

object Mergesort extends App {
  //=====================================================================================================================
  // UTILITY
  val StartNano = System.nanoTime
  def dbg(msg: String) = println("%05d DBG ".format(((System.nanoTime - StartNano) / 1e6).toInt) + msg)
  def time[T](work: => T) = {
    val start = System.nanoTime
    val res = work
    println("%f seconds".format((System.nanoTime - start) / 1e9))
    res
  }

  implicit object comp extends Ordering[Any] {
    def compare(a: Any, b: Any) = {
      (a, b) match {
        case (a: Int, b: Int)       => a compare b
        case (a: String, b: String) => a compare b
        case _                      => 0
      }
    }
  }

  //=====================================================================================================================
  // MERGESORT
  def merge[A](left: List[A], right: List[A]): Stream[A] = (left, right) match {
    case (x :: xs, y :: ys) if comp.lteq(x, y) => x #:: merge(xs, right)
    case (x :: xs, y :: ys)                    => y #:: merge(left, ys)
    case _                                     => if (left.isEmpty) right.toStream else left.toStream
  }

  def sort[A](input: List[A], length: Int): List[A] = {
    if (length < 100) return input.sortWith(comp.lt)
    input match {
      case Nil | List(_) => input
      case _ =>
        val middle = length / 2
        val (left, right) = input splitAt middle
        merge(sort(left, middle), sort(right, middle + length % 2)).toList
    }
  }

  def msort[A](input: List[A]): List[A] = sort(input, input.length)

  //=====================================================================================================================
  // PARALLELIZATION
  //val cores = Runtime.getRuntime.availableProcessors
  //dbg("Detected %d cores.".format(cores))
  //lazy implicit val ec = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(cores))
  def futuremerge[A](fa: Future[List[A]], fb: Future[List[A]])(implicit order: Ordering[A], ec: ExecutionContext) = {
    for {
      a <- fa
      b <- fb
    } yield merge(a, b).toList
  }

  def parallel_msort[A](input: List[A], length: Int)(implicit order: Ordering[A]): Future[List[A]] = {
    val middle = length / 2
    val (left, right) = input splitAt middle
    if (length > 500) {
      val fl = parallel_msort(left, middle)
      val fr = parallel_msort(right, middle + length % 2)
      futuremerge(fl, fr)
    }
    else {
      Future(msort(input))
    }
  }

  //=====================================================================================================================
  // MAIN
  val results = time({
    val src = Source.fromFile("in.txt").getLines
    val header = src.next.split(" ").toVector
    val lines = if (header(0) == "i") src.map(_.toInt).toList else src.toList
    val f = parallel_msort(lines, lines.length)
    Await.result(f, duration.Duration.Inf)
  })
  println("Sorted as comparison...")
  val sorted_src = Source.fromFile("in.txt").getLines
  sorted_src.next
  time(sorted_src.toList.sorted)
  val writer = new PrintWriter("out.txt", "UTF-8")
  try writer.print(results.mkString("\n"))
  finally writer.close()
}
My answer is probably going to be a bit long, but I hope it will be useful for both you and me.
So, the first question is: how does Scala sort a List? Let's have a look at the code from the Scala repo!
def sorted[B >: A](implicit ord: Ordering[B]): Repr = {
  val len = this.length
  val b = newBuilder
  if (len == 1) b ++= this
  else if (len > 1) {
    b.sizeHint(len)
    val arr = new Array[AnyRef](len) // Previously used ArraySeq for more compact but slower code
    var i = 0
    for (x <- this) {
      arr(i) = x.asInstanceOf[AnyRef]
      i += 1
    }
    java.util.Arrays.sort(arr, ord.asInstanceOf[Ordering[Object]])
    i = 0
    while (i < arr.length) {
      b += arr(i).asInstanceOf[A]
      i += 1
    }
  }
  b.result()
}
So what is going on here? Long story short: the sorting is done with Java. Everything else is just size hinting and casting. Basically, this is the line that defines it:
java.util.Arrays.sort(arr, ord.asInstanceOf[Ordering[Object]])
Let's go one level deeper into JDK sources:
public static <T> void sort(T[] a, Comparator<? super T> c) {
    if (c == null) {
        sort(a);
    } else {
        if (LegacyMergeSort.userRequested)
            legacyMergeSort(a, c);
        else
            TimSort.sort(a, 0, a.length, c, null, 0, 0);
    }
}
legacyMergeSort is nothing but a single-threaded implementation of the merge sort algorithm.
The next question is: "what is TimSort.sort and when do we use it?"
To the best of my knowledge, the default value for LegacyMergeSort.userRequested is false, which leads us to the TimSort.sort algorithm. A description can be found here. Why is it better? Fewer comparisons than in merge sort, according to the comments in the JDK sources.
Moreover, you should be aware that it is all single-threaded, so there is no parallelization here.
Third question, "your code":
You create too many objects. When it comes to performance, mutation (sadly) is your friend.
Premature optimization is the root of all evil -- Donald Knuth. Before making any optimizations (like parallelism), try to implement single threaded version and compare the results.
Use something like JMH to test performance of your code.
You probably should not use the Stream class if you want the best performance, as it does additional caching.
I intentionally did not give you an answer like "a super-fast merge sort in Scala can be found here", but just some tips to apply to your code and coding practices.
Hope it will help you.
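One extra data point worth collecting before hand-rolling parallelism (a sketch; available since JDK 8): the JDK already ships a fork/join-based parallel sort you can benchmark against:

val arr = Array.fill(1200000)(scala.util.Random.nextInt())
java.util.Arrays.parallelSort(arr) // splits the array across the common ForkJoinPool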

How should I fix my infinite while loop that takes in 3 conditions? Also, stylistic questions from a novice

I am writing code to test the hailstone sequence, also called the Collatz conjecture. The code will print out the number of iterations of the hailstone sequence.
def main():
    # int() instead of eval(): eval executes arbitrary input
    start_num = int(input("Enter starting number of the range: "))
    end_num = int(input("Enter ending number of the range: "))
The main problem is that my code runs into an infinite loop. I want to check all of these conditions in one statement:
while (start_num > 0 and end_num > 0 and end_num > start_num):
    cycle_length = 0
    max_length = 0
    max_number = 0
My code seems inefficient; there is probably a better way to approach the problem:
    for i in range(start_num, (end_num + 1)):
        cycle_length = 0
        while (i != 1):
            if (i % 2 == 0):
                i = i // 2
                cycle_length += 1
            if (i % 2 == 1):
                i = ((3 * i) + 1)
                cycle_length += 1
        print(cycle_length)
I just started coding, and I know there is always a more efficient way to approach these problems. Any suggestions on methodology, problem solving, or style would be greatly appreciated.
Here is an answer in Java. I assume that we will not start with 1.
import java.util.Scanner;

public static void main(String[] args) {
    int counter = 0;
    Scanner sc = new Scanner(System.in);
    System.out.println("Give us a number to start with:");
    int start = sc.nextInt();
    System.out.println("Give us a number to end with:");
    int end = sc.nextInt();
    if (end > start) {
        for (int i = 0; i <= end - start; i++) {
            counter = 0;
            int num = start + i;
            int temp = num;
            while (temp != 1) {
                if (temp % 2 == 0) {
                    temp = temp / 2;
                } else {
                    temp = 3 * temp + 1;
                }
                counter++;
            }
            System.out.println(num + " takes " + counter + " iterations.");
        }
    } else {
        System.out.println("Your numbers do not make sense.");
    }
}
Here's an answer in Python in case you're staying up late trying to solve this problem. :P Have a good night.
start_num = 1
end_num = 10

for i in range(start_num, (end_num + 1)):
    cycle_length = 0
    num = i
    while (num != 1):
        if (num % 2 == 0):
            num = num // 2
            cycle_length += 1
        else:
            num = ((3 * num) + 1)
            cycle_length += 1
    print(cycle_length)
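And for anyone who landed here from the Scala questions on this page, the same fixed loop as a Scala sketch; the key point in both versions is to mutate a copy, never the loop variable itself:

val startNum = 1
val endNum = 10
for (i <- startNum to endNum) {
  var num = i // work on a copy of the loop variable
  var cycleLength = 0
  while (num != 1) {
    num = if (num % 2 == 0) num / 2 else 3 * num + 1
    cycleLength += 1
  }
  println(cycleLength)
}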
