Smart cast to 'Bitmap!' is impossible, because 'textBitmap' is a local variable that is captured by a changing closure - android-studio

Whenever I build my project I get this error. Here is the Kotlin class code:
var textBitmap: Bitmap? = null
dynamicItem.dynamicText[imageKey]?.let { drawingText ->
    dynamicItem.dynamicTextPaint[imageKey]?.let { drawingTextPaint ->
        drawTextCache[imageKey]?.let {
            textBitmap = it
        } ?: kotlin.run {
            textBitmap = Bitmap.createBitmap(drawingBitmap.width, drawingBitmap.height, Bitmap.Config.ARGB_8888)
            val drawRect = Rect(0, 0, drawingBitmap.width, drawingBitmap.height)
            val textCanvas = Canvas(textBitmap)
            drawingTextPaint.isAntiAlias = true
            val fontMetrics = drawingTextPaint.getFontMetrics()
            val top = fontMetrics.top
            val bottom = fontMetrics.bottom
            val baseLineY = drawRect.centerY() - top / 2 - bottom / 2
            textCanvas.drawText(drawingText, drawRect.centerX().toFloat(), baseLineY, drawingTextPaint)
            drawTextCache.put(imageKey, textBitmap as Bitmap)
        }
    }
}
I couldn't figure out how to fix it.

Instead of doing nested lets like that, I would prefer guard clauses:
val drawingText = dynamicItem.dynamicText[imageKey] ?: return // or you could assign an empty string `?: ""`
val drawingTextPaint = dynamicItem.dynamicTextPaint[imageKey] ?: return
val textBitmap: Bitmap = drawTextCache[imageKey] ?: Bitmap.createBitmap(drawingBitmap.width, drawingBitmap.height, Bitmap.Config.ARGB_8888).applyCanvas {
    val drawRect = Rect(0, 0, drawingBitmap.width, drawingBitmap.height)
    val fontMetrics = drawingTextPaint.fontMetrics
    val top = fontMetrics.top
    val bottom = fontMetrics.bottom
    val baseLineY = drawRect.centerY() - top / 2 - bottom / 2
    drawingTextPaint.isAntiAlias = true
    drawText(drawingText, drawRect.centerX().toFloat(), baseLineY, drawingTextPaint)
}
drawTextCache.put(imageKey, textBitmap)

Basically Kotlin can't smart cast textBitmap to a non-null Bitmap inside that lambda. You're probably getting the error on the Canvas(textBitmap) call, which can't take a null parameter, because the compiler can't guarantee textBitmap isn't null at that moment.
It's a limitation on lambdas that reference external vars which can be changed: a lambda could potentially be run at some other time, so the compiler can make no guarantees about what has happened to that captured variable in the meantime, or whether something else has modified it.
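Here's a minimal, self-contained sketch of the same limitation (the names are illustrative, not from the question's code): assigning to a captured var inside a lambda blocks smart casting everywhere that var is read.

```kotlin
fun lengthOf(produce: () -> String?): Int? {
    var text: String? = null
    val callback = { text = produce() } // `text` is captured by a changing closure
    callback()
    // if (text != null) return text.length  // won't compile: smart cast to 'String' is impossible
    return text?.length // a safe call (or copying into a local val first) sidesteps the limitation
}
```

Copying the var into a local val before the null check also restores smart casting, because a val can't change behind the compiler's back.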
The fix is pretty easy though, if all you're doing is creating a textBitmap variable and assigning something to it:
// Assign it as a result of the expression - no need to create a var first and keep
// changing the value, no need for a temporary null value, it can just be a val
val textBitmap: Bitmap? =
    dynamicItem.dynamicText[imageKey]?.let { drawingText ->
        dynamicItem.dynamicTextPaint[imageKey]?.let { drawingTextPaint ->
            drawTextCache[imageKey]
                ?: Bitmap.createBitmap(drawingBitmap.width, drawingBitmap.height, Bitmap.Config.ARGB_8888).apply {
                    val drawRect = Rect(0, 0, drawingBitmap.width, drawingBitmap.height)
                    val textCanvas = Canvas(this)
                    drawingTextPaint.isAntiAlias = true
                    val fontMetrics = drawingTextPaint.fontMetrics
                    val top = fontMetrics.top
                    val bottom = fontMetrics.bottom
                    val baseLineY = drawRect.centerY() - top / 2 - bottom / 2
                    textCanvas.drawText(drawingText, drawRect.centerX().toFloat(), baseLineY, drawingTextPaint)
                    drawTextCache.put(imageKey, this)
                }
        }
    }
I'd recommend breaking the bitmap creation out into its own function for readability. Personally I'd also avoid the nested lets (because it's not immediately obvious what you get in which situation), but that's a style choice.

Related

FlexBoxLayout second row doesn't respect justify

I am using a FlexboxLayout that I set up like:
layoutManager.flexDirection = FlexDirection.ROW
layoutManager.justifyContent = JustifyContent.FLEX_START
layoutManager.flexWrap = FlexWrap.WRAP
and the view holder's bind:
fun bind(n: Long) {
    number.text = n.toString()
    val lp: ViewGroup.LayoutParams = number.getLayoutParams()
    if (lp is FlexboxLayoutManager.LayoutParams) {
        (number.layoutParams as FlexboxLayoutManager.LayoutParams).flexGrow = 1.0f
    }
}
With less than a single row this looks like what I want. However, it starts evenly spacing everything in the second row.
How can I make it continue to left-justify new items on the second row?

How can I make Spinner function stop crashing my app when trying to get the selected item?

I'm having trouble getting string values from a spinner without it crashing my app. I want to make a choice according to the pair of selected items in the when block:
val convertFrom = spnConvertFrom.selectedItem.toString()
val convertTo = spnConvertTo.selectedItem.toString()
val value = initialAmount.toString()
var valor2 = value.toDouble()
when {
    //Condicion
    (convertFrom.equals("NIO") && convertTo.equals("USD")) -> currencyConverted.apply {
        text = "Something"
    }
    else -> Toast.makeText(this, "Error", Toast.LENGTH_SHORT).show()
}
I've tried to switch the syntax a bit from == to .equals() (same thing). I read something about a value being null at the moment of the comparison but have no idea how to check for that. I'm quite new to Kotlin and to programming on Android.
There are several lines which could cause an NPE:
The lines where you retrieve the selectedItem. You can handle those by adding the ? operator after the spinner instances.
The line where you convert value to a Double. You can avoid the error by using toDoubleOrNull and providing a default value in case `value` is not numeric.
The line where you set the text of currencyConverted. If it is optional you should add the ? operator there as well:
val convertFrom = spnConvertFrom?.selectedItem.toString()
val convertTo = spnConvertTo?.selectedItem.toString()
val value = initialAmount.toString()
var valor2 = value.toDoubleOrNull() ?: 0.0
when {
    //Condicion
    (convertFrom.equals("NIO") && convertTo.equals("USD")) -> currencyConverted?.text = "Something"
    else -> Toast.makeText(this, "Error", Toast.LENGTH_SHORT).show()
}

Why am I getting a race condition in multi-threading scala?

I am trying to parallelise a p-norm calculation over an array.
To achieve that I tried the following. I understand I can solve this differently, but I am interested in understanding where the race condition occurs:
val toSum = Array(0, 1, 2, 3, 4, 5, 6)

// Calculate the sum over a segment of an array
def sumSegment(a: Array[Int], p: Double, s: Int, t: Int): Int = {
  val res = { for (i <- s until t) yield scala.math.pow(a(i), p) }.reduceLeft(_ + _)
  res.toInt
}

// Calculate the p-norm over an Array a
def parallelpNorm(a: Array[Int], p: Double): Double = {
  var acc = 0L
  // The worker who should calculate the sum over a slice of an array
  class sumSegmenter(s: Int, t: Int) extends Thread {
    override def run() {
      // Calculate the sum over the slice
      val subsum = sumSegment(a, p, s, t)
      // Add the sum of the slice to the accumulator in a synchronized fashion
      val x = new AnyRef {}
      x.synchronized {
        acc = acc + subsum
      }
    }
  }
  val split = a.size / 2
  val seg_one = new sumSegmenter(0, split)
  val seg_two = new sumSegmenter(split, a.size)
  seg_one.start
  seg_two.start
  seg_one.join
  seg_two.join
  scala.math.pow(acc, 1.0 / p)
}

println(parallelpNorm(toSum, 2))
The expected output is 9.5393920142, but some runs instead give me 9.273618495495704 or even 2.23606797749979.
Any recommendations on where the race condition could happen?
The problem has been explained in the other answer, but a better way to avoid this race condition and improve performance is to use an AtomicInteger:
// Calculate the p-norm over an Array a
def parallelpNorm(a: Array[Int], p: Double): Double = {
  val acc = new AtomicInteger(0)
  // The worker who should calculate the sum over a slice of an array
  class sumSegmenter(s: Int, t: Int) extends Thread {
    override def run() {
      // Calculate the sum over the slice
      val subsum = sumSegment(a, p, s, t)
      // Add the sum of the slice to the accumulator in a synchronized fashion
      acc.getAndAdd(subsum)
    }
  }
  val split = a.length / 2
  val seg_one = new sumSegmenter(0, split)
  val seg_two = new sumSegmenter(split, a.length)
  seg_one.start()
  seg_two.start()
  seg_one.join()
  seg_two.join()
  scala.math.pow(acc.get, 1.0 / p)
}
Modern processors can do atomic operations without blocking, which can be much faster than explicit synchronisation. In my tests this runs twice as fast as the original code (with the correct placement of x).
Move val x = new AnyRef{} outside sumSegmenter (that is, into parallelpNorm) -- the problem is that each thread is using its own mutex rather than sharing one.
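The same bug and fix translate directly to other JVM languages. Here is a hedged Kotlin sketch of the atomic-accumulator approach (the question is Scala, but the principle is identical; the names here are illustrative):

```kotlin
import java.util.concurrent.atomic.AtomicLong
import kotlin.concurrent.thread

// Sum of squares over two halves of the array, accumulated atomically by two threads
fun parallelSumOfSquares(a: IntArray): Long {
    val acc = AtomicLong(0) // shared, lock-free accumulator (the AtomicInteger idea above)
    val mid = a.size / 2
    val t1 = thread { acc.addAndGet((0 until mid).sumOf { a[it].toLong() * a[it] }) }
    val t2 = thread { acc.addAndGet((mid until a.size).sumOf { a[it].toLong() * a[it] }) }
    t1.join()
    t2.join()
    return acc.get()
}
```

A lock created inside each thread's body, as in the original code, protects nothing; the fix is either one lock shared by all threads or a shared atomic accumulator like this one.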

Kotlin HashMap contain keys using an array

Is there any way to check whether a HashMap contains a certain set of keys, where the keys are given in an array? When I try something like the code below it returns false.
map.containsKey(arrayOf("2018-01-16"))
If I try the following code it works, but I need to check for multiple keys, and the number of keys I need to search for is not fixed.
map.containsKey("2018-01-16")
You can start from the keys themselves, and use the all function from the standard library:
val map = hashMapOf(...)
val keys = arrayOf("2018-01-16", "2018-01-17", "2018-01-18")
val containsAllKeys = keys.all { map.containsKey(it) }
If you do this a lot and want to have this functionality on the Map type, you can always add it as an extension:
fun <K, V> Map<K, V>.containsKeys(keys: Array<K>) = keys.all { this.containsKey(it) }
val containsAllKeys = map.containsKeys(arrayOf("2018-01-16", "2018-01-17"))
You might also want to overload the extension with another function that takes an Iterable<K> as the parameter.
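That Iterable overload might look like this (a sketch; containsKeys is the same extension name as above):

```kotlin
// Overload accepting any Iterable of keys (List, Set, etc.)
fun <K, V> Map<K, V>.containsKeys(keys: Iterable<K>) = keys.all { this.containsKey(it) }
```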
Map has a keys collection which, like every collection, implements the containsAll method, so you can use it to check whether the keys collection contains all of the keys:
map.keys.containsAll(keysArray.asList())
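For example (hasAll is a hypothetical wrapper around the one-liner above; the dates and values are arbitrary):

```kotlin
// Check that `map` contains every key in `keysArray`
fun hasAll(map: Map<String, Int>, keysArray: Array<String>): Boolean =
    map.keys.containsAll(keysArray.asList())
```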
You could use ArrayList<T> as a key, since its equals differs from Array<T>'s. Consider this test:
class ExampleUnitTest {
    @Test
    fun arrayAsKeyTest() {
        val hashMapOne = HashMap<Array<String>, Int>()
        val stringKeysOne1 = arrayOf("a", "b")
        hashMapOne.set(stringKeysOne1, 2)
        val stringKeysOne2 = arrayOf("a", "b")
        // NO MATCH! stringKeysOne1 == stringKeysOne2 is false (Array uses reference equality)
        assertFalse(hashMapOne.containsKey(stringKeysOne2))

        val hashMapTwo = HashMap<ArrayList<String>, Int>()
        val stringKeysTwo1 = arrayListOf("a", "b")
        hashMapTwo.set(stringKeysTwo1, 2)
        val stringKeysTwo2 = arrayListOf("a", "b")
        // MATCH! stringKeysTwo1 == stringKeysTwo2 is true (although stringKeysTwo1 === stringKeysTwo2 is false)
        assertTrue(hashMapTwo.containsKey(stringKeysTwo2))
    }
}

How to define mergeExpressions for a custom DeclarativeAggregate (in catalyst package)

I don't understand the general approach one takes to determine the mergeExpressions function for non-trivial aggregators.
The mergeExpressions method for something like org.apache.spark.sql.catalyst.expressions.aggregate.Average is straightforward:
override lazy val mergeExpressions = Seq(
  /* sum = */ sum.left + sum.right,
  /* count = */ count.left + count.right
)
The mergeExpressions for CentralMomentAgg aggregators is a bit more involved.
What I would like to do is create a WeightedStddevSamp aggregator modeled after Spark's CentralMomentAgg.
I almost have it working, but the weighted standard deviations it produces are still a little off from what I compute by hand.
I'm having trouble debugging it because I do not understand how to derive the exact logic for the mergeExpressions method.
Below is my code. The updateExpressions method is based on this weighted incremental algorithm, so I'm pretty sure that method is correct. I believe my problem is in the mergeExpressions method. Any hints would be appreciated.
abstract class WeightedCentralMomentAgg(child: Expression, weight: Expression) extends DeclarativeAggregate {
  override def children: Seq[Expression] = Seq(child, weight)
  override def nullable: Boolean = true
  override def dataType: DataType = DoubleType
  override def inputTypes: Seq[AbstractDataType] = Seq(DoubleType, DoubleType)

  protected val wSum = AttributeReference("wSum", DoubleType, nullable = false)()
  protected val mean = AttributeReference("mean", DoubleType, nullable = false)()
  protected val s = AttributeReference("s", DoubleType, nullable = false)()

  override val aggBufferAttributes = Seq(wSum, mean, s)
  override val initialValues: Seq[Expression] = Array.fill(3)(Literal(0.0))

  // See https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Weighted_incremental_algorithm
  override val updateExpressions: Seq[Expression] = {
    val newWSum = wSum + weight
    val newMean = mean + (weight / newWSum) * (child - mean)
    val newS = s + weight * (child - mean) * (child - newMean)
    Seq(
      If(IsNull(child), wSum, newWSum),
      If(IsNull(child), mean, newMean),
      If(IsNull(child), s, newS)
    )
  }

  override val mergeExpressions: Seq[Expression] = {
    val wSum1 = wSum.left
    val wSum2 = wSum.right
    val newWSum = wSum1 + wSum2
    val delta = mean.right - mean.left
    val deltaN = If(newWSum === Literal(0.0), Literal(0.0), delta / newWSum)
    val newMean = mean.left + wSum1 / newWSum * delta // ???
    val newS = s.left + s.right + wSum1 * wSum2 * delta * deltaN // ???
    Seq(newWSum, newMean, newS)
  }
}

// Compute the weighted sample standard deviation of a column
case class WeightedStddevSamp(child: Expression, weight: Expression)
  extends WeightedCentralMomentAgg(child, weight) {

  override val evaluateExpression: Expression = {
    If(wSum === Literal(0.0), Literal.create(null, DoubleType),
      If(wSum === Literal(1.0), Literal(Double.NaN),
        Sqrt(s / wSum)))
  }

  override def prettyName: String = "wtd_stddev_samp"
}
// Compute the weighted sample standard deviation of a column
case class WeightedStddevSamp(child: Expression, weight: Expression)
extends WeightedCentralMomentAgg(child, weight) {
override val evaluateExpression: Expression = {
If(wSum === Literal(0.0), Literal.create(null, DoubleType),
If(wSum === Literal(1.0), Literal(Double.NaN),
Sqrt(s / wSum) ) )
}
override def prettyName: String = "wtd_stddev_samp"
}
For any hash aggregation, the work is divided into four steps:
1) Initialize the buffer (wSum, mean, s).
2) Within a partition, update the buffer for a key given all the input (updateExpressions is called for each input row).
3) After shuffling, merge all the buffers for the same key using mergeExpressions (wSum.left means wSum in one buffer, wSum.right means wSum in the other buffer).
4) Get the final result from the buffer using evaluateExpression.
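The buffer algebra behind steps 2 and 3 can be checked outside Spark with plain arithmetic. Below is a hedged Kotlin sketch (Buf, update, and merge are illustrative names, not Spark API) using the M2 form, where s is the weighted sum of squared deviations as in the question's updateExpressions: running update over all rows must produce the same buffer as updating two partitions separately and merging them.

```kotlin
// Buffer: (wSum, mean, s) as in the question's aggBufferAttributes
data class Buf(val wSum: Double, val mean: Double, val s: Double)

// Mirrors updateExpressions (the weighted incremental algorithm)
fun update(b: Buf, x: Double, w: Double): Buf {
    val newWSum = b.wSum + w
    val newMean = b.mean + (w / newWSum) * (x - b.mean)
    val newS = b.s + w * (x - b.mean) * (x - newMean)
    return Buf(newWSum, newMean, newS)
}

// Mirrors mergeExpressions: combine two partition buffers
fun merge(a: Buf, b: Buf): Buf {
    val newWSum = a.wSum + b.wSum
    val delta = b.mean - a.mean
    val deltaN = if (newWSum == 0.0) 0.0 else delta / newWSum
    val newMean = a.mean + deltaN * b.wSum // uses the OTHER buffer's weight
    val newS = a.s + b.s + a.wSum * b.wSum * delta * deltaN
    return Buf(newWSum, newMean, newS)
}
```

Note that the merged mean adds deltaN * wSum2 (the other buffer's weight), not wSum1 / newWSum * delta.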
I discovered how to write the mergeExpressions function for weighted standard deviation. I actually had it right, but was then using a population variance rather than a sample variance calculation in evaluateExpression. The implementation shown below gives the same result as above, but it is easier to understand.
override val mergeExpressions: Seq[Expression] = {
  val newN = n.left + n.right
  val wSum1 = wSum.left
  val wSum2 = wSum.right
  val newWSum = wSum1 + wSum2
  val delta = mean.right - mean.left
  val deltaN = If(newWSum === Literal(0.0), Literal(0.0), delta / newWSum)
  val newMean = mean.left + deltaN * wSum2
  val newS = (((wSum1 * s.left) + (wSum2 * s.right)) / newWSum) + (wSum1 * wSum2 * deltaN * deltaN)
  Seq(newN, newWSum, newMean, newS)
}
Here are some references
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
http://www.itl.nist.gov/div898/software/dataplot/refman2/ch2/weightsd.pdf
http://people.ds.cam.ac.uk/fanf2/hermes/doc/antiforgery/stats.pdf
https://blog.cordiner.net/2010/06/16/calculating-variance-and-mean-with-mapreduce-python/ (This last one gave me the clue I needed for the mergeExpressions function)
Davies' post gives an outline of the approach, but for many non-trivial aggregators the mergeExpressions function can be quite complex, and it can take advanced math to arrive at a correct and efficient solution. Fortunately, in this case, I found someone who had worked it out.
This solution matches what I work out by hand. It's important to note that the evaluateExpression needs to be modified slightly (to s / ((n - 1) * wSum / n)) if you want sample variance instead of population variance.
