Is there a way to prevent Java bytecode conditions from being reversed? - java-bytecode-asm

I need to map Java bytecode back to the Java source level, especially conditionals. However, since the compiler sometimes (though not always) reverses conditions in the emitted bytecode, this is hard to achieve. Is there a way to tell the Java compiler to stop reversing conditionals?
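For illustration, a minimal sketch of the pattern (the bytecode in the comments is representative javap -c style output, not copied from any specific compiler version; both javac and kotlinc typically negate the source condition so that a single branch can jump over the then-block):

    fun smaller(x: Int, y: Int): Int {
        // source: if (x < y) ...
        // bytecode: if_icmpge END   <- the condition is reversed, because the
        // branch's job is to skip the then-block when the test fails
        if (x < y) {
            return 1   // iconst_1; ireturn
        }
        return 0       // END: iconst_0; ireturn
    }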

Related

Driver code in one language and executors in different languages

How can I use a different programming language to define the logic of my executors than the one I use for the driver? Is this possible at all?
E.g., I would write the driver in Scala, then call different functions written in Java or Python for the distributed processing of the dataset.
You could, but only under certain circumstances, and with some work.
It should be possible to use the code generation feature of Spark SQL/Datasets to implement methods in other languages and call them through JNI or other interfaces.
Furthermore, the generated code is Java code, so technically you are already running Java code, regardless of which language you use to write the Spark program.
As far as I know, it is also possible to use Python UDFs inside a Spark program written in Java or Scala.
With the RDD API it should also be possible to call libraries in other programming languages: Scala-Java mixes are trivial to implement, while non-JVM languages need the appropriate bridging logic.
There is, at least in current versions of Spark, a performance penalty for getting data out of the JVM and back into it, so I would use this sparingly, and only after weighing the performance pros and cons carefully.
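As a small illustration of the JVM-interop point (a sketch in Kotlin, assuming Spark's Java API on the classpath; a Scala or Java driver looks the same apart from syntax):

    import org.apache.spark.sql.SparkSession

    fun main() {
        // Driver written in one JVM language; no bridging logic needed.
        val spark = SparkSession.builder()
            .appName("mixed-language-driver")
            .master("local[*]")
            .getOrCreate()

        // The Dataset operations below go through Spark's code generation,
        // which emits Java code regardless of the driver language.
        val evens = spark.range(1_000_000L).filter("id % 2 = 0").count()
        println("even ids: $evens")

        spark.stop()
    }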

What is the difference between AspectJ and ASM?

As I understand it, both frameworks statically inject monitoring code into class files. So what is the difference?
ASM is a framework/library that provides an API for manipulating existing bytecode and/or generating new bytecode easily.
AspectJ, on the other hand, is a language extension on top of Java with its own syntax, specifically designed to extend the capabilities of the Java runtime with aspect-oriented programming concepts. It includes a compiler/weaver, which can run either at compile time or at runtime.
They're similar in the sense that both achieve their goals by manipulating existing bytecode and/or generating new bytecode.
ASM is more general: it has no opinion about how you might want to modify existing bytecode; it just gives you an API, and you can do whatever you want with it. AspectJ is more specific and more narrowly scoped: it only supports a few predefined AOP constructs, but it gives you an interface (the AspectJ language) that is much easier to work with if you can fit within the constructs it provides.
For most use cases I've seen, AspectJ is more than enough, but in the rare cases where it isn't, ASM can be a good alternative, though you'll need more programming work to achieve similar results.
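To make the "just an API" point concrete, a minimal sketch of ASM's visitor model (assuming the org.ow2.asm:asm artifact; it only counts methods rather than injecting monitoring code, but bytecode injection hooks into the same visitor callbacks):

    import org.objectweb.asm.ClassReader
    import org.objectweb.asm.ClassVisitor
    import org.objectweb.asm.MethodVisitor
    import org.objectweb.asm.Opcodes

    // Walk a compiled class file and count its declared methods.
    fun countMethods(classBytes: ByteArray): Int {
        var methods = 0
        val visitor = object : ClassVisitor(Opcodes.ASM9) {
            override fun visitMethod(
                access: Int, name: String?, descriptor: String?,
                signature: String?, exceptions: Array<out String>?
            ): MethodVisitor? {
                methods++     // called once per declared method
                return null   // returning null skips the method body
            }
        }
        ClassReader(classBytes).accept(visitor, 0)
        return methods
    }

AspectJ would express the equivalent monitoring concern declaratively, as a pointcut plus advice, instead of visitor callbacks.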

Kotlin for game dev

Background:
I'm always searching for a language to replace Java for game development. Kotlin looks promising, with good IDE support and Java interop. But one of the FPS killers for a game (especially on Android) is GC usage. So some libraries (like libGDX) use object pools, custom collections, and other tricks to avoid frequent GC runs. In Java that can be done in a clean way. Some other JVM languages, especially those with functional support, lean heavily on the GC by their nature, so this is hard to avoid.
Questions:
Does Kotlin create any invisible GC overhead in comparison to Java?
Which Kotlin features are better avoided to keep GC work low?
You can write Kotlin code for the JVM that performs the same allocations as the corresponding Java logic would. In both cases you have to check carefully whether a library call allocates new memory on the heap or not. Using Kotlin in combination with libGDX doesn't introduce any invisible GC overhead. It's an effective combination and works well (especially with the ktx extensions).
But there are Kotlin language features which may help you write your code with fewer allocations (see the sketch after this list):
Singletons are a language feature (object declarations, companion objects).
You can create wrapper classes for primitive types which compile down to primitives, yet give you the power of type safety and rich domain models (inline classes).
With the combination of operator overloading and inline functions you can build nice APIs which modify objects without allocating new ones (example: allocation-free vector operations using custom operators).
If you use any kind of dependency injection mechanism or object pooling to connect your game logic and reuse objects, then reified type parameters may help you use it in a very elegant and short way: you can skip passing a class as a type parameter when the compiler knows the actual type.
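A minimal sketch of those four features together (FrameClock, Meters, MutableVec2, and Pool are made-up illustration types, not from libGDX or ktx):

    // Object declaration: a language-level singleton, allocated exactly once.
    object FrameClock {
        var tick: Long = 0
    }

    // Inline (value) class: a typed wrapper that usually compiles down to a
    // bare primitive float at the call site.
    @JvmInline
    value class Meters(val v: Float)

    // Operator overloading on a mutable type: `a += b` mutates `a` in place,
    // allocating no temporary vector per frame.
    class MutableVec2(var x: Float = 0f, var y: Float = 0f) {
        operator fun plusAssign(other: MutableVec2) {
            x += other.x
            y += other.y
        }
    }

    // Reified type parameter: callers write pool.obtain<MutableVec2>()
    // instead of passing MutableVec2::class.java around.
    class Pool {
        private val free = mutableMapOf<Class<*>, MutableList<Any>>()
        fun release(obj: Any) {
            free.getOrPut(obj.javaClass) { mutableListOf() }.add(obj)
        }
        inline fun <reified T : Any> obtain(): T? = take(T::class.java)
        fun <T : Any> take(type: Class<T>): T? =
            free[type]?.removeLastOrNull()?.let(type::cast)
    }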
But there is also another option which does give you different memory-management behavior. Thanks to Kotlin Multiplatform, you can write your game logic as a Kotlin common module and cross-compile it to native code or to JavaScript.
I did this in a sample game project, a Candy Crush clone. It works with Korge, a modern multiplatform game engine for Kotlin. The game runs on the JVM, as an HTML web app, and as a native binary on Windows, Linux, macOS, Android, or iOS.
The natively compiled code has its own, simpler garbage collector and can run faster. So the speed increase and the different memory management may give you the headroom to worry even less about the GC.
In conclusion, I can recommend Kotlin for game dev, even for GC-critical scenarios. In my own projects I tend to create more classes and allocate more memory when I write Kotlin code, but that is a question of programming style, not a technical one.
As a rule of thumb, Kotlin generates bytecode as close as possible to what Java would generate. So, for example, if you use a function as a value, an inner class will be created, just as in Java, but nothing more. There are also optimization tricks like IntArray and inline functions to perform even better.
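For example (a two-line sketch of the IntArray point):

    // Array<Int> boxes every element as a java.lang.Integer object;
    // IntArray compiles to a primitive int[], so iteration allocates nothing.
    val boxed: Array<Int> = arrayOf(1, 2, 3)
    val primitive: IntArray = intArrayOf(1, 2, 3)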
And as @Peter Lawrey said, it's always better to measure for your specific case.
Technically, your questions comparing Kotlin to Java are moot: they will perform the same. But Kotlin will be a better development experience.
If Java is good enough for writing games, then Kotlin can only be better, thanks to developer productivity.
Note: the gaming library LWJGL 3 uses Kotlin in part, with GitHub stats showing 67.3% of the code being Kotlin (the template module looks to be mostly Kotlin). So asking people who work on LWJGL will give you the best answer to this question, since they have a lot of experience in this area.

When/why would I want to use Groovy's @CompileStatic?

I've read this: http://docs.groovy-lang.org/latest/html/gapi/groovy/transform/CompileStatic.html, and this: Should I use Groovy's @CompileStatic if I'm also using Java 7, and I understand there are certainly performance improvements to be had, but is that it? I don't understand exactly what @CompileStatic does.
Are there certain classes where adding @CompileStatic is a no-brainer? Where would I not want it?
To cite part of my answer to Should I use Groovy's @CompileStatic if I'm also using Java 7:
While faster than normal Groovy, it can compile only a subset of Groovy and behaves a bit differently. In particular, all the dynamic features are no longer available.
All of the MOP will be bypassed. Builders won't work in general, though some provide compiler extensions to let them pass. Also, methods are selected at compile time using static types, while Groovy normally uses the methods available at runtime and the runtime types. This can result in different methods being called.
Of course @CompileStatic also provides some safety, since it is the task of a compiler to verify programs at compile time. But since static information is doomed to be incomplete, there can never be 100% safety.
So where is it a no-brainer... well... POGOs, for example, since they usually don't contain all that much code. And of course classes ported from Java to Groovy by copy & paste.
Where would I want it? Well, currently probably on Android, since code size has an impact there and statically compiled code is more compact. Otherwise I personally am fine with not using @CompileStatic at all. It is more a matter of taste. In some cases there is a performance improvement for tight loops, but that requires you to go and identify them by profiling your application first.
Turns out that @CompileStatic can be useful when AOT-compiling Groovy programs, for example with the GraalVM native-image tool. native-image's MethodHandle support is limited to cases where the MethodHandle object is a compile-time constant. Using a compiler configuration like:
    import groovy.transform.CompileStatic

    // Compiler configuration script (passed to groovyc via its configscript
    // option): applies @CompileStatic to every class being compiled.
    withConfig(configuration) {
        ast(CompileStatic)
    }
one can eliminate the dynamic MethodHandle instances in the generated bytecode and let GraalVM's ahead-of-time compilation succeed.

Bytecode vs. Interpreted

I remember a professor once saying that interpreted code was about 10 times slower than compiled. What's the speed difference between interpreted and bytecode? (assuming that the bytecode isn't JIT compiled)
I ask because some folks have been kicking around the idea of compiling Vim script to bytecode, and I just wonder what kind of performance boost that would get.
When you compile things down to bytecode, you have the opportunity to first perform a bunch of expensive high-level optimizations. You design the bytecode to be very easily compiled to machine code and run all the optimizations and flow analysis ahead of time.
The speed increase is thus potentially quite substantial: not only do you skip the whole lexing/parsing stages at runtime, but you also have more opportunity to apply optimizations and generate better machine code.
You could see a pretty good boost. However, there are a lot of factors. You can't just say that compiled code is always about 10 times faster than interpreted code, or that bytecode is n times faster than interpreted code.
Factors include the complexity and verbosity of the language, for example. If a keyword in the language is several characters long and its bytecode is a single byte, it should be quite a bit faster to load the bytecode and jump to the routine that handles it than to read the keyword string and then figure out where to go. But if you're interpreting one of those exotic languages that already has one-byte keywords, the difference might be less noticeable.
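As a sketch of that dispatch difference (a made-up two-opcode accumulator machine, not any real interpreter):

    // One-byte opcodes: dispatch is a single comparison per instruction,
    // instead of re-reading and matching keyword strings at runtime.
    const val OP_ADD = 1
    const val OP_PRINT = 2

    fun execute(code: IntArray) {
        var acc = 0
        var pc = 0
        while (pc < code.size) {
            when (code[pc]) {
                OP_ADD -> { acc += code[pc + 1]; pc += 2 }  // operand follows opcode
                OP_PRINT -> { println(acc); pc += 1 }
                else -> error("bad opcode ${code[pc]} at $pc")
            }
        }
    }

    fun main() {
        // Tokenized once, up front; equivalent to the source "ADD 2; ADD 3; PRINT".
        execute(intArrayOf(OP_ADD, 2, OP_ADD, 3, OP_PRINT))
    }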
I've seen this performance boost in practice, so it might be worth it for you. Besides, it's fun to write such a thing, it gives you a feel for how language interpreters and compilers work, and that will make you a better coder.
Are there actually any mainstream "interpreters" these days that don't compile their code first? (Either to bytecode or something similar.)
For instance, when you run a Perl program directly from its source code, the first thing it does is compile the source into a syntax tree, which it then optimizes and uses to execute the program. In normal situations the time spent compiling is tiny compared to the time spent actually running the program.
Sticking to this example, obviously Perl cannot be faster than well-optimized C code, as it is written in C. In practice, for most things you would normally do with Perl (like text processing), it will be as fast as you could reasonably code it in C, and orders of magnitude easier to write. On the other hand, I certainly wouldn't try to write a high performance math routine directly in Perl.
Also, a lot of "classic" interpreters also include the lex/parse phase along with execution.
For example, consider executing a Python script. When you do that, you have all the costs associated with converting the program text into the internal interpreter data structures, which are then executed.
Now contrast that with executing a compiled Python script, a .pyc file. Here, the lex and parse phases are already done, and you have just the runtime of the inner interpreter.
But if you consider, say, a classic BASIC interpreter, these typically never store the raw text; rather, they store a tokenized form and recreate the program text when you do "LIST". Here the bytecode is much cruder (you don't really have a virtual machine), but your execution gets to skip some of the text processing. That's all done when you enter the line and hit ENTER.
It depends on your virtual machine. Some of the faster virtual machines (the JVM, for example) are approaching the speed of C code. So how fast does your interpreted code run compared to C?
Don't expect that converting your interpreted code to bytecode will make it run as fast as Java (at near-C speeds); years of performance work have gone into that. But you should see a significant speed boost.
Emacs compiles its Lisp to bytecode, with increased performance. It might be worth a look for you.
I've never come across a Vim script that was slow enough for the difference to be noticeable. Assuming a script primarily calls built-in, native-code operations (regexes, block operations, etc.) that are implemented in the editor's core, even a 10x speed-up of the "glue logic" in the scripting layer would be insignificant.
Still, profiling is the only way to be really sure.
