Elm compiler running forever, computer just getting hot

I'm not sure what's causing this issue, but in a project I'm building, the compiler is taking hours just to compile a single module. The total size of my codebase is 352KB, and no module is larger than 10KB. I am using a Native port, but it's trivial; I'm just fetching Date.now() with it.
Is there anything well known that would cause the Elm compiler to take forever to compile? I don't have many dependencies, but I'm using Html a lot. I would really appreciate any hints as to what could cause this.
Edit
So it turns out large case expressions will cause the optimizer to take a long time, as of 0.16. Here's the discussion on Elm-Discuss bringing up the issue, and a gist of the nasty case match.
To be thorough, and to leave a carrot out there: why does Elm's compiler take this route for case matching? What's the underlying machinery going on here? Why would the compiler spend more than an hour optimizing 60+ pattern matches in a case expression?

Large case expressions will cause the optimizer to take a long time, as of 0.16. Here's the discussion on Elm-Discuss bringing up the issue, and a gist of the nasty case match.
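For illustration, the offending code was shaped roughly like the sketch below (the type and names here are invented, not taken from the actual gist). A single wide, flat case expression with dozens of branches was enough to stall the 0.16 optimizer:

```elm
-- Hypothetical sketch of the kind of case expression that triggered the
-- slowdown: one flat match with 60+ alternatives. Each branch is trivial;
-- it's the sheer number of patterns that made the 0.16 optimizer crawl.
type Action
    = MoveLeft
    | MoveUp
    | MoveRight
    | MoveDown
    | NoOp

keyToAction : Int -> Action
keyToAction code =
    case code of
        37 -> MoveLeft
        38 -> MoveUp
        39 -> MoveRight
        40 -> MoveDown
        -- ...imagine 60+ more branches like these...
        _ -> NoOp
```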

Related

When to use the "cold" built-in codegen attribute in Rust?

There isn't much information on this attribute in the reference document other than
The cold attribute suggests that the attributed function is unlikely to be called.
How does it work internally, and when should a Rust developer use it?
It tells LLVM to mark a function as cold (i.e. not called often), which changes how the function is optimized: calls to this code are potentially slower, and calls to non-cold code are potentially faster.
Mandatory disclaimer about performance tweaking:
You really should have some benchmarks in place before you start marking various bits of code as cold. You may have some ideas about whether something is in the hot path or not, but unless you test it, you can't know for sure.
FWIW, there are also the perma-unstable LLVM intrinsics likely and unlikely, which do a similar thing, but these have been known to actually hurt performance, even when used correctly, by preventing other optimizations from happening. Here's the tracking issue: https://github.com/rust-lang/rust/issues/26179
As always: benchmark, benchmark, benchmark! And then benchmark some more.
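As a concrete illustration, here is a minimal sketch of the usual pattern (the function names are made up): move a rarely taken failure path into its own function and mark it #[cold], so the optimizer can keep the happy path tight. Whether it actually helps is, as above, something only a benchmark can tell you.

```rust
// The error path lives in a separate function marked #[cold], hinting to
// LLVM that calls to it are unlikely. #[inline(never)] keeps the cold
// code out of the hot function's body entirely.
#[cold]
#[inline(never)]
fn handle_overflow() -> u64 {
    eprintln!("overflow, clamping to u64::MAX");
    u64::MAX
}

fn checked_double(x: u64) -> u64 {
    match x.checked_mul(2) {
        Some(v) => v,              // expected, hot path
        None => handle_overflow(), // rare path, hinted cold
    }
}

fn main() {
    assert_eq!(checked_double(21), 42);
    assert_eq!(checked_double(u64::MAX), u64::MAX);
}
```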

Which nodejs v8 flags for benchmarking?

For comparison of different libraries with the same functionality, we compare their execution time. This works great. However, there are v8 flags that impact execution time and skew results.
Some flags that are relevant are: --predictable, --always-opt, --no-opt, --minimal.
Question: Which V8 flags should typically be set to run a meaningful benchmark? What are the tradeoffs?
Edit: The problem is that a benchmark typically runs the same code over and over to get a good average. This might lead to V8 optimizing code that it would typically not optimize.
V8 developer here. You should definitely run benchmarks with the default configuration. It is the responsibility of the benchmark to be realistic. An unrealistic benchmark cannot be made meaningful with engine flags. (And yes, there are many many unrealistic and/or otherwise meaningless snippets of code out there that people call "benchmarks". Remember, if you can't measure a difference with a realistic benchmark, then any unmeasurable difference that might exist is irrelevant.)
In particular:
--predictable
Absolutely not. Detrimental to performance. Changes behavior in unrealistic ways. Meant for debugging certain things, and for helping fuzzers find reproducible test cases (at the expense of being somewhat unrealistic), not for anything related to performance testing.
--always-opt
Absolutely not. Contrary to what a naive reader of this flag's name might think, this does not improve performance, on the contrary; it mostly causes V8 to waste a bunch of CPU cycles on useless work. This flag is barely ever useful at all; it can sometimes flush out weird corner case bugs in the compilation pipeline, but most of the time it just creates pointless work for V8 developers by creating artificial situations that never occur in practice.
--no-opt
Absolutely not. Turns off all optimizations. Totally unrealistic.
--minimal
That's not a V8 flag I've ever heard of. So yeah, sure, pass it along, it won't do anything (beyond printing an "unknown flag" warning), so at least it won't break anything.
Using default flags seems like the best way to me, since that's what most people will use.
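In that spirit, a harness might look like the sketch below (the workload is a made-up placeholder), run with a plain node bench.js and no flags. The warm-up loop lets V8's normal tiering happen the way it would in a long-running real application, instead of forcing it with --always-opt:

```js
// Hypothetical micro-benchmark under default V8 flags.
function workload(n) {
  let acc = 0;
  for (let i = 0; i < n; i++) acc += Math.sqrt(i);
  return acc;
}

const ITERATIONS = 10000;

// Warm-up: exercises the same code path as the measurement, results
// discarded, so V8 can tier the function up naturally before timing starts.
for (let i = 0; i < ITERATIONS; i++) workload(1000);

// Steady-state measurement.
const t0 = process.hrtime.bigint();
for (let i = 0; i < ITERATIONS; i++) workload(1000);
const t1 = process.hrtime.bigint();

console.log(`${Number(t1 - t0) / ITERATIONS / 1e3} µs per call`);
```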

How to get started with browser exploit development?

I got very interested in browser exploitation, particularly in memory corruption bugs like UAF or type confusion vulnerabilities. I've started learning, but I can't understand some of the concepts.
First, I know that fuzzing is one of the methods for finding bugs, but I'm not sure how fuzzing can find such complex vulnerabilities.
Second, I want to find out whether it is possible to find UAF bugs manually.
Third, can you please explain in detail how UAF bugs occur in browsers? I know that a UAF bug happens when freed memory is reused by code, and that when attacker-controlled data is placed in the freed memory, you get code execution. But I can't understand how people produce the HTML or PoC code that crashes the software via the UAF bug.
Fourth, what are type confusion vulnerabilities?
For web browsers:
1. Fuzzing is an efficient way to find bugs, but using an existing fuzzer will in most cases lead you to existing vulnerabilities (already reported to the vendors). In any case, after finding a bug, manual work is needed to clean up the PoC code.
2. Spending time improving your fuzzing strategy and generating targeted test cases (focus on memory allocation, memory freeing, copying of references...) is the best manual work you can do to find a UAF vulnerability.
3. You can find several tutorials about UAF on the internet.
Good luck
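To make the UAF mechanics from the question concrete, here is a minimal C sketch (not browser code; the struct and names are invented, and the program deliberately exhibits undefined behavior). A browser hits the same pattern when, say, a DOM object is freed while script still holds a reference to it, and the attacker grooms the heap so controlled data lands in the freed slot:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for a browser-internal object with a vtable-like pointer. */
typedef struct {
    void (*callback)(void);
} Handler;

static void greet(void) { puts("legitimate callback"); }

int main(void) {
    Handler *h = malloc(sizeof *h);
    h->callback = greet;
    h->callback();               /* works as intended */

    free(h);                     /* object freed, but the stale pointer survives */

    /* Heap-grooming stand-in: allocators often reuse the freed chunk for the
       next same-size request, so this attacker-controlled buffer can overlap
       the dead Handler. */
    unsigned char *attacker = malloc(sizeof(Handler));
    memset(attacker, 0x41, sizeof(Handler));

    h->callback();               /* use-after-free: likely jumps to 0x4141... */
    return 0;
}
```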

Monitoring GHC activity

If GHC takes a long time to compile something, is there a way to find out what it's doing?
Firstly, it would be nice to know if I've actually crashed the compiler (i.e., put it into some sort of infinite loop somehow), or whether it's actually making progress, but just very slowly.
Secondly, it would be nice to know exactly what part of the compilation process GHC is having trouble with. Is it the parsing, or desugaring, or type-checking, or Core optimisation, or code generation, or...?
Is there some way to monitor what's going on? (Bearing in mind that if GHC is taking a long time, that probably means it's doing a lot of work, so if you ask for too much output it's going to be huge!)
GHC already tells you which modules it's trying to (re)compile. In my case, the problem is a single self-contained module. I'd like to know where GHC is getting stuck.
Following Daniel Fischer's comment, I tried running GHC with different verbosity options.
-v1: Produced a bit more output, but nothing during the main compilation step.
-v2: Tells you which step GHC is currently on (parser, desugarer, type checker, simplifier, etc.). This is pretty much what I actually wanted.
-v3: Appears to make the simplifier actually dump what it's doing to the console - bad idea while compiling 8MB of source code!
So it seems that -v2 is the place to start.
(In the specific case of the program that prompted this question, it seems GHC is spending forever in the type checking phase.)
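In command form (the module name here is hypothetical):

```
# -v2 announces each phase as it begins - enough to see where GHC is
# stuck without the -v3 firehose of simplifier output.
ghc -v2 MyModule.hs
```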

Expression trees vs IL.Emit for runtime code specialization

I recently learned that it is possible to generate code at runtime in C#, and I would like to put this feature to use. I have code that does some very basic geometric calculations, like computing line-plane intersections, and many of the calculations are performed for the same plane or the same line over and over again. By generating specialized code for some of these methods, I think I should be able to gain some performance.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing the MSDN documentation, I've come across two possible strategies for generating code at runtime: expression trees and IL.Emit. Using expression trees seems much easier, because there is no need to learn anything about OpCodes and other MSIL-related intricacies, but I'm not sure whether expression trees are as fast as hand-written MSIL. Are there any suggestions on which method I should go with?
The performance of both is generally the same: expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL with the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical optimisation of hand-emitted IL compared to what expression trees emit.
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the framework code that actually traverses expression trees and emits the IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many of the problems I came across, when what you want to achieve is known at compile time, you will not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my own practice, and your domain may be different, but if performance is critical, I would rather look at native code and highly optimized libraries. Some of the work I have done would be snail-slow without LAPACK/MKL. But that is a piece of advice you didn't ask for, so take it with a grain of salt.
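To make the question's specialization idea concrete, here is a minimal expression-tree sketch (the plane-evaluation function is invented for illustration). The plane's coefficients are baked into the compiled delegate as constants, and Compile() emits the IL for you - the same machinery you would otherwise drive by hand with ILGenerator:

```csharp
using System;
using System.Linq.Expressions;

class Specialize
{
    // Build a delegate for a*x + b*y + c*z + d with a..d folded in as
    // constants, so the generated code contains literals instead of
    // field loads - one plausible form of the specialization asked about.
    static Func<double, double, double, double> MakePlaneEval(
        double a, double b, double c, double d)
    {
        var x = Expression.Parameter(typeof(double), "x");
        var y = Expression.Parameter(typeof(double), "y");
        var z = Expression.Parameter(typeof(double), "z");

        var body = Expression.Add(
            Expression.Add(
                Expression.Multiply(Expression.Constant(a), x),
                Expression.Multiply(Expression.Constant(b), y)),
            Expression.Add(
                Expression.Multiply(Expression.Constant(c), z),
                Expression.Constant(d)));

        // Compile() walks the tree and emits IL for a dynamic method.
        return Expression.Lambda<Func<double, double, double, double>>(
            body, x, y, z).Compile();
    }

    static void Main()
    {
        var evalPlane = MakePlaneEval(1, 2, 3, -6);
        Console.WriteLine(evalPlane(1, 1, 1)); // 0: the point is on the plane
    }
}
```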
If I were in your situation, I would try the alternatives from high level to low level, in order of increasing time and effort and decreasing reusability, and I would stop as soon as the performance was good enough for the time being, i.e.:
first, I'd check to see if Math.NET, LAPACK or some similar numeric library already has similar functionality, or I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check the Roslyn Project (even though it is still prerelease);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these worked out would I be desperate enough to try emitting IL at runtime.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.
