Given that 90% of the time developers are working with debug builds, why exactly is deploying release builds preferable?
Size, speed, and memory use.
Your users aren't going to need all the debugging crud you work with, so stripping the debug symbols reduces binary size and memory consumption (and therefore increases speed, as less time is spent loading the program's components into RAM).
When your application crashes, you usually want a traceback and the details. Your users really couldn't care less about that.
There is nothing wrong with deploying a debug build. People commonly don't because non-debug builds tend to be more efficient (e.g. the assertion code is removed and the compiler doesn't insert tracing/debugger info in the object code).
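For example, here is a minimal C sketch of the assertion point (the same idea applies to Java, where assert statements only run when the JVM is started with -ea): compiling with -DNDEBUG, the usual release setting, removes the assert and its argument expression from the object code entirely.

```c
/* Minimal sketch: assertion code that exists only in debug builds.
 * Build debug:   gcc -g demo.c
 * Build release: gcc -O2 -DNDEBUG demo.c   (the assert compiles to nothing) */
#include <assert.h>
#include <stdio.h>

static int divide(int a, int b) {
    assert(b != 0 && "divisor must be non-zero");  /* removed when NDEBUG is set */
    return a / b;
}

int main(void) {
    printf("%d\n", divide(10, 2));
    return 0;
}
```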
With Java 1.1, debug builds could be much slower and they needed more disk space (at that time, disks of 120 megabytes were huge - try to fit your home directory on such a small device...).
Today, neither is an issue anymore. The Java runtime ignores debug symbols, and with the JIT the code is optimized at runtime, so compile-time optimizations don't mean that much anymore.
One big advantage of debug builds is that the code in production is exactly what you tested.
That entirely depends on the build configuration, which may have compilation optimisations when doing a "release" build. There are other considerations, dependent on language; for example, when compiling an application using Visual C++, you can compile against debug versions of the C RunTime (CRT), which you are not permitted to redistribute.
I think the key here is to determine what exactly makes a build a "debug build". Mainly, production builds are more optimized (memory usage, performance, etc.), and more care has been taken to build them in a way that makes them easier to deploy and maintain. One main thing I run into the most is that debug builds have a lot of unneeded logging that results in a pretty dramatic performance hit (a sketch below shows the pattern).
Apart from that, there is no real reason why debug builds shouldn't be deployed to production environments.
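As a concrete illustration of that logging cost, here is a minimal C sketch (the macro name DEBUG_BUILD is just an illustrative choice): the debug build pays for a million formatted writes, while the release build compiles the log statements away entirely.

```c
/* Sketch: debug-only logging that is compiled out of release builds.
 * Build debug:   gcc -DDEBUG_BUILD demo.c   (LOG calls run, and cost time)
 * Build release: gcc -O2 demo.c             (LOG expands to nothing) */
#include <stdio.h>

#ifdef DEBUG_BUILD
#define LOG(...) fprintf(stderr, __VA_ARGS__)
#else
#define LOG(...) ((void)0)  /* no code emitted in release builds */
#endif

int main(void) {
    long sum = 0;
    for (long i = 0; i < 1000000; i++) {
        LOG("iteration %ld\n", i);  /* a million writes in debug, zero in release */
        sum += i;
    }
    printf("sum = %ld\n", sum);
    return 0;
}
```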
Can someone explain the architecture of Go? Is it faster than Node.js, and if so, what makes it faster? Go was developed using C/C++, so does Go beat C/C++ in performance? And is the only difference between C/C++ and Go that Go offers more functions that make it easier for developers to code?
Note that Go 1.5 will feature its compiler, assembler, linker, and runtime written entirely in Go.
The goal is to have Go written entirely in Go and to rid the codebase of any C code. The only exception is the C code used by Cgo.
(See Go 1.5 Bootstrap plan)
The speed is more about the native code generated and the simplicity of the language (no generics means less dynamic data to keep track of).
Go hasn't been always fast: "Why is go language so slow?".
It improves incrementally, notably on the garbage collection and stack management sides.
Uvelichitel mentions below benchmark results from the Computer Language Benchmarks Game (x64 Ubuntu: Intel Q6600, one core).
As for "Golang Architecture", this doesn't really apply here (as detailed in this answer):
Go has no VM like the Java JVM. It compiles straight to metal, like C/C++.
The Go 1.3 Linker overhaul mentions:
The current linker performs two separable tasks.
First, it translates an input stream of pseudo-instructions into executable code and data blocks, along with a list of relocations.
Second, it deletes dead code, merges what’s left into a single image, resolves relocations, and generates a few whole-program data structures such as the runtime symbol table.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
Improve this question
Would it be possible to make Shake reactive, using inotify (or whatever git-annex and Yesod use), so that if the filesystem ever changes in a way that implies a rule should execute, it does so at the earliest opportunity?
The author of Shake, Neil Mitchell, answered this by saying:
There are a few ways to approach this:
1. You could just rerun Shake every time it detects something has changed. Shake is highly optimised for fast rebuilds; if the change requires doing a compile, the time for Shake to figure out what to rebuild is probably minimal. This requires no changes to Shake (see the watcher sketch after this answer).
2. There are certain things Shake does on startup, like reading the Shake database. If there is demand, and that turns out to be noticeable in time, I would happily provide a "rerun Shake cheaply" API of some sort - it's not that difficult to do.
3. When Shake does do a rebuild check, the most expensive thing it does is checking file modification times. If the inotify layer gave a list of the files that had changed, I could recheck only the things that had actually changed. For a massive project you're likely to see ~1s spent checking modification times, so it probably buys you a little, and isn't too hard to implement.
4. If Shake is actively building, and then something changes, you could throw an exception, kill whatever is being built, and restart Shake. Shake has been thoroughly tested with exceptions being thrown at it, and does the right thing. I know at least one person who uses Shake in this way.
5. Finally, if Shake is actively building, you could dynamically terminate just those rules whose inputs have changed and go again. Shake could support this model, but it would be a reasonable amount of work and require re-engineering some pieces. That would be the full reactive model, but I suspect it only starts to be a benefit when you have a massive number of files and a few files are changing almost continuously while most files aren't.
We also determined that combining Shake with a utility like Hobbes (also on Hackage) can make it possible to do reactive builds.
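Here is a minimal sketch of the first approach in C, using the Linux inotify API directly. The watched path "." and the "shake" build command are placeholders, and a real watcher would add watches recursively, one per directory.

```c
/* Sketch: block on inotify events and rerun the build on every change.
 * Shake's own fast rebuild check then decides what is actually dirty. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void) {
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Watch the current directory for writes, creations, deletions, moves. */
    if (inotify_add_watch(fd, ".",
                          IN_CLOSE_WRITE | IN_CREATE | IN_DELETE | IN_MOVE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);  /* blocks until something changes */
        if (n <= 0) break;
        system("shake");  /* placeholder for your actual build command */
    }
    close(fd);
    return 0;
}
```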
I'm trying to hook up a real-time crash reporting service like Airbrake, BugSense or TestFlight's SDK, but I'm wondering whether the crash reports generated from crashes are any good when you compile your MonoTouch project using the LLVM compiler.
When you're configuring an iPhone build, the project settings > iPhone Build > Advanced tab says LLVM is "Experimental, not compatible with debug mode". This is why I'm questioning the stack traces from the crash reports.
There are several points to consider here:
a) enabling debug on your builds:
tells the compilers to emit debugging symbols (e.g. the .mdb files), which include a lot of information (variable names, scopes, line numbers...);
adds extra debugging code to your application (e.g. to connect the application, on the device, to the debugger on your Mac);
tells the compiler (e.g. the AOT compiler) to disable some optimizations (ones that would make debugging harder).
This results in larger, slower applications that contain a lot of data you don't want people to access (e.g. if you fear reverse engineering). For releases it's a no-win situation for everyone.
b) using the LLVM compiler won't work with debug mode. It's generally not an issue since, when debugging, you'll likely want the build process to be as fast as possible (and LLVM is slower to build). A problematic case is if your bug shows up only on LLVM builds.
c) The availability of managed stack traces does not require debug symbols. They are built from the metadata available in your .dll and .exe files. But when debugging symbols are available, the stack trace will include the line numbers and file names for each stack frame.
d) I have never used the tools you mentioned, but I do believe them to be useful :-) You might wish to ask specific questions about them (wrt MonoTouch). Otherwise I think it's worth testing to see if the level of detail differs (and if the extra details are of any help to you). IMO I doubt it will bring you more than the actual 'cost' of shipping 'debug' builds. To test:
first create a "crash me" feature in your application;
then compare reported results from non-LLVM "release" and "debug" builds;
next compare the non-LLVM "release" and LLVM "release" builds;
It would be nice to post your experience of the above: here, on the MonoTouch mailing list and/or in a blog entry :-)
I am eager to find a tool that allows me to trace the behaviour of the pthreads in a program I am working on. I am aware that similar questions have been asked before; see here and here.
As it turns out, the tools that are recommended are either not what I need or seem impossible to get working on my machine, which is Debian 6, 32-bit all over, on x86 architecture.
EZtrace in combination with ViTE seems to be what I am looking for, but unfortunately I cannot get it to work. (The tools won't compile in some versions, other versions crash; I never really saw it work. A different computer (Ubuntu 10.04 x64) shows other bugs.)
Is there a tracing solution that is capable of visualizing the behavior of a pthreaded program on Linux, that is actually known to work?
Valgrind's Tool Suite [Linux and OS X]
I've used Memcheck and it works as advertised. I haven't used the visualization tools yet, however. Not sure if the output of Helgrind can be adapted for viewing with KCachegrind.
The Valgrind distribution includes four [sic] useful debugging and profiling tools:
Memcheck detects memory-management problems, and is aimed primarily at C and C++ programs. When a program is run under Memcheck's supervision, all reads and writes of memory are checked, and calls to malloc/new/free/delete are intercepted. As a result, Memcheck can detect if your program:
Accesses memory it shouldn't ...
Uses uninitialised values in dangerous ways.
Leaks memory.
Does bad frees of heap blocks (double frees, mismatched frees).
Passes overlapping source and destination memory blocks to memcpy() and related functions.
Memcheck reports these errors as soon as they occur, giving the source line number at which it occurred...
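As an illustration, here is a small, deliberately buggy C program exercising several of the error classes above; the quoted phrases in the comments are the kind of messages Memcheck produces.

```c
/* Run with: valgrind --tool=memcheck --leak-check=full ./a.out
 * (build with -g so Memcheck can report source line numbers) */
#include <stdlib.h>

int main(void) {
    char *p = malloc(16);
    p[16] = 'x';        /* heap overrun: an "Invalid write" */

    int uninit;
    if (uninit > 0)     /* conditional jump on an uninitialised value */
        p[0] = 'y';

    free(p);
    free(p);            /* double free: an "Invalid free()" */

    malloc(32);         /* never freed: reported by --leak-check=full */
    return 0;
}
```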
Cachegrind is a cache profiler. It performs detailed simulation of the I1, D1 and L2 caches in your CPU and so can accurately pinpoint the sources of cache misses in your code...
Callgrind, by Josef Weidendorfer, is an extension to Cachegrind. It provides all the information that Cachegrind does, plus extra information about callgraphs. It was folded into the main Valgrind distribution in version 3.2.0. Available separately is an amazing visualisation tool, KCachegrind, which gives a much better overview of the data that Callgrind collects; it can also be used to visualise Cachegrind's output.
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations...
Helgrind is a thread debugger which finds data races in multithreaded programs. It looks for memory locations which are accessed by more than one (POSIX p-)thread, but for which no consistently used (pthread_mutex_) lock can be found. Such locations are indicative of missing synchronisation between threads, and could cause hard-to-find timing-dependent problems. It is useful for any program that uses pthreads. It is a somewhat experimental tool, so your feedback is especially welcome here.
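To make that concrete, here is a minimal C sketch of exactly the situation Helgrind looks for: two threads touch the same location, but only one of them takes the lock.

```c
/* Build: gcc -g race.c -pthread
 * Run:   valgrind --tool=helgrind ./a.out   (reports a possible data race) */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *with_lock(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *without_lock(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;  /* no consistently used lock: Helgrind flags this access */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, with_lock, NULL);
    pthread_create(&b, NULL, without_lock, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);
    return 0;
}
```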
Check out LTTng, the Linux Trace Toolkit: http://lttng.org/
DIVINE can draw a graph of the state space and check for violated assertions.
An important part of mobile development, especially when you are talking about mobile games, is dealing with the application size restrictions. Some devices enforce their own size limits, while all the carriers have their own size requirements for applications to be released in their deck space.
My question is: is there a Java obfuscation application that gets better size-reduction results than the other Java obfuscators out there?
I use ProGuard because it is the default NetBeans obfuscator, and you can get fairly good size-reduction results out of it (by the way, the version of ProGuard that comes with NetBeans 6.1 is 3.7; there are newer versions that get even better results, and I recommend getting the latest). But I'm interested in what else is out there and whether it does a better job than ProGuard.
My Conclusion:
I appreciate the responses. Carlos, your response was enough to convince me that ProGuard is the current way to go. I could still be convinced otherwise, but I don't feel bad about my current setup.
I have also had some issues with ProGuard obfuscation when running on some phones, but not too many. I was always able to fix the problem by not using the ProGuard argument "-overloadaggressively". Just something to keep in mind if you are experiencing odd behavior related to obfuscation.
Thanks again.
I also prefer ProGuard, for both its size reduction and breadth of obfuscation - see http://proguard.sourceforge.net/. I don't necessarily have size constraints other than download speeds, but I haven't found anything that shrinks further.
When it comes to J2ME and obfuscation, it pays to be a bit cautious. ProGuard is the best choice because of the many years it has been in development and the many bugfixes it has received. I remember the version transition between 2.X and 3.X and how it broke many of my (then) employer's builds. This happened because some of the changes that enabled more size savings also broke the class files in subtle ways on some handsets, while being perfectly fine on others and on desktop JVMs.
Nowadays ProGuard 3.11 is the safest choice in obfuscators. 4.XX is probably fine if you don't have to support very old handsets.
Strange that no one has remembered that ProGuard can not just shrink and obfuscate the code, but optimize it as well. The latest versions allow you to specify several optimization passes (by default there is a single pass); I may specify, say, 9 passes.
After I decompile my classes I can hardly recognise them; ProGuard restructures a lot of method calls. All it takes is just a bit of tweaking of this wonderful app. So I think ProGuard is the way to go - just don't forget to adjust it a little. It also has a very nice manual. A minimal configuration sketch is below.
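For reference, here is a minimal ProGuard configuration sketch for a J2ME MIDlet. The jar names and library path are placeholders; -optimizationpasses and the deliberately commented-out -overloadaggressively are the options discussed above.

```
-injars  in.jar
-outjars out.jar
-libraryjars midpapi20.jar        # placeholder: your MIDP/CLDC library jars

-optimizationpasses 9             # multiple optimization passes (default is 1)
# -overloadaggressively           # extra shrinking, but odd behavior on some phones

# Keep the MIDlet entry points so the obfuscated jar still starts:
-keep public class * extends javax.microedition.midlet.MIDlet
```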