On a machine I observe that org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.1 is running with lots of threads.
It seems to be related to some build, but I don't understand why it stays alive with so many threads, and whether the settings of
-XX:MaxPermSize -XX:+HeapDumpOnOutOfMemoryError -Xms1024m -Xmx2048
are configured by a developer or picked up based on some automatic detection.
The title question and some of the additional ones are answered by the Gradle documentation itself.
In short, the Gradle daemon keeps a warm JVM around for running Gradle builds, and Gradle's parallel feature picks a number of worker threads heuristically, based on the specification of the machine it runs on.
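As for the JVM arguments in particular: they are not detected from the machine. They are either Gradle's built-in defaults or whatever someone configured for the build, typically via org.gradle.jvmargs in gradle.properties. A minimal sketch, with illustrative values that are not necessarily what your build uses:

# gradle.properties (project root or ~/.gradle/) - illustrative values, not auto-detected
org.gradle.daemon=true
org.gradle.jvmargs=-Xms1024m -Xmx2048m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError
org.gradle.parallel=true
# cap the number of parallel workers instead of the machine-derived default
org.gradle.workers.max=4

So if you see specific -Xms/-Xmx values on the daemon process, someone most likely put them there in a gradle.properties file rather than Gradle having inferred them.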
I know of the existence of nvvp and nvprof, of course, but for various reasons nvprof does not want to work with my app that involves lots of shared libraries. nvidia-smi can hook into the driver to find out what's running, but I cannot find a nice way to get nvprof to attach to a running process.
There is a flag --profile-all-processes which does actually give me a message "NVPROF is profiling process 12345", but nothing further prints out. I am using CUDA 8.
How can I get a detailed performance breakdown of my CUDA kernels in this situation?
As the comments suggest, you simply have to make sure to start the CUDA profiler (nowadays Nsight Systems or Nsight Compute, no longer nvprof) before the processes you want to profile. You could, for example, configure it to run on system startup.
Your inability to profile your application has nothing to do with it being an "app that involves lots of shared libraries" - the profiling tools profile such applications just fine.
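In practice that means launching the target under the profiler, or starting a system-wide session first. A rough sketch for CUDA 8's nvprof (./myapp is a placeholder for your binary):

# system-wide session, started before the target processes (CUDA 8 nvprof)
nvprof --profile-all-processes -o /tmp/timeline.%p.nvprof
# or wrap the launch of the application directly
nvprof -o /tmp/timeline.%p.nvprof ./myapp
# rough equivalent on current toolkits
nsys profile -o myapp-report ./myapp

The per-process output files can then be imported into the visual profiler (nvvp) for a kernel-level breakdown.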
I've been looking for a process-attach solution too, but found no existing tool.
A possible direction is to use the lower-level CUDA APIs to build such a tool, or to integrate them into your own. See CUPTI: https://docs.nvidia.com/cupti/r_main.html#r_dynamic_detach
I recently upgraded from Jenkins 1.6 to 2.5. After I did this, I noticed very high CPU usage, sometimes over 300% (there are only 4 cores, so I don't think it could go over 400%). I'm not sure where to begin debugging this, but here's a thread dump and some screenshots from top/htop
(htop and top screenshots not reproduced here.)
As it turned out, my issue was that several jobs had thousands of old builds. This was fine in Jenkins 1.6, but it's a problem in 2.5 (I suspect Jenkins now tries to load all the builds into memory when you view the job overview page). To fix it, I just deleted most of the old builds from the problem jobs using this strategy and then reloaded Jenkins. Worked like a charm!
I also set the "discard old builds" plugin to keep only the 50 most recent builds, to prevent this from happening again.
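If the UI is too slow to delete builds job by job, one blunt alternative (assuming a standard layout where builds live under $JENKINS_HOME/jobs/<job>/builds, and after taking a backup) is to prune the build directories on disk:

# JOB_NAME is a placeholder; review what the pipeline selects before adding rm!
cd "$JENKINS_HOME/jobs/JOB_NAME/builds"
# keep the 50 newest numbered build directories, delete the rest
ls -1dt [0-9]* | tail -n +51 | xargs -r rm -rf

Afterwards use Manage Jenkins > Reload Configuration from Disk so Jenkins notices the deleted builds.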
Whenever a request comes in, Jenkins spawns some threads to serve it. After the upgrade, Jenkins may have been running at high throttle for a while. Please check the CPU and memory usage of the Jenkins server in the following scenarios:
Jenkins is idle and no other apps are running on the server.
A build is scheduled and no other apps are running on the server.
Then compare the behavior in the two cases to determine whether Jenkins itself, or Jenkins running in parallel with other apps, is really causing the trouble (a couple of commands for watching this are sketched below).
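For example, assuming Jenkins runs as a java process you can locate with pgrep (adjust the pattern to your installation):

# find the Jenkins JVM and watch its CPU/memory interactively
JENKINS_PID=$(pgrep -f jenkins.war | head -n 1)
top -p "$JENKINS_PID"
# or take snapshots of CPU, memory and thread count instead
ps -o pid,pcpu,pmem,nlwp,etime -p "$JENKINS_PID"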
As @vlp said, try to monitor the Jenkins application via VisualVM, using jstatd to hook in. Refer to this link to configure VisualVM with jstatd.
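A minimal server-side sketch, assuming a Java 8 JDK on the Jenkins host (the policy file just grants the JDK's own tools the permissions jstatd needs; the file name is arbitrary):

// jstatd.all.policy (Java 8 layout; use on trusted networks only)
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};

# start the jstatd daemon on the Jenkins host
jstatd -J-Djava.security.policy=jstatd.all.policy -p 1099

Then add the machine as a remote host in VisualVM, and the running JVMs, including Jenkins, should show up under it.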
I have noticed a couple of reasons for abnormal CPU usage with my Jenkins install on Windows 7 Ultimate.
I had recently upgraded from v2.138 to v2.140 plus added a few additional plugins. I started noticing a problem with the Jenkins java executable taking up to 60% of my CPU time every time a job would trigger. None of the jobs were CPU bound, just grabbing data from external servers, so it didn't make any sense. It was fixed with a simple restart of the Jenkins service. I assume the upgrade just didn't finish cleanly.
Java garbage collection was throwing errors and hogging the CPU when running with the default memory settings. It was probably overkill, but I went wild and upped the Java heap space for Jenkins from the default 256 MB to 4 GB, which solved this problem for me. See this solution for instructions:
https://stackoverflow.com/a/8122566/4479786
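Where the heap flags go depends on how Jenkins is installed; a couple of common places (values are illustrative):

# Debian/Ubuntu package: /etc/default/jenkins
JAVA_ARGS="-Xmx4g -XX:+HeapDumpOnOutOfMemoryError"
# Windows service: add the same flags to the <arguments> element in jenkins.xml
# running the war directly:
java -Xmx4g -jar jenkins.war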
2.5 seems to be a development release, while 1.6 is their Long Term Support version. Thus it seems logical that you should expect some regressions when using the bleeding edge version. The bounty on this question is proof that other users are experiencing this as well. The solution is to report a bug on the Jenkins bug tracker. You can temporarily downgrade to the known good version for now.
Try passing the following argument to Jenkins:
-Dhudson.util.AtomicFileWriter.DISABLE_FORCED_FLUSH=true
as mentioned here: https://issues.jenkins-ci.org/browse/JENKINS-52150
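Where exactly you pass it depends on how Jenkins is started; for example:

# when launching the war directly
java -Dhudson.util.AtomicFileWriter.DISABLE_FORCED_FLUSH=true -jar jenkins.war
# or append it to JAVA_ARGS / the service's <arguments>, as with the heap settings above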
This is just a curious question: why do Android Studio and IntelliJ use server mode instead of client mode by default? Should I switch to client mode to improve Android Studio's startup time?
See also: Real differences between "java -server" and "java -client"?
The server JVM performs much deeper optimizations for the code. If you start and quit Android Studio hundreds of times each day, then switching to the client JVM may actually give you better performance. Most people don't do that, and are more concerned about better performance of the IDE in the long run than about startup time. Therefore, the server VM is used by default.
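If you do want to experiment, the flag goes in your custom VM options file (Help > Edit Custom VM Options, which opens studio.vmoptions / studio64.vmoptions). Note that most 64-bit HotSpot builds ignore -client entirely, so this may well be a no-op:

-client

Measure the startup time before and after; if nothing changes, the JVM is ignoring the flag and you are running the server VM regardless.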
I have a big enterprise Java application running on several machines under Tomcat 7.
There are various performance problems, such as slow responses, server hangs, etc.
I want to try playing with different parameters like maxThreads, maxConnections, acceptCount and so on.
But before changing them, how can I check that, for example, I'm running out of connections and need to increase that limit? Or that something else, like acceptCount, should be increased?
Typically, Apache Tomcat performance issues lie in the underlying JVM configuration; in my experience those are mainly the size of the permanent generation and other memory settings. I have been able to troubleshoot quite a few of them using VisualVM, which visualizes a lot of the JVM's memory behavior. I would also highly recommend JMeter.
IMHO, maxThreads and other Tomcat-specific parameters have rarely been the source of application performance issues; it's the JVM settings where most issues are.
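Before touching maxThreads, it is worth confirming that the connector's thread pool is actually saturating. One way is to expose JMX and watch the Catalina ThreadPool MBean from VisualVM or JConsole; a sketch assuming you can edit bin/setenv.sh and that port 9010 is free (these are insecure flags, so keep them to a trusted network):

# $CATALINA_BASE/bin/setenv.sh - enable remote JMX (trusted network only)
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

Connect to that port and look at Catalina:type=ThreadPool,name="http-..." (the exact name depends on your connector): if currentThreadsBusy keeps hitting maxThreads, the pool is a real bottleneck; if not, raising maxThreads won't help.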
Start with at least these settings:
-Xms1024M -Xmx2048m -XX:MaxPermSize=1024m
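Those flags are usually set via CATALINA_OPTS in bin/setenv.sh (create the file if it does not exist); note that MaxPermSize only applies to Java 7 and earlier, while Java 8+ ignores it in favour of MaxMetaspaceSize:

# $CATALINA_BASE/bin/setenv.sh - illustrative sizes, tune to your workload
export CATALINA_OPTS="$CATALINA_OPTS -Xms1024M -Xmx2048m -XX:MaxPermSize=1024m"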
I would recommend finding the problem before starting to "fix" things.
There are several applications to monitor your servers and check where the problems are. You can try AppDynamics, New Relic, Ruxit, or any other application monitoring product. (Some have free offerings that come in handy.)
Then you search for your bottlenecks; they can be anywhere - server, database, network, JVM, ... - depending on your application and your architecture.
And once you find the problem, you can start fixing it.
Good luck!
I have a decent sized GWT (Google Web Toolkit) project that is built using Apache Maven. The build process involves generating 8 rpms and 2 wars.
I'm trying to build the project on a remote virtual server running CentOS 5.2 as a guest OS. Since the guest OS can't use swap space, I have to allocate a huge amount of memory to the box for it to build; otherwise I get a Java "could not allocate memory" error (error=12). The build fails if there is under 7 GB free. I suspect that most of this 7 GB is never used, but is allocated for some reason.
At the end of the build the output reads: [INFO] Final Memory: 178M/553M
I have MAVEN_OPTS set to -Xms256m -Xmx1024M
I'm not sure how to make the maven build use less memory. Any suggestions are much appreciated.
Note that forking plugins like the Maven GWT plugin (and Maven Surefire) use memory that is "outside" the total reported by the Maven execution. I would recommend correlating OS-level process sizes with the output of "jps -lv" to find out which fork is stealing all your memory.
If, for instance, a forked process does not terminate for some reason, things will get very crowded, very quickly.
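Concretely, while the build is running (or stuck) you could compare the two views on the build box, for example:

# every JVM with its main class and the -X/-D flags it was started with
jps -lv
# resident set sizes of the same processes at the OS level (Linux ps)
ps -C java -o pid,rss,cmd

Forked GWT compiler or Surefire JVMs show up there with their own -Xmx, which is independent of MAVEN_OPTS; the gwt-maven-plugin's heap, for instance, is configured through the plugin itself (its extraJvmArgs setting) rather than through MAVEN_OPTS.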
That final-memory figure indicates the Maven JVM only ever needed a maximum of 553M, so the setting in MAVEN_OPTS is already above what you need. Are you saying you want to use less than that, or are you currently getting an error?