Improve MyEclipse's clean and build performance

MyEclipse Enterprise Workbench
Version: 10.7.1
Build id: 10.7.1-20130201
This is what myeclipse.ini has:
-vmargs
-Xmx2048m
-XX:MaxPermSize=1024m
-XX:ReservedCodeCacheSize=256m
-Dosgi.nls.warnings=ignore
I have Windows 7 (64 bit) with 8GB RAM.
Clean + Build takes 4:15, and then building a WAR takes about 2:22. That's over six and a half minutes. This is a development machine and I need it to be fast. I upload a new build every day, sometimes twice a day.
What areas can I look into? Should I assign more memory to Eclipse?
Thank you
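On the memory question: more heap alone rarely makes builds faster, but pinning the initial heap to the maximum avoids heap-resize pauses during a build. A hedged sketch of a tweaked myeclipse.ini under that assumption (the values are illustrative, not a verified fix):
-vmargs
-Xms2048m
-Xmx2048m
-XX:MaxPermSize=512m
-XX:ReservedCodeCacheSize=256m
-Dosgi.nls.warnings=ignore
Whether 512m of PermGen is enough depends on the installed plugins; raise it again if you see OutOfMemoryError: PermGen space.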

Related

Docker containers freezing

I'm currently trying to deploy a Node.js app in Docker containers. I need to deploy 30 of them, but at some point they begin to behave strangely: some of them freeze.
I am currently running Docker for Windows 18.03.0-ce, build 0520e24302. My computer specs (CPU and memory):
i5-4670K
24 GB of RAM
My Docker default machine resource allocation is the following:
Allocated RAM: 10 GB
Allocated vCPUs: 4
My Node application runs on Alpine 3.8 and Node.js 11.4 and mostly makes HTTP requests every 2-3 seconds.
When I deploy 20 containers everything runs like a charm: my application does its job, and I can see activity on every one of my containers through the logs and activity stats.
The problem comes when I try to deploy more than 20 containers: I notice that some of the previously deployed containers stop their activity (0% CPU usage, frozen logs). When everything is deployed (30 containers), Docker starts to block the activity of some of them and at some point unblocks them to block others (the blocking/unblocking is random, and seems to be sequential). I waited to see what would happen, and the result is that some containers are able to continue their activity while others are stuck forever (still running but with no more activity).
Note that I applied the following resource restrictions to each of my containers:
MemoryReservation: 160 MB
Memory soft limit: 160 MB
NanoCPUs: 250000000 (0.25 CPUs)
I had to increase my Docker default machine resource allocation and decrease each container's resource allocation because Docker was using almost 100% of my CPU; maybe I made a mistake in my configuration. I tried to tweak those values, but no success: I still have some containers freezing.
I'm kind of lost right now.
Any help would be appreciated, even a little. Thank you in advance!
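For concreteness, here is a minimal sketch of how restrictions like these map onto a plain docker run invocation; the image and container names are hypothetical placeholders, not taken from the setup above:
# --memory-reservation is the soft limit (MemoryReservation);
# --cpus=0.25 is the CLI equivalent of NanoCPUs=250000000.
docker run -d \
  --name node-worker-01 \
  --memory-reservation=160m \
  --cpus=0.25 \
  my-node-app:latest
Note that a soft limit alone does not stop a container from using more memory when the host is not under pressure; a hard cap would be --memory.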

Why does Java 8 allocate 1.07 GB of Metaspace but use only 81 MB?

I am analyzing GC log from my application.
I wonder why my JVM allocated 1.07 GB for Metaspace but used only 81 MB.
I use jdk8_8.91.14 (Oracle JDK) without any additional memory settings.
Those numbers come from analyzing the GC log file (generated with -XX:+PrintGCDetails) with http://gceasy.io/.
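For reference, a typical JDK 8 command line for producing such a log might look like this; gc.log and app.jar are arbitrary placeholders:
# Standard HotSpot (JDK 8) GC logging flags.
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar app.jar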
All of the used metadata was allocated shortly after the application started, and it stays that way for the application's whole lifetime.
Why are the JVM defaults so wasteful when it comes to metadata?
It seems that in my case I just waste 1 GB of memory.
How can I safely tune Metaspace so that it starts small (like 52 MB), grows only when needed, and grows in small chunks?
I am running the application on a virtual machine with CentOS Linux release 7.2.1511 (Core).
Inside that VM, I have Docker with Ubuntu 14.04.4 LTS.
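One caveat first: some analyzers report reserved (virtual) Metaspace rather than committed memory, so the 1.07 GB may not all be backed by physical RAM. That said, here is a sketch of the standard JDK 8 Metaspace flags, with illustrative sizes chosen to echo the numbers above rather than as recommendations:
# MetaspaceSize is the initial high-water mark that triggers the first
# Metaspace GC/resize; MaxMetaspaceSize caps the total; the free-ratio
# flags control how eagerly the space grows and shrinks.
java -XX:MetaspaceSize=52m \
     -XX:MaxMetaspaceSize=256m \
     -XX:MinMetaspaceFreeRatio=40 \
     -XX:MaxMetaspaceFreeRatio=70 \
     -jar app.jar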

CPU usage 350% while running DSE 4.x

VM configuration: CentOS 6.2, 64-bit, 8 GB RAM, quad-core CPU.
There is about 1 GB of data and roughly 20 tables in the C* setup I have. When I try to start DSE after rebooting the VM, it takes a long time to start, so I ran the top command and found that the CPU usage was shooting up to 350%.
Please see the screenshot attached.
Requesting pointers from the experts here: how can the CPU usage shoot above 100%, or does the number indicate something else?
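For what it's worth: in top's default (Irix) mode, %CPU is measured per core, so on a quad-core machine a single multithreaded process such as the DSE JVM can legitimately show up to 400%. A quick sanity check with standard commands:
nproc    # number of logical cores, e.g. 4
top      # %CPU is per core here; 350% means roughly 3.5 cores busy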

CentOS 6.5 Final and DataStax Enterprise 4

I previously set up Cassandra using the DataStax Community edition and have tried to move to Enterprise 4.
I've tried installing via the OpsCenter web interface and got 'Start Errored: Timed out waiting for Cassandra to start.' on all 4 nodes.
I've also tried the manual approach outlined on the site. In this case, just as with the other, it launches the dse service 'successfully'. output.log and system.log show the classpath as the last entry and no errors in them at all.
Java: JRE 1.7.0_51
OS: CentOS 6.5 Final
Vagrant box: https://github.com/2creatives/vagrant-centos/releases/download/v6.5.1/centos65-x86_64-20131205.box
My suspicion would be that your VM does not have enough memory. If the physically addressable memory is smaller than the MAX_HEAP_SIZE configured in resources/cassandra/conf/cassandra-env.sh, JNA will go bonkers. You want at least 4 GB of memory, or change the value in cassandra-env.sh.
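For illustration, the relevant lines in cassandra-env.sh look roughly like this; the values are examples only (the file's own comments suggest setting both variables together, with HEAP_NEWSIZE around 100 MB per core):
# Example values; size the heap to what the VM can actually back with RAM.
MAX_HEAP_SIZE="2G"
HEAP_NEWSIZE="400M"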

Running multiple instances of a jar at the same time: memory issue

Right now I am running multiple instances of a jar (code written in Scala) at the same time on a cluster with 24 cores and 64 GB of memory, running Ubuntu 11.04 (GNU/Linux 2.6.38-15-generic x86_64). I observe heavy memory usage that is super-linear in the number of instances I run. To be more specific, here is what I am doing:
Write the code in Scala and use sbt to package it into a jar.
Log in to the cluster and use screen to open a new screen session.
Open multiple windows in this screen.
In each window, run java -cp myjar.jar main.scala.MyClass
What I observe is that when I run only 7 instances, about 10 GB of memory is used and everything is fine. When I run 14 instances, memory is quickly eaten up, all 64 GB is occupied, and the machine slows down so dramatically that it is even difficult to log in. Monitoring the machine through htop, I can see that only a few cores are running at a time. Can anyone tell me what is happening to my program and how to fix it so that I can use the computational resources efficiently? Thanks!
To use the computational resources efficiently, you would have to start one jar which starts multiple threads in one JVM. If you start 14 instances of the same jar, you have 14 isolated JVMs running.
Get the IDs of all Java processes using jps.
Find the most heavyweight process using jmap.
Get a heap dump of that process using the same jmap.
Analyze heap usage with jhat (a command sketch follows below).
Alternatively, you could copy the dump locally and explore it with tools like the Eclipse Memory Analyzer open source project.
If, after solving this issue, you totally love these shell-like tools (as I do), go through the complete list of Java troubleshooting tools. It will save you a lot of time, so you can go to the pub earlier instead of staying late debugging memory/CPU issues.
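A minimal command sketch of those steps; the PID 12345 is a placeholder, and all four tools ship with JDKs of that era (jhat was later removed in JDK 9):
jps -l                                           # list JVM process IDs with main class
jmap -heap 12345                                 # heap configuration and usage summary
jmap -dump:live,format=b,file=heap.hprof 12345   # write a binary dump of live objects
jhat heap.hprof                                  # browse the dump at http://localhost:7000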
