I tried installing DevStack (OpenStack Liberty) on Ubuntu 14.04 in a VirtualBox VM. I want to integrate Nova, Swift, and Cinder with OpenStack, and I have enabled the Cinder services in the localrc file. After many rounds of stacking (running ./stack.sh) and unstacking, I kept getting the same error:
'c-api did not start'
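For reference, the Cinder lines in my localrc look roughly like this (service names follow the standard DevStack conventions; c-api is the service that fails to start):

    # localrc: enable the Cinder services
    ENABLED_SERVICES+=,c-api,c-sch,c-vol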
The problem was due to resources: 4 GB of RAM was not sufficient. Some API services consume a lot of RAM while starting. The DevStack installation console waits for a while and expects each API to come up, which wasn't happening in my case because of the limited RAM dedicated to the VM; of my machine's 4 GB, only 2.5 GB was given to the VM.
After struggling for a few days I found the issue, upgraded my system's RAM to 8 GB, and it worked!
So I suggest that anyone who wants to work with OpenStack Swift and OpenStack Neutron dedicate a minimum of 5.5 GB of RAM to the VM!
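If you are on VirtualBox, raising the VM's memory is a one-liner; a minimal sketch, assuming a VM named "devstack-vm" (run it while the VM is powered off):

    # allocate 5.5 GB (5632 MB) of RAM to the VM
    VBoxManage modifyvm "devstack-vm" --memory 5632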
I have installed OIM [11gR2 PS2] and OAM [R2 PS2] on my PC, but the system hangs even with 12 GB of RAM.
I have a 5th-generation Core i3 processor along with 12 GB of RAM. I use Windows 10 as my host OS; however, for installing the Oracle products I use a VM running Windows 7 [Ultimate edition].
As per the Oracle prerequisite chart, 8 GB of RAM is enough to run a single instance of OIM/OAM, and I have allocated almost 10.5 GB of RAM to the VMs running OIM/OAM. But each time, after the admin server starts, whenever I try to start any of the managed servers, CPU consumption reaches 100% and everything hangs, and I have to shut down the VM.
Though the question is a basic one, I have not found an exact answer anywhere. Looking for help/suggestions.
The memory requirement of 8 GB is the bare minimum; 16 GB is recommended. See the 11gR2 memory requirements and 11gR2 requirements pages. Also refer to section 3.1, Minimum Memory Requirements for Oracle Identity and Access Management, and section 3.3, Examples: Determining Memory Requirements for an Oracle Identity and Access Management Production Environment. (Even though it says Production, it is valid for your instance, since you have one VM hosting all the components, including the WebLogic server, OIM server, SOA server, and OAM server.)
Here is the RAM estimate from the Oracle 11gR2 reference above:
To estimate the suggested memory requirements, you could use the following formula:
4 GB for the operating system and other software
+ 4 GB for the Administration Server
+ 8 GB for the two Managed Servers (OIM Server and SOA Server)
-----------------------------------------------------------
16 GB
With 4 GB for the OS and 4 GB for the Administration Server, 8 GB of RAM is consumed already. Starting a managed server would take it to 12 GB, which the VM does not have. Hence, as soon as you start your managed server, all the RAM is consumed, which makes your VM hang.
As you can see, Oracle recommends 16 GB, and even that is without the OAM server (which you have also installed on the same VM). So you are definitely constrained with your current 10.5 GB. Since your PC maxes out at 12 GB, I suggest you install only OIM in one VM on the current PC and OAM in a VM on a separate PC, if possible. Yes, Oracle IAM software is definitely a memory hog.
BTW, I have two suggestions. First, if you want to install an 11gR2 version, go for PS3 (11.1.2.3), or better yet go with 12c, which is the latest; 11.1.2.2 is considered old now. Here is the link for the PS3 download. Second, consider Oracle's free downloadable pre-built VMs here, although the pre-built VMs run on Linux.
I am facing a CPU overload issue with a Node.js application running in a remote Ubuntu 16.04 LTS virtual machine. I am using PM2 to run the application as a service.
Initially, when the application is launched, the CPU load remains quite low, about 30% at most. Then the CPU load slowly climbs until it reaches 100%. The application polls a stock website for new information on a stock, does some calculations, and repeats after 5 minutes. I don't see how that could overload the CPU.
I notice that my one Node.js application shows up as 6 different processes in htop. I'm not sure if this is normal or how to fix it. Any help would be highly appreciated.
Thanks and regards, Adeel
Thanks, Gerard, your reply helped solve the problem. It turns out guardian.js was not exiting and just kept opening new processes until it overloaded the system.
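For anyone hitting the same thing, this is roughly how the runaway spawning shows up (standard PM2 and procps commands; the app name is hypothetical):

    pm2 list                     # one row per PM2-managed process; look for unexpected entries
    pm2 describe guardian        # restart count and script path for a single app
    watch -n 10 'pgrep -c node'  # if this count keeps climbing, something spawns and never exits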
We are currently evaluating our next-generation company-wide developer PC configuration and have noticed something really weird.
Our rather large monolith has, on our current configuration, a build time of approximately 4.5 minutes (no tests, just compilation).
For the next-generation configuration we upgraded several components: a moderate increase in processor frequency and IPC, double the number of CPU cores, and a switch from a small SATA SSD to an NVMe SSD rated at over 3 GB/s. The next-generation configuration also switches from Windows 7 to Windows 10.
In our first tests we saw an almost identical build time (4.3 minutes), far less improvement than we expected.
During our experiments we at one point ran the build from within a virtual Linux machine on the Windows host. On the old configuration (Windows 7), build times dropped from 4.5 to ~3.7 minutes; on the Windows 10 host, they dropped from 4.3 to 2.3 minutes. We have ruled out things like the virus scanner.
We were rather astonished by these results and have tried to find an explanation other than the almost-religious and insulting claims people make about different operating systems.
So the question is: what could we possibly have done wrong in configuring the Windows machine such that it is almost half as fast as Linux running virtualized on the very same Windows host? Especially since all the hardware improvements seem to be eaten up by the switch from Windows 7 to 10.
Another question is: how can we make the javac process use more cores? Right now, using HotSpot JDK 8, we see at most two cores really used by the build. I've read about sjavac, but that seems to be a rather experimental feature only available from OpenJDK 9 onward, right?
After almost a year of experimenting, we came to the conclusion that it is indeed NTFS that is the culprit. If you use an NTFS user partition with a Linux host, you get results similar to an all-Windows setup.
We benchmarked Gradle builds, Eclipse internal builds, Wildfly startup, and database-centered tests on multiple devices. All our benchmarks consistently showed a speedup of at least 100% when switching from Windows to Linux (in some real-world benchmarks Windows takes 3x as long as Linux, and some artificial benchmarks showed a 60x speedup!). On notebooks especially we experienced much less fan noise, as the combined processor load of a complete build is substantially lower than on Windows.
Our conclusion was to switch from Windows to Linux, which we did over the course of the last year.
Regarding parallelisation, we realized the blocker was a form of code entanglement. Resolving it helped Gradle and javac parallelise the build a lot (also have a look at Gradle composite builds); the Gradle switches involved are sketched below.
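For the record, the stock Gradle properties we leaned on look roughly like this; a minimal sketch, with the worker count being an assumption for an eight-core machine:

    # gradle.properties: build decoupled projects in parallel
    org.gradle.parallel=true
    # reuse task outputs across builds
    org.gradle.caching=true
    # cap the worker pool (the default is the number of CPU cores)
    org.gradle.workers.max=8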
I recently upgraded from Jenkins 1.6 to 2.5. After I did this, I noticed very high CPU usage, sometimes over 300% (there are only 4 cores, so I don't think it could go over 400%). I'm not sure where to begin debugging this, but here's a thread dump and some screenshots from top/htop:
[screenshots of htop and top output]
As it turned out, my issue was that several jobs had thousands of old builds. This was fine in Jenkins 1.6, but it's a problem in 2.5 (I guess Jenkins tries to load all the builds into memory when you view the job overview page). To fix it, I deleted most of the old builds from the problem jobs using this strategy and then reloaded Jenkins. Worked like a charm!
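In case the linked strategy disappears, one way to bulk-delete builds is Jenkins' standard REST endpoint; a sketch, with the job name, build range, and credentials all hypothetical:

    # delete builds 1..900 of a job via the Jenkins REST API
    for i in $(seq 1 900); do
      curl -s -X POST -u user:apitoken "http://jenkins.example.com/job/myjob/$i/doDelete"
    done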
I also set the "discard old builds" plugin to keep only the 50 most recent builds, to prevent this from happening again.
Whenever a request comes in, Jenkins spawns threads to serve it. After the upgrade, Jenkins might have been running at high throttle for a while. Please check the CPU and memory usage of the Jenkins server in the following scenarios:
Jenkins is idle and no other apps are running on the server.
A build is scheduled and no other apps are running on the server.
Then compare the behavior, which should help you determine whether Jenkins itself, or running Jenkins in parallel with other apps, is really causing the trouble.
As @vlp said, try monitoring the Jenkins application via JVisualVM with a jstatd configuration to hook in. Refer to this link to configure JVisualVM with jstatd.
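For reference, a minimal jstatd setup on the Jenkins box might look like this (a JDK 8-era directory layout is assumed; the policy grant is the commonly cited one):

    # grant jstatd the permissions it needs, then start it
    cat > /tmp/jstatd.policy <<'EOF'
    grant codebase "file:${java.home}/../lib/tools.jar" {
        permission java.security.AllPermission;
    };
    EOF
    jstatd -J-Djava.security.policy=/tmp/jstatd.policy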
I have noticed a couple of reasons for abnormal CPU usage with my Jenkins install on Windows 7 Ultimate.
I had recently upgraded from v2.138 to v2.140 and added a few additional plugins. I started noticing the Jenkins Java executable taking up to 60% of my CPU time every time a job triggered. None of the jobs were CPU-bound, just grabbing data from external servers, so it didn't make any sense. It was fixed with a simple restart of the Jenkins service. I assume the upgrade just didn't finish cleanly.
Java garbage collection was throwing errors and hogging the CPU when running with the default memory settings. It was probably overkill, but I went wild and upped the Java heap space for Jenkins from the default 256 MB to 4 GB, which solved the problem for me. See this solution for instructions:
https://stackoverflow.com/a/8122566/4479786
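The gist of that change, if you launch Jenkins from the war directly, is just a larger -Xmx; a sketch mirroring the 4 GB that worked for me:

    # start Jenkins with a 4 GB heap instead of the default
    java -Xmx4g -jar jenkins.war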
2.5 seems to be a development release, while 1.6 is their Long Term Support version, so it seems logical to expect some regressions when using the bleeding-edge version. The bounty on this question is evidence that other users are experiencing this as well. The solution is to report a bug on the Jenkins bug tracker. You can temporarily downgrade to the known-good version for now.
Try passing the following argument to Jenkins:
-Dhudson.util.AtomicFileWriter.DISABLE_FORCED_FLUSH=true
as mentioned here: https://issues.jenkins-ci.org/browse/JENKINS-52150
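On a Debian/Ubuntu package install, this typically goes into the service defaults; a sketch assuming the standard package layout:

    # /etc/default/jenkins
    JAVA_ARGS="$JAVA_ARGS -Dhudson.util.AtomicFileWriter.DISABLE_FORCED_FLUSH=true"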
I am using PM2 with my app (Ubuntu 14.04, 2 CPUs, 4 GB RAM).
I want to load-test or stress-test the load balancing between cluster instances (I hope I'm saying that right) to check its effectiveness. What is the best way to do so?
I am using the latest Node.js (0.12.7) and the latest PM2 version; I have 2 cluster instances running, 1 for each CPU.
Now I need to check the response time when my server is at its limits, and even see when it crashes and why (it's a staging server, so I don't mind).
I know 'siege' and have used it a bit, but it's not what I want; I want something that can push the server to its limits...
Any suggestions?
You can try http://loadme.socialtalents.com
In case the free tier is not enough, you can request a quota increase.
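If you'd rather drive the load yourself, wrk is a common choice for pushing a server to its limits; a typical run, with the host and numbers purely illustrative:

    # 4 threads, 200 open connections, hammering the staging box for 60 seconds
    wrk -t4 -c200 -d60s http://staging.example.com/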