Oracle IDAM installation | system requirements - OIM

I have installed OIM [11gR2 PS2] and OAM [R2 PS2] on my PC, but the system hangs even with 12 GB of RAM.
I have a 5th-generation Core i3 processor along with 12 GB of RAM. I use Windows 10 as my base OS; however, for installing the Oracle products I use VMs in which I have installed Windows 7 [Ultimate edition].
Per Oracle's prerequisite chart, 8 GB of RAM is enough to run a single instance of OIM/OAM, and I have allocated almost 10.5 GB of RAM to the VMs running OIM/OAM. But each time, after the Admin Server starts, whenever I try to start any of the Managed Servers, CPU consumption reaches 100% and everything hangs, and I have to shut down the VM.
Though the question is a basic one, I have not found an exact answer anywhere. Looking for help/suggestions.

The 8 GB memory requirement is the bare minimum; 16 GB is recommended. See this 11gR2 memory requirements page and the 11gR2 requirements. Also refer to section 3.1, Minimum Memory Requirements for Oracle Identity and Access Management, and section 3.3, Examples: Determining Memory Requirements for an Oracle Identity and Access Management Production Environment. (Even though it says "Production", it is valid for your instance, since you have one VM hosting all the components, including the WebLogic server, OIM server, SOA server, and also the OAM server.)
Here is the RAM estimate from the Oracle 11gR2 reference above.
To estimate the suggested memory requirements, you could use the following formula:
4 GB for the operating system and other software
+ 4 GB for the Administration Server
+ 8 GB for the two Managed Servers (OIM Server and SOA Server)
-----------------------------------------------------------
16 GB
With 4 GB for the OS and 4 GB for the Admin Server, 8 GB of RAM is consumed already. Starting one Managed Server adds another 4 GB, bringing the total to 12 GB, which the VM does not have. Hence, as soon as you start a Managed Server, all the RAM is consumed, which makes your VM hang.
As you can see, Oracle recommends 16 GB, and that is without the OAM server (which you have also installed on the same VM). So you are definitely constrained with your current 10.5 GB. Since your PC maxes out at 12 GB, I suggest you install only OIM in one VM on the current PC and OAM in a VM on a separate PC if possible. Yes, the Oracle IAM software is definitely a memory hog.
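If you still want to experiment within 10.5 GB, one stopgap is to cap each server's JVM heap before starting it: WebLogic's start scripts honour a USER_MEM_ARGS override. A minimal sketch, assuming a Windows domain and the conventional oim_server1 server name (the sizes are illustrative, and an undersized heap may simply trade the hang for OutOfMemoryErrors):

    rem Hypothetical sketch: cap the OIM Managed Server heap at 2 GB.
    rem DOMAIN_HOME, the server name, and the sizes are illustrative assumptions.
    set USER_MEM_ARGS=-Xms1024m -Xmx2048m
    cd %DOMAIN_HOME%\bin
    startManagedWebLogic.cmd oim_server1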
BTW, I have two suggestions for you. First, if you want to install an 11gR2 version, go for PS3 (11.1.2.3), or better yet go with 12c, which is the latest; 11.1.2.2 is considered old now. Here is the link for the PS3 download. Second, consider Oracle's free downloadable pre-built VMs here, although the pre-built VMs run on Linux.

Related

Migrating an on-premises solution to Azure

I have an on-premises solution that consists of a server (one machine) and X users (each on one machine). All the users run the same Win32 application. The question is: how do I translate this into an Azure environment? Each of the user machines uses 4 CPUs and 8 GB of RAM (this is necessary).
Do I have to configure a new machine with 4 CPUs and 8 GB per user, or is there a more efficient way to get this done? Because otherwise this is not economically viable.
I was thinking about using XenApp and only one VM for all the users to solve this problem, but I'm not quite sure.
Any help is welcome.
You could automate the provisioning of the VMs with ARM templates. Your virtualization-ception could also work, but then: how do you guarantee the 8 GB of RAM that is necessary? It would be good to have some more details about the requirements.
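As a sketch of what that could look like (the resource group, image alias, and credentials below are placeholders; Standard_D2_v3 is one Azure size with 2 vCPUs and 8 GiB of RAM), the per-user VMs could be created in a loop with the Azure CLI:

    # Hypothetical sketch: one 8 GiB Windows VM per user via the Azure CLI.
    # Names, image, and credentials are illustrative placeholders.
    for i in 1 2 3; do
      az vm create \
        --resource-group win32-app-rg \
        --name "user-vm-$i" \
        --image Win2012R2Datacenter \
        --size Standard_D2_v3 \
        --admin-username azureuser \
        --admin-password '<choose-a-strong-password>'
    done

Whether per-user VMs or a single shared XenApp host is cheaper then depends on how hard the 4-CPU/8-GB requirement really is per user.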

Error: OpenStack installation - Cinder API did not start

I tried installing devstack OpenStack Liberty on Ubuntu 14.04 using a VirtualBox VM. I want to integrate Nova, Swift, and Cinder with OpenStack. I have enabled the services for Cinder in the localrc file. After trying many times, i.e. stacking (running ./stack.sh) and unstacking, I kept getting the same error:
'c-api did not start'
The problem was due to resources. I was using 4 GB of RAM, which was not sufficient. A few APIs consume a lot of RAM while starting. The OpenStack installation console waits for a while and expects the APIs to start, which wasn't happening in my case because of too little dedicated RAM: my machine had 4 GB, of which only 2.5 GB was given to the VM.
After struggling for a few days I figured out the issue, upgraded my system's RAM to 8 GB, and it worked!
So I suggest that people who want to work with OpenStack Swift and OpenStack Neutron dedicate a minimum of 5.5 GB to the VM!
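For reference, a sketch of the two knobs involved; the service names follow devstack's conventions (c-api, c-sch, c-vol), and the VM name is a placeholder:

    # In devstack's localrc: make sure the Cinder services are enabled.
    enable_service c-api c-sch c-vol

    # On the host, while the VM is powered off: give it ~5.5 GB of RAM.
    VBoxManage modifyvm "devstack-vm" --memory 5632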

Cassandra SSD / VHDX

I'm about to create my first Cassandra cluster, starting from the first node :) But I immediately ran into a dilemma about how to set up the drives, so any word of advice will be appreciated. Here are the musts:
The node must run as a Hyper-V (Windows Server 2012 R2) VM
I have two 256 GB SSD drives available for it
Preferably Ubuntu 14.04 as the guest OS
My options:
Create a dynamic striped volume (basically software RAID 0) in the host OS (Windows Server), and then create a VHDX on top of it to be used by the Ubuntu guest;
No RAID: simply create two VHDXs (one per SSD) and create an Ubuntu guest that uses both VHDXs. Later, within Ubuntu, use one of the drives for the commit logs and the other for SSTables;
Do not create VHDXs, but connect (pass through) the physical SSDs to the newly created Ubuntu guest, then software-RAID 0 both SSDs during the Ubuntu setup process;
Similar to the previous option, but without software RAID 0, assigning one drive to the commit logs and the other to SSTables.
Which of these configurations would suit Cassandra best? Any resources (or experience) about the differences in performance?
It is also important to know that the following are NOT important:
SSD lifetime - whether an SSD survives one year or ten is not important at all.
Fault tolerance - I'm not afraid of the zero fault tolerance of a RAID 0 configuration. Fault tolerance of the system will be achieved by using multiple nodes and an appropriate replication policy, so the failure of one node is not important.
Also, I'll say that I would be happiest with the first option, since it allows me to use my existing VM snapshot-based backup infrastructure, and maybe even to add another VHDX on the same RAID 0 to be used by another, non-I/O-intensive VM.
Finally, when it comes to a VHDX on top of an SSD: dynamically expanding or fixed?
Many thanks!
I forgot to say (not sure if it's important, but...):
The cluster should be write-optimized. The expected ingest rate is 50,000 data points per second, with rare reads - probably no more than one per second.
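Whichever layout you pick, options 2 and 4 ultimately come down to two cassandra.yaml settings. A minimal sketch, assuming the two disks appear as separate block devices inside the guest (device names and mount points are illustrative assumptions):

    # Inside the Ubuntu guest: format and mount each SSD-backed disk separately.
    sudo mkfs.ext4 /dev/sdb && sudo mkdir -p /mnt/ssd1 && sudo mount /dev/sdb /mnt/ssd1
    sudo mkfs.ext4 /dev/sdc && sudo mkdir -p /mnt/ssd2 && sudo mount /dev/sdc /mnt/ssd2
    # cassandra.yaml then points the commit log at one disk and the data at the other:
    #   commitlog_directory: /mnt/ssd1/cassandra/commitlog
    #   data_file_directories:
    #       - /mnt/ssd2/cassandra/data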

Microsoft Azure VM high network latency

We recently set up a Microsoft Azure 1-core VM (region Southeast Asia) running a website. When we access the home page from India, it loads after a considerable delay of 3 to 7 seconds, even when there is no load on the server. The delay is consistent and is observed on other pages as well.
The Firebug screen shows a waiting time of around 1600 ms.
Q1: Is this a known network latency issue with Azure? If not, how can we reduce the latency?
Q2: What is Microsoft's KPI for network latency?
Environment:
Azure VM: Basic_A1 (1 core, 1.75 GB memory)
Application: LAMP stack (Apache, PHP, MySQL)
OS: Ubuntu 14.04 LTS
Region: SouthEastAsia
Accessed from: India
The response size of the home page: 1.4 MB.
Observations:
1. Not a processing delay: we did a wget http://localhost/ on the server, and the page download completed within 700 ms.
2. Previously we had the same setup with another vendor, and the page load took less than 1 second.
3. An Azure 2-core machine has the same latency as well.
I suggest you first log into an Azure VM and see whether you notice latency to the website from there. You could also check from a different ISP and see if the issue persists.
You should try two things:
Check your site with http://tools.pingdom.com/fpt/ and give us the output (maybe as an image).
Check performance with both the Azure host name and your own host name and compare the results.
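A third quick check that separates network latency from server processing time is curl's built-in timers; a sketch (the host name is a placeholder) to run once from India and once from a VM in the same Azure region:

    # Print DNS, TCP-connect, time-to-first-byte, and total times (in seconds).
    curl -o /dev/null -s -w \
      'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
      http://your-site.cloudapp.net/

If ttfb is small from the in-region VM but large from India, the problem is the network path rather than the server.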

Azure compute power: Extra Large VM slow

Can anyone offer me any insights into why my cloud deployment would be slower than an on-premises computer in "horsepower" terms?
I have a compute intensive application which uses a worker role to carry out millions of computations (in parallel).
Currently in Azure I'm testing with an Extra Large (8-core, 16 GB) VM to do the processing. On average it takes 45 minutes per iteration, whereas the same code running on a 4-core, 8 GB on-premises machine took only 15 minutes.
Azure logs indicate that total processor utilisation is 99%, but I have 12 GB of memory free, so I'll definitely try loading more data into memory for each iteration.
Are the 8 cores just individually very low-spec? Is local storage really local? That is, is local storage actually on a different physical device, making reads from file and writes of results to disk slow?
Scott Guthrie (lead of the Windows Azure team) replied to me:
Hi Ivan,
We have other VM HW configurations as well – including multi-proc and high memory options. You’ll see even more options in the future.
Hope this helps,
Scott
My test (100% processor time):
Lucas-Lehmer math calculations. The multithreaded version uses a Parallel.For implementation.
Home computer, Core i7 3770K (4 cores x 3.5 GHz), Windows 8:
SINGLETHREADED (17 primes): 11676 ms (11.6 secs)
MULTITHREADED (17 primes): 2816 ms (2.8 secs)
Azure Large VM (4 cores x 1.6 GHz), Windows Server 2008:
SINGLETHREADED (17 primes): 37275 ms
MULTITHREADED (17 primes): 10118 ms
Azure Extra Large VM (8 cores x 1.6 GHz), Windows Server 2008:
SINGLETHREADED (17 primes): 36232 ms
MULTITHREADED (17 primes): 6498 ms
Work computer, AMD FX 6100 (6 cores x 3.3 GHz), Windows 7 with updates:
SINGLETHREADED (17 primes): 48758 ms
MULTITHREADED (17 primes): 16486 ms
Vote for this idea: http://www.mygreatwindowsazureidea.com/forums/34192-windows-azure-feature-voting/suggestions/3622286-upgrade-windows-azure-processor-from-1-6-ghz-to-mi
I am experiencing the same issue. My web app with its database (on SQL Azure) is also really slow compared to my on-premises computer.
Local server details:
- Dell's entry-level server (< $1000), with 4 cores and 8 GB of memory
- The server runs as VMs
- Even the DB server is on the same box (sharing the hardware with the web server)
Azure:
- Web role on an Extra Large instance with 8 cores
- SQL Azure (I guess on a different physical server)
My expectation was that performance would improve when I deployed to Azure! :(
Guess what: it is 4 times slower (verified using profiler code that times every request).
I am disappointed; I think those 8 cores are really slow.
I ran the test on my old computer (an Intel Pentium), with the same local VMs installed on it (VMware host). It is even faster than Azure.
A couple of questions in here; I'll try to answer some...
Local storage is local - meaning on the same disk, in a restricted area. Are you using the local storage APIs to access it? Local storage is also disposable: if your app is redeployed, all data in local storage is lost. If you are using an Azure Drive, then yes, I would expect some delays, since that writes to blob storage, but you haven't mentioned that.
The CPU spec is defined on the Azure website.
It is difficult to solve your actual slowness problem, though, without a better idea of the architecture and the process your background work follows. But as a general rule, I would be surprised to see the results you are reporting. (Is your on-premises machine a VM or dedicated hardware?)
I see the same thing when running analytics-heavy code (i.e. little disk usage, not much RAM needed). I guess the problem is that they select CPUs based on price and number of cores rather than per-core power. The theory is that you should parallelize your code to take advantage of all those cores, but sometimes that's hard or expensive (in coding time). Consider voting for more CPU power.
