Oracle Identity Manager -- Data Refresh from PROD

I am a novice in the OIM field and have run into an issue. It would be really helpful if someone could help me out here.
OIM is currently used to centrally manage authentication and authorization, and the USR table in PROD already has quite a few custom fields. On top of that, the enhancement we are working on requires additional fields in the USR table, which we are adding through USR form changes. These changes are not live yet.
The requirement is to do a data refresh from PROD to the environment where UAT will be performed. Since we are not changing the servers where OIM runs and are only refreshing the database from PROD, what is the best way to do this?
Do we need to do a full DB refresh or a partial refresh (only the OIM schema)?
Thanks in advance!!

There is a document from Oracle on how to perform OIM DB backup/restore.
OIM 11gR2: Schema Backup and Restoration using Data Pump Client Utility (Doc ID 1492129.1)
The short summary of the document is that you have to take care of several schemas (OIM, SOAINFRA, MDS, ORASDPM, OPSS). There are some extra steps to dance around during backup/restoration, but I don't remember the details. You will figure them out in the process if you don't have access to the document.

Related

Couchdb add user database configuration

I want to use CouchDB to create an offline-first app where users can add documents.
Only the user who created a document should be able to change it; otherwise it should only be readable. For this I wanted to use the couch_peruser mechanism of CouchDB and replicate these documents into a main database where everyone can read.
Is it possible to automatically get the replication and other configuration (like design documents) set up when the database is created by the couch_peruser options?
I found a possible way myself:
add a validation function to the main database to deny writes (http://docs.couchdb.org/en/2.1.1/ddocs/ddocs.html#vdufun)
use _db_updates endpoint to monitor database creation (http://docs.couchdb.org/en/2.1.1/api/server/common.html#db-updates)
create a _replicator document to set up a continuous replication from userdb to main db (http://docs.couchdb.org/en/2.1.1/replication/replicator.html)
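As a sketch of the first step, a validate_doc_update function for the main database might look like the following. The admin-role check is an assumption here; adapt it to however your server authenticates its writes.

```javascript
// Sketch of a validate_doc_update function for the main database.
// It rejects writes from anyone without the _admin role, so regular
// users can only read; the server (writing as admin, e.g. via the
// replications set up in step 3) is still allowed through.
function validateDocUpdate(newDoc, oldDoc, userCtx) {
  if (userCtx.roles.indexOf('_admin') === -1) {
    throw { forbidden: 'This database is read-only for regular users.' };
  }
}
```

In a real design document this function is stored as a string under the `validate_doc_update` field.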
One thing to watch out for is that maintaining a lot of continuous replications requires a lot of system resources.
Another way is to create authorships with design documents. With this approach we don't need to maintain replications to the main database, because every entry can be held in one database (the main database in my case).
http://guide.couchdb.org/draft/validation.html#authorship

How can I clear my local database using azure mobile services?

I'm using Azure Mobile Services and I want to clear local database, how can I do that?
I have a problem with my local database. When I log out in the app and log in with another user, the previous user's data is loaded for the current user, and I have no idea why this occurs. I debugged the server side and the server returns the correct data, so I believe the problem is the local database.
I'm using Azure Mobile Services and I want to clear local database, how can I do that?
For deleting your SQLite file, you could follow Deleting the backing store. Also, you could leverage the capability provided by IMobileServiceSyncTable to purge records from your offline cache; for details, see Purging Records from the Offline Cache.
When I logout in app and login with other user, the data of the previous user is loaded for current user and I don't have idea why this occurs. I use debug on server side and the server return correct data
Since you did not provide details about your implementation (e.g. user login/logout, user data management, etc.), I would recommend you check whether both your server and client side enable a per-user data store. You could use Fiddler to capture the network traces when the other user logs in, make sure that the correct user identifier (e.g. UserId) is returned, and then check the query against your local database. Moreover, I would recommend you follow Adrian Hall's book on Data Projection and Queries.
You can delete all of the local DB files by doing the following.
var dbFiles = Directory.GetFiles(MobileServiceClient.DefaultDatabasePath, "*.db");
foreach (var db in dbFiles)
{
    File.Delete(db);
}
However, this deletes all data each time and causes performance issues: every time after you do this, you get a fresh copy of the data from your Azure DB rather than using the cached copy in the device's SQLite DB.
We typically only use this for debugging, and the reason it's in a foreach is to capture all databases created (refer to my last suggestion).
There are a few other things you could try to get around your core issue of data cross-over.
There's another reason you might be seeing this behaviour. With your PullAsync, are you passing it a query ID? Your PullAsync line should look similar to this.
async Task GetAllFoo(string userId)
{
    await fooTable.PullAsync("allFoo" + userId, fooTable.Where(f => f.userId == userId));
}
Note that the query ID will be unique each time (or at least, for each user). This is used primarily by the offline sync portion of Azure, but in combination with the Where statement (be sure to import System.Linq), this should ensure only the correct data is brought back.
You can find more information about this here.
Also, something you may want to consider: store a separate database for each userId. We're doing this for our app (with a company ID) so that each database is separate. If you do this and use the correct database on logging in, there's no chance of any data cross-over.

per user db (pouchdb/couchdb) & shared data - doable?

I have the following use case / application:
a TODO app where users can CRUD their own TODOs (I am using pouchdb/couchdb syncing), pretty much based on Josh Morony's tutorial.
Now I want to add the ability for users to "share" (post only; there is no "edit"/put) their TODO items with other users, who would be able to just view (read) them (no write access etc.).
I was thinking about adding a separate DB (let's call it "shared TODOs DB") where my server can write and all users can only read.
So any user could potentially do a .find() across that read-only db, while posting there would still be governed by the server upon requests from users to share their TODOs.
Is there a known pattern (approach) for this? Are there any real apps / examples that already do that?
CouchDB does not offer a good way to do this, but if your backend has a facility for maintaining lots of shared databases (in addition to single-user ones) I think you could pull it off.
The initial impulse is to use continuous filtered replication to/from a central "master" hub, but this leaves you with two problems:
Replication can't delete documents unless they are deleted! I.e. filtered replication doesn't determine which documents exist in the target, but rather whether a document's changes will propagate to the target. The effect of this is that you can share, but you can't unshare.
This sort of replication doesn't scale. For N user databases you need to keep in sync, every change to the central database forces all N replications to process that change independently. Now consider that the number of changes M on the central database will be almost directly proportional to N (the more users you have, the more frequently all those users will have to process changes to the central database). You could somewhat mitigate this by adding more layers (fanning out from one central hub to semi-shared hubs to individual databases), but that also adds practical complications.
How is your sharing organized? What you might be able to do is set up a shared database for each "share" combination.
When user A wants to share document D with user B, "move" the document into a new database AB. Practically this means: copy the contents of D to a new document D' in database AB, and then delete the original D from database A.
How does this sync?
Keep in mind PouchDB clients can replicate to/from more than one source database, so user A will replicate from A and AB, while user B replicates from B and AB. The trick is to use filtered replication in the opposite direction, back to the server databases:
The now-shared document D' should never go to database A or database B, and likewise an unshared document X should never go to database AB. Something like this:
┌───────────────────────┐
│  server:5984/user_a   │
└───┬───────────────▲───┘
    │               │      ┌──────────────────────────────┐
    │               ●──────│ if (doc.shared_with == null) │
┌───▼───────────────┴───┐  └──────────────────────────────┘
│         local         │
└───▲───────────────┬───┘
    │               │      ┌──────────────────────────────┐
    │               ●──────│ if (doc.shared_with == 'ab') │
┌───┴───────────────▼───┐  └──────────────────────────────┘
│ server:5984/shares_ab │
└───────────────────────┘
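The two filter checks in the diagram can be sketched as plain predicate functions (in a real setup they would live as filter functions in a design document, and the shares key would come from the share context rather than being hard-coded):

```javascript
// Replication filter back to the per-user database:
// only documents that are not shared should land in user_a.
function toUserDb(doc) {
  return doc.shared_with == null; // also true when shared_with is absent
}

// Replication filter back to the shared database:
// only documents shared in context 'ab' should land in shares_ab.
function toSharesAb(doc) {
  return doc.shared_with === 'ab';
}
```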
So, assuming the share database(s) are already set up, when the local client wants to share some data it actually adds _deleted:true to the original (unshared) document and creates a new (shared) document. The deleted original propagates to the user_a database on the server and the created copy propagates to the shares_ab.
Unsharing then works pretty much the same: the client adds _deleted:true to the shared document and recreates the data again in a new unshared copy. From B's perspective, the document had showed up once it was in shares_ab and now disappears because it is deleted from shares_ab.
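The share step can be sketched as a pure document transformation; the function name, id prefix, and shared_with field are illustrative, and with PouchDB you would write both resulting documents back into the local database:

```javascript
// Sketch: turn an unshared doc into a deletion stub plus a shared copy.
// The stub keeps _id/_rev with _deleted: true, so the deletion
// replicates up to user_a; the copy gets a fresh _id and a
// shared_with marker, so it replicates to shares_ab instead.
function shareDoc(doc, shareKey) {
  const tombstone = { _id: doc._id, _rev: doc._rev, _deleted: true };
  const copy = Object.assign({}, doc, {
    _id: 'shared:' + doc._id,
    shared_with: shareKey
  });
  delete copy._rev; // the shared copy is a brand-new document
  return [tombstone, copy];
}
```

Unsharing is the mirror image: tombstone the shared copy and recreate the data without the shared_with field.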
(Instead of user "pairs" you could extend this to ad-hoc sets of users, or simplify it to specific groups that users are already in, or whatnot. The key idea is to create an actually shared database for each unique sharing context needed.)
So for this kind of project, a per-user database is the recommended pattern for your databases.
Idea
So basically, you will have :
One private database per user (with write/read permissions)
One central database read-only
As for the central database, you need it to be read-only and you also need to allow shared documents only. You need some kind of application proxy for this. You can basically build an API on top of the central database and allow access to shared documents only.
Then, you can set up replications from each user database to the central database and persist the replication in the _replicator database.
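A persisted replication is just a document in the _replicator database. A minimal sketch of building one (the host, database names, and _id scheme are placeholders; you would normally also attach credentials or run it as an admin):

```javascript
// Sketch: build a _replicator document that continuously replicates
// one user's database into the central read-only database.
function buildReplicationDoc(userDb, centralDb) {
  return {
    _id: 'repl_' + userDb + '_to_' + centralDb,
    source: 'http://localhost:5984/' + userDb,
    target: 'http://localhost:5984/' + centralDb,
    continuous: true // keep syncing as the user adds documents
  };
}
```

Saving the returned object into _replicator (e.g. via a PUT to /_replicator/{id}) makes the replication survive server restarts.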
couchperuser
I'm not sure the per-user database plugin is working at the moment with version 2.X.X, but you can do it yourself with some sort of application process (create the user, then create the database, then manage the permissions of the new database).

Best way to get data from restricted database with XPages

I'm working on two Domino databases that contain XPages :
the 1st database is a public database,
the 2nd one is restricted to a group (let's say the HR team)
I'm building an XPage in the public DB and I need to populate a sessionScope variable with the data of the HR's database (for example the HR id of the user)
So, as normal users will not have access to the HR DB, a @DbLookup is not allowed.
Using the sessionAsSigner method requires re-signing all elements of the DB each time a developer modifies an XPages component (otherwise the sessionAsSigner element is unknown).
So, how do I query a database that I do not normally have access to?
Do I have to call an agent with higher access than the connected user?
And then, how do I populate the sessionScope variable?
Any help will be greatly appreciated
There are a few options, but as Knut says, without a shadow of a doubt, the best practice approach is to use sessionAsSigner.
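As a rough illustration of the sessionAsSigner approach, here is a Domino SSJS sketch (not standalone-runnable; the server file path, view name, and field name are placeholders for whatever your HR database actually uses):

```javascript
// SSJS sketch: read the user's HR id from the restricted HR database
// using the signer's rights rather than the current user's access.
var hrDb = sessionAsSigner.getDatabase(database.getServer(), "hr/hrdata.nsf");
var view = hrDb.getView("PeopleByUserName");
var doc = view.getDocumentByKey(context.getUser().getDistinguishedName(), true);
if (doc != null) {
    sessionScope.hrId = doc.getItemValueString("HRId");
}
```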
Source control can be used to allow multiple developers to work on their own instance of the design. Swiper can be used to suppress signatures from the source control repository to minimise conflicts.
All other options I can think of (e.g. periodic exports, using a runOnServer agent) will take longer to code, be more complex and will require you, as the developer, to manage the security implications of exposing the data.

CouchDB simple document design: need feedback

I am in the process of designing document storage for CouchDB and would really appreciate some feedback. These documents are to represent "assets".
These databases will also be synced locally to the browser via pouchdb.
Requirements:
Each user can have many assets
Users can share assets with others by providing them with a URI such as (xyz.com/some_id). Once users click this URI, they are considered to have been "joined" and are now part of a group.
Group users can share assets of their own with other members of the group.
My design
Each user will have his/her own database to store assets - let's call it "user". Each user DB will be prefixed with his/her unique ID.
Shared assets will be stored in a separate database - let's call it "group". Shared assets are DUPLICATED here and have an additional userId field (to indicate the creator).
Group database is prefixed with a unique ID just like a user database is prefixed with one too.
The reason for storing group assets in a separate database is that when PouchDB runs locally, it only knows about the current user and his/her shared assets. It does not know about other users and should not query these other users' databases.
Any input would be GREATLY appreciated.
Seems like a great design. Another alternative would be to just have one database per group ("role"), and then replicate from a user's group(s) into their local PouchDB.
That might get hairy, though, when it comes time to replicate back to the server, because you're going to have to filter the documents as they leave the user's local database, depending on which group-database they belong to. Still, you're going to have to do that on the server side anyway with your current design.
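That outbound routing could be sketched as a small function deciding which server-side database each local document belongs to; the group field and the database-name prefixes are assumptions about your schema:

```javascript
// Sketch: route a local document to its server-side database.
// Docs carrying a group field replicate to that group's database;
// everything else stays in the user's own database.
function targetDbFor(doc, userId) {
  return doc.group ? 'group-' + doc.group : 'user-' + userId;
}
```

With PouchDB, this predicate would back a filtered replication per target database on the way up to the server.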
Either way is fine, honestly. The only downside of your current approach is that documents are duplicated on the server side (once per user-db and once per group-db). On the other hand, your client code becomes dead-simple, because you don't have to do any filtered replication. If you have enough space on your server not to worry about it, then I would definitely go with your approach. :)
