We are trying to insert about 100k users in Liferay. Is there a way to have this all updated in one batch commit, instead of making separate calls to add each user?
I think yes, it's possible.
Build a custom remote service, e.g. BulkUserServiceUtil.addUsers, and within it call the standard UserLocalServiceUtil.addUser method for each user.
When the BulkUserServiceUtil method returns, the transaction is committed.
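A rough sketch of what that service method could look like, assuming a Liferay 6.2 Service Builder plugin; BulkUserServiceImpl and the UserData holder are hypothetical names, and most addUser arguments are left at harmless defaults:

    import com.liferay.portal.kernel.exception.PortalException;
    import com.liferay.portal.kernel.exception.SystemException;
    import com.liferay.portal.service.ServiceContext;
    import com.liferay.portal.service.UserLocalServiceUtil;
    import java.util.List;
    import java.util.Locale;

    public class BulkUserServiceImpl {

        // Hypothetical DTO carrying the per-user input.
        public static class UserData {
            private final String emailAddress, firstName, lastName;
            public UserData(String email, String first, String last) {
                emailAddress = email; firstName = first; lastName = last;
            }
            public String getEmailAddress() { return emailAddress; }
            public String getFirstName()    { return firstName; }
            public String getLastName()     { return lastName; }
        }

        // In a real plugin this class would be generated by Service Builder
        // (extending BulkUserServiceBaseImpl), so the whole method runs in
        // one transaction that commits when the method returns.
        public void addUsers(List<UserData> users, ServiceContext serviceContext)
                throws PortalException, SystemException {

            for (UserData u : users) {
                UserLocalServiceUtil.addUser(
                    0,                          // creatorUserId (0 = none)
                    serviceContext.getCompanyId(),
                    true, null, null,           // autoPassword, password1, password2
                    true, null,                 // autoScreenName, screenName
                    u.getEmailAddress(),
                    0, null,                    // facebookId, openId
                    Locale.getDefault(),
                    u.getFirstName(), null, u.getLastName(),
                    0, 0,                       // prefixId, suffixId
                    true,                       // male
                    1, 1, 1970,                 // birthday month/day/year
                    null,                       // jobTitle
                    null, null, null, null,     // group/org/role/userGroup IDs
                    false,                      // sendEmail
                    serviceContext);
            }
        }
    }

Note that each addUser call still does its own table writes and indexing, so a single transaction speeds up the commits but does not remove that per-user work.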
@sandeep:
Liferay does not provide a way to add/update users in bulk, because user creation touches several tables and also indexes the user. If you want to do it anyway, I have two suggestions:
Take the REINDEX option for articles as a reference: you can create batches over a counter range and add/update each batch. The catch is that Liferay still calls the default addUser internally, so it remains an iterative approach.
Without the service layer: write a custom script and hit the DB directly in one pass. This creates the users, but then you have to take care of the other Liferay tables that need the userId or related data inserted.
I have a field on one of my base page types which I need to update programmatically from an external data feed, so that it can be included in my Smart Search index.
The documents are versioned, but I want to update the published value of this field regardless of checkout state, and obviously avoid any sort of overwrite when documents are checked in.
The field will not appear on the editor form -- or ideally, would conditionally display for Global Admins.
It appears that using the API to update the document without doing a CheckOut fails silently. However, if I do a CheckOut/Update/CheckIn on a checked-out page, I assume the author will lose their work?
Any way to handle this "versionless" field via the Kentico data model and API?
I don't think there is a way around updating checked out pages. You can update the page type table directly, but as you mentioned, it will be overwritten when they check in. You could update the version history I believe to make changes to the current data that is checked out, but again, I think that will be lost if the user cancels.
The only way I can think of to solve your issue is to create another table that maps the values you want to the page. Then you don't have to worry about the pages being checked out, you just need to grab the documentID or something. Since the value isn't displayed to the editor, you just have a field that does a lookup on this table.
The preferred and right way is using the API but as you stated, it causes problems if a user has something already checked out and working on it or it's in workflow and not published yet.
If the field you're updating is page-type specific, the one approach I can think of is going directly to the database and performing an update on the page type's field.
Note: this is not recommended unless you know specifically what you're doing and have done full testing on it
The downside of going directly to the database is that it will not update the current version, since you're using check in/out and workflow. You will also need to update the checked-out and current versions, which means you need to:
Go to the Document itself in the cms_documents table and get the document you are working with.
Then, using the fields DocumentCheckedOutVersionHistoryID and DocumentPublishedVersionHistoryID, you can get the version history IDs of the document from the CMS_VersionHistory table.
Then you can perform an update to the CMS_VersionHistory and your custom page type fields.
You will then need to look in the CMS_WorkflowHistory table and find out if that document is in workflow and in what step.
After you have that workflow history step, use the VersionHistoryID field to go back to the CMS_VersionHistory table and update that record with your data.
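An illustrative sketch of those steps as plain SQL driven from JDBC (Kentico itself is .NET; this only shows the database side in runnable form). The page type table content_mypagetype, its MyField column, and the connection details are assumptions; the other table and column names are the ones named above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class DirectFieldUpdate {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:sqlserver://localhost;databaseName=Kentico"; // assumption
            try (Connection con = DriverManager.getConnection(url, "user", "pass")) {

                // Steps 1-2: find the document and its version history IDs.
                long checkedOutVhId;
                long publishedVhId;
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT DocumentCheckedOutVersionHistoryID, " +
                        "       DocumentPublishedVersionHistoryID " +
                        "FROM cms_documents WHERE DocumentID = ?")) {
                    ps.setLong(1, 1234); // the document you are working with
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        checkedOutVhId = rs.getLong(1);
                        publishedVhId = rs.getLong(2);
                    }
                }
                System.out.println("checked-out VH: " + checkedOutVhId
                        + ", published VH: " + publishedVhId);

                // Step 3 (page type side): update the published value in the
                // page type's own table.
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE content_mypagetype SET MyField = ? WHERE MyPageTypeID = ?")) {
                    ps.setString(1, "new value");
                    ps.setLong(2, 42);
                    ps.executeUpdate();
                }

                // Steps 3-5: the checked-out and published versions also live
                // as rows in CMS_VersionHistory (checkedOutVhId/publishedVhId),
                // whose stored version data must be patched the same way, and
                // CMS_WorkflowHistory must be checked for the current step.
            }
        }
    }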
Again, not an elegant solution since you are using check in/out and workflow but after some trial and error and testing you should be able to figure it out.
UPDATE
You may also be able to add a custom table or some other linked database table which will allow you to create a global handler. The linked table would be where you perform your updates via API and other calls without versioning or workflow. Then when a user updates a specific page type you could do a check to see when the last time that linked table was updated and update the field(s) you need on update of that particular page (of course by node and document IDs).
Unfortunately you'll have to check it in and out with the API. See examples here.
Also you might need to publish it in order to reflect changes on the live site.
I'm using Liferay 6.2.
My need is to add one user in LR, with a specific userID.
Or alternately, update a userID with another value.
The standard addUser service does not provide a way to specify the userID, and neither does updateUser.
I would like to understand how LR chooses the ID for a new user, and whether I can modify it.
thanks!
Like in almost any database-driven application, they're assigned in sequence. And no, you don't get to choose anything; it's taken care of by the framework. The ID needs to be unique: you can't add another user with the same ID, and you must be sure that a user with that ID will never be created in the future. Thus: if you used an ID that has already been given out, you'd have a duplicate now. If you used one that has not yet been given out, you'd have a duplicate some time in the future, when the sequence of IDs reaches that value and the framework assigns the same ID for the second time.
If you have an architecture that relies on a specific ID, your architecture is wrong. Rethink the problem and change the architecture and whatever you've already done to implement it.
LR core services use a CounterService to automatically assign the userId (and plugin developers should do the same)... so all the ID-generating code is properly wrapped inside the service methods, which create a number of rows in different tables when creating a user.
I agree with the previous comment, "If you have an architecture that relies on a specific ID, your architecture is wrong"... but there is a tip you can use.
Do you know Expando in LR? It enables you to add virtual columns to a DB entity... using it, you can create a virtual column "myExternalId" on the table "User_" (entity "User") and store the ID you need there. Then modify your code to use the field myExternalId instead of userId.
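A minimal sketch of that tip on Liferay 6.2, using the ExpandoBridge that every entity exposes (the "myExternalId" column must exist first, e.g. created via the Custom Fields UI):

    import com.liferay.portal.model.User;
    import com.liferay.portal.service.UserLocalServiceUtil;

    public class ExternalIdExample {

        // Stores the external ID in the virtual column on User_.
        public static void tagUser(long userId, long externalId) throws Exception {
            User user = UserLocalServiceUtil.getUser(userId);
            user.getExpandoBridge().setAttribute("myExternalId", externalId);
        }

        // Reads it back; use this wherever the code used userId before.
        public static long getExternalId(long userId) throws Exception {
            User user = UserLocalServiceUtil.getUser(userId);
            return (Long) user.getExpandoBridge().getAttribute("myExternalId");
        }
    }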
What I need
I have a custom entity with multiple fields. The Admin role has "god" access. All other roles except one have read-only access. The one non-admin role with update access should only be able to update a single field.
What I believe to be true
I believe I have three main options to implement this requirement:
1. Enable update access to the role for that entity, then write JavaScript to disable all fields on the form for that role except the one I want that role to be able to edit.
2. Enable update access to the role for that entity, then create a new form that disables all fields for that role except the one I want that role to be able to edit.
3. Enable update access to the role for that entity, then turn on field security for each field, using it to disable access to every field except the one I want them to edit.
What's the Best Practice?
What options should I choose?
If I go with options 1 or 2, will the user be able to edit the field on the bulk edit form?
Option 1: From a user perspective, I think it's confusing when a form opens with fields enabled that then get locked down. Plus, someone could possibly get data in before the fields are locked. You'd have to combine this with a plugin to prevent changes to the fields you don't want changed.
Option 2: I like this option better, although again, the field can be unlocked if someone knows what they're doing, so a plugin to double-check would be nice.
Option 3: This avoids having to double-check in a plugin, but you have to rely on the admin correctly setting up security for new fields going forward. If that's not a concern, this might be best.
As for bulk edit: it is a global privilege, so they'd have bulk edit on all entities. Also, the bulk edit form does not load scripts, which knocks out option 1. If it's just this one field, I might leave the privilege locked down and provide my own Bulk Edit button on the grid that shows a custom page containing just that one field, then handle the updates through script.
Option 2 is most likely best; alternatively, put the fields in the header or footer rather than as read-only fields on the form.
This also means the fields won't be available to bulk edit, but other methods such as data import or workflows would let users get round this if they know how and have rights to do such things.
Option 3, field security, is the most robust and works in all scenarios.
Possible option 4: create another entity to contain those fields and apply different security to that entity. If created as a child, show the record in a grid on the form with the values included in the view. If it is a parent, you could use methods such as showing the values via an HTML web resource page included on the form.
I'm wondering what strategies people are using to handle the creation and editing of an entity in a master-detail setup. (Our app is an internet-enabled desktop app.)
Here's how we currently handle this: a form is created in a popup for the entity that needs to be edited, which we give a copy of the object. When the user clicks the "Cancel" button, we close the window and ignore the object completely. When the user clicks the "OK" button, the master view is notified and receives the edited entity. It then copies the properties of the modified entity into the original entity using originalEntity.copyFrom(modifiedEntity). In case we want to create a new entity, we pass an empty entity to the popup which the user can then edit as if it was an existing entity. The master view needs to decide whether to "insert" or "update" the entities it receives into the collection it manages.
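For reference, a bare-bones sketch of that workflow; Customer is a stand-in entity and the popup is reduced to a boolean, purely to make the flow concrete:

    import java.util.ArrayList;
    import java.util.List;

    class Customer {
        long id = -1;                 // -1 = not yet persisted (insert vs. update)
        String name;

        Customer copy() {
            Customer c = new Customer();
            c.id = id;
            c.name = name;
            return c;
        }

        void copyFrom(Customer other) {
            this.name = other.name;   // id intentionally not copied;
        }                             // the original reference stays stable
    }

    public class MasterView {
        private final List<Customer> customers = new ArrayList<>();

        // Editing: the popup works on a copy; OK merges it back,
        // Cancel simply discards the copy.
        void edit(Customer original, boolean okPressed) {
            Customer workingCopy = original.copy();
            workingCopy.name = "edited";          // stands in for the popup
            if (okPressed) {
                original.copyFrom(workingCopy);
            }
        }

        // Creating: an empty entity is edited like an existing one; the
        // master decides insert vs. update later by looking at the id.
        void create(boolean okPressed) {
            Customer fresh = new Customer();
            fresh.name = "new customer";
            if (okPressed) {
                customers.add(fresh);             // id == -1, so: insert
            }
        }
    }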
I have some questions and observations on the above workflow:
who should handle the creation of the copy of the entity? (master or detail)
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Can anyone share tips & tricks or experiences? Thanks!
Edit: I know there is no absolute wrong or right answer to this question, so I'm just looking for people to share thoughts and pros/cons on the way they handle master/details situations.
There are a number of ways you could alter this approach. Keep in mind that no solution can really be "wrong" per se. It all depends on the details of your situation. Here's one way to skin the cat.
who should handle the creation of the copy of the entity? (master or detail)
I see the master as an in-memory list representation of a subset of persisted entities. I would allow the master to handle any changes to its list. The list itself could be a custom collection. Use an ItemChanged event to fire a notification to the master that an item has been updated and needs to be persisted. Fire a NewItem event to notify the master of an insert.
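A sketch of such a collection (the ItemChanged/NewItem events are modeled as plain callbacks; the names are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    class ObservableList<T> {
        private final List<T> items = new ArrayList<>();
        private final List<Consumer<T>> itemChangedListeners = new ArrayList<>();
        private final List<Consumer<T>> newItemListeners = new ArrayList<>();

        void onItemChanged(Consumer<T> listener) { itemChangedListeners.add(listener); }
        void onNewItem(Consumer<T> listener)     { newItemListeners.add(listener); }

        void add(T item) {
            items.add(item);
            newItemListeners.forEach(l -> l.accept(item));     // "NewItem" event
        }

        // Called by the detail view after an OK press.
        void fireItemChanged(T item) {
            itemChangedListeners.forEach(l -> l.accept(item)); // "ItemChanged" event
        }

        T get(int index) { return items.get(index); }
    }

The master would subscribe once, e.g. list.onItemChanged(entity -> persist(entity)), and no longer care which view triggered the change.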
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
Instead of using copyFrom(), I would pass the existing reference to the details popup. If you're using an enumerable collection to store the master list, you can pass the object returned from list[index] to the details window. The reference itself will be altered so there's no need to use any kind of Replace method on the list. When OK is pressed, fire that ItemChanged event. You can even pass the index so it knows which object to update.
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Are changes not immediately persisted? Use a Hibernate Session with the Unit of Work pattern to determine what's being inserted and what's being updated. There are more examples of Unit of Work out there. You might have to check out some blog posts by the .NET community if there's not much on the Java end. The concept is the same animal either way.
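A minimal sketch of the Session-as-unit-of-work idea on the Java/Hibernate side (the entity arguments are assumed to be mapped classes):

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class UnitOfWorkExample {
        public static void saveChanges(SessionFactory sessionFactory,
                                       Object edited, Object created) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.update(edited);   // existing entity -> UPDATE on flush
                session.save(created);    // new entity -> INSERT, id assigned
                tx.commit();              // the unit of work flushes here
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }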
Hope this helps!
The CSLA library can help with this situation a lot.
However, if you want to implement it yourself:
You have a master object, the master object contains a list of child objects.
The detail form can edit a child object directly. Since everything is a reference type, the master object is automatically updated.
The issue is knowing that the master object is dirty, and therefore should be persisted to your database or whatnot.
CSLA handles this with an IsDirty() property. In the master object you would query each child object to see if it is dirty and, if so, persist everything (as well as tracking whether the master object itself is dirty).
You can also handle this with the INotifyPropertyChanged interface.
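A rough Java analogue of that idea using java.beans.PropertyChangeSupport (the .NET equivalent being INotifyPropertyChanged); the entity and its single property are illustrative:

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    class TrackedEntity {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private boolean dirty;
        private String name;

        void addPropertyChangeListener(PropertyChangeListener listener) {
            pcs.addPropertyChangeListener(listener);
        }

        void setName(String newName) {
            String old = this.name;
            this.name = newName;
            dirty = true;                                // mark for persistence
            pcs.firePropertyChange("name", old, newName);
        }

        boolean isDirty() { return dirty; }
        void markClean()  { dirty = false; }             // call after a save
    }

The master object can then either poll isDirty() on each child before persisting, or subscribe as a listener and track dirtiness itself.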
As for some of your other questions:
You want to separate your logic. The entity can handle storage of its own properties, and integrity rules for itself, but logic for how different object interact with each other should be separate. Look into patterns such as MVC or MVP.
In this case, creation of a new child object should either be in the master object, or should be in a separate business logic object that creates the child and then adds it to the parent.
For IDs, using GUIDs can save you quite a few problems, because you don't have to talk to the database to determine a correct ID. You can keep a flag on the object recording whether it is new (and therefore should be inserted rather than updated).
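A small sketch of that approach (names illustrative):

    import java.util.UUID;

    class GuidEntity {
        final UUID id = UUID.randomUUID(); // valid immediately, no DB round-trip
        private boolean isNew = true;      // decides INSERT vs. UPDATE

        boolean isNew() { return isNew; }

        void markPersisted() {             // call after the INSERT succeeds
            isNew = false;
        }
    }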
Again, CSLA handles all of this for you, but does have quite a bit of overhead.
Regarding undo on cancel: CSLA has n-level undo implemented, but if you are trying to do it by hand, I would either use your CopyFrom function or refresh the object's data from the persistence layer on cancel (re-fetch).
I just implemented such a model, though not with NHibernate; I use my own code to persist objects in an Oracle DB.
I used the master-detail concept in the same web form.
I have a master entity grid, and on the detail action command I open a panel just below the clicked master record row.
In detail add mode, I populate an empty entity whose ID is generated as a negative number from a static field. On the Save Detail button, I save that entity into the details list of the master record in the ASP.NET session.
For detail edit/view, I populate the detail panel with the selected detail through AJAX calls using jQuery and append the panel just below the clicked row.
On the Save button, I persist the master record (containing the list of details) from session to the database.
It worked well for me, especially when a master needs multiple details filled in.
Also, if you like, you can use a jQuery modal to pop up that panel instead of appending it below the row.
Hope it helps :)
Is there a standard way to associate custom properties with a user? I need to store the number of items per page a user selected in a grid of a document library separately for each user and document library.
Edit:
Sorry about the vagueness; I wanted to do it programmatically. It seems I've found the solution: the UserProfileManager class. I'm now looking into whether there is a limit on the number of properties you can save this way for a user, because the easiest way of saving page sizes on a per-user, per-document-library basis seems to be using the GUIDs of views as property names and the page sizes as values. I don't know how efficient that is; it depends on how SharePoint stores these properties.
No, you would need to create custom code to store the data.
Given the potential amount of data created, it may be wise to store it in a separate database.
This would give greater flexibility in the way the data can be manipulated and retrieved.
Your question is a bit vague. Are you looking to write any custom code? You could do this many ways, so it is difficult to say without knowing more about what you want.
Using custom code you could set up a workflow or event handler to respond to item events and record the information and store it using the User's Profile or as an SPPersistedObject.
If you want a less developer centric way to do it you can use auditing and simply do reporting on your audit results.
You could set up a list to store the selection data, then use events/AJAX on the document list to push ticked/unticked items into the selection list (storing user, library, and document as a minimum).
If you don't want a separate list, you could create a field in your document library that stores which users have a given document tagged... You'll still need some kind of event/AJAX to update the list when a user ticks/unticks the box. Crude :)