I have Liferay 6.2.
I have a portlet that uses a MySQL database.
I have a table persons:
Id | Name | Info
Id is auto-increment, so in service.xml I have:
<column name="Id" type="long" primary="true" id-type="increment" />
I have a development machine.
I have 100 rows already available in the MySQL table persons.
I inserted rows into the persons table via a SQL query, using ids from 300 up to 600.
I used a Liferay backend tool to update the database cache. When I enter a new row with the application (portlet), the id is 601. It's correct.
I have a production machine, where I performed the same operation except for clearing the database cache, which for some reason I cannot do.
When I insert a new row with the portlet, the id is 101 and not 601.
When the id reaches 199, what will happen when I insert a new row?
What can I do to solve the problem?
Given that you see the ids jump in the hundreds, you might just be seeing the default increment value for Liferay entities:
From portal.properties:
# Set the number of increments between database updates to the Counter table.
# Set this value to a higher number for better performance.
# Defaults:
counter.increment=100
You can test for this behavior either by creating 101 objects (where object #101 would then get 700 as its id), or by shutting down the server that you used to create object #101, restarting it, and then creating a new object: the granularity of 100 means that the next 100 ids will be allocated and #700 will be picked next.
When the id reaches 199, what will happen when I insert a new row?
It's easy to find out...
What can I do to solve the problem?
Validate that there is a problem. There might be no need to solve anything.
I have a development machine. I have 100 rows already available in the MySQL table persons. I inserted rows into the persons table via a SQL query, using ids from 300 up to 600.
It's quite a bad idea to do so when you otherwise rely on automatic id construction. Liferay's Service Builder can't rely on id generation in the database, as every database behaves slightly differently in that regard.
Liferay's id generation - when it follows the mechanism that I've described above - is done through CounterLocalService. You can make calls to this service to advance the ids for your entity (use your entity's class name as the key) in case you do something manually to your database.
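If you end up needing to do that, a minimal sketch could look like the one below. It assumes your entity's ids really are handed out by the counter service (as described above); the entity class name and the upper bound of 600 are illustrative values taken from the question.

import com.liferay.counter.service.CounterLocalServiceUtil;

public class CounterFix {

    public static void advancePersonCounter() throws Exception {
        // Fully qualified class name of the ServiceBuilder entity; adjust to yours.
        String counterName = "com.example.model.Person";

        // Ask the counter for ids until it has moved past the highest id
        // that was inserted by hand (600 in the question).
        long next = CounterLocalServiceUtil.increment(counterName);
        while (next <= 600) {
            next = CounterLocalServiceUtil.increment(counterName);
        }
    }
}

This is deliberately conservative: it only uses increment(String), which hands out one id per call, so it cannot skip further ahead than you intend.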
You should never change any Liferay entity in the database directly. If you have your own ServiceBuilder-generated entities you might have a point, but my recommendation would be to choose one method of writing/accessing the data and stick with it.
I am facing an issue with extended attributes on a document (trying to extend a document table). I created the original table’s PK (FDOC_NBR) in the extended table and linked the two via a foreignKey of the customized original table’s ojb entry (as an “extension” reference-descriptor). I created the bo and dd for the extension and customized the original document’s dd to add the new attributes. On the extended BO itself I also added members (with setters and getters) for the 2 new columns, plus for the PK column of documentNumber. I also added the new attributes to the document’s jsp. The pertinent module definition was already extended to include custom dd, ojb, etc. files.
Indeed, when opening the document the new fields are shown; however, when trying to submit the document (regardless of doing anything with the new fields) I get this error:
Error Details: OJB operation; SQL []; ORA-01400: cannot insert NULL
into ("KFSTEM"."TEM_TRVL_ARRANGER_DOC_EXT_T"."FDOC_NBR") ; nested
exception is java.sql.SQLIntegrityConstraintViolationException:
ORA-01400: cannot insert NULL into
("KFSTEM"."TEM_TRVL_ARRANGER_DOC_EXT_T"."FDOC_NBR")
Seems like somehow the system tries to insert a value of NULL into the extension’s PK field instead of the actual document number. Trying to debug this, in the action’s route method and all the way down to DocumentDaoOjb.save (which is as far down as I can go), I see that the document with the real doc number is passed on, so the problem seems to be purely with OJB trying to set this number on the extension table.
Does anyone have any experience with extended attributes on documents that could help shed some light on this?
KFS is using the KNS, and in the Kuali Nervous System, the primary key on the extended attributes object must be set through manual intervention.
In this case, it looks as if you're adding an extended attribute to a transactional document, the Travel Arranger document (TAA), which simplifies things. Basically, you'll need to extend org.kuali.kfs.module.tem.document.TravelArrangerDocument and override prepareForSave to set the document number there (it may be set already since prepareForSave should be called several times during the routing process, but there's no real harm from overwriting that information as the base document's number will remain the same).
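A rough sketch of that override, assuming the extension business object is reachable through getExtension() (inherited from PersistableBusinessObjectBase) and exposes a setDocumentNumber setter; the subclass and extension class names here are made up:

import org.kuali.kfs.module.tem.document.TravelArrangerDocument;

public class MyTravelArrangerDocument extends TravelArrangerDocument {

    @Override
    public void prepareForSave() {
        super.prepareForSave();

        // Copy the parent document's number onto the extension record so OJB
        // has a non-null FDOC_NBR when it inserts into the extension table.
        // TravelArrangerDocumentExtension stands in for your own extension BO.
        TravelArrangerDocumentExtension extension =
                (TravelArrangerDocumentExtension) getExtension();
        if (extension != null) {
            extension.setDocumentNumber(getDocumentNumber());
        }
    }
}

If your extension is wired up differently (for example held as a plain member on the document rather than via getExtension()), set its documentNumber in the same place.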
Hope this helps!
With Entity Framework it is possible to enable migrations and create migration steps. But is there an intermediate way where it is possible to change the models and take care of database schema changes yourself? I don't want to drop the database, because there are future production scenarios.
Now - without enabling migrations - I use code first, and I create another property in a DbSet; let's assume for example int NewField {get; set;} in table 'ExistingTable'.
And when in SQL I update my schema with
Alter table ExistingTable add column NewField int not null
the database knows about the new field, and Entity Framework / C# knows the property, but when running, there is some hidden check that still wants to drop my database because of the model change.
Question: can I override a certain setting, in such a way that the initial 'Code First' can be transformed to database first?
Removing the __MigrationHistory table from the database (Azure) worked fine for me. I made my (simple) database changes myself and published the code, and it all runs fine. There is an alternative; see EF Code First Migrations Deployment to an Azure Cloud Service. For a simple one-way patch (with no change history needed), removing __MigrationHistory works fine.
I have configured DataImportScheduler in Solr, which hits a URL specified in the params attribute in the properties file. Can it handle delta imports without making a change to the database schema?
DataImportScheduler's sole purpose is to fire an HTTP POST command with the params and interval specified in its properties file, to allow easy scheduling on Windows servers (where there are no cron jobs). It has nothing to do with the db schema.
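In other words, the scheduler is just doing something roughly equivalent to the following on a timer. The host, port and core name are assumptions; command=delta-import, clean and commit are standard DataImportHandler request parameters.

import java.net.HttpURLConnection;
import java.net.URL;

public class DeltaImportTrigger {

    public static void main(String[] args) throws Exception {
        // POST to the DataImportHandler endpoint, asking it to run a delta import.
        URL url = new URL("http://localhost:8983/solr/mycore/dataimport"
                + "?command=delta-import&clean=false&commit=true");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");

        // DIH answers right away; the import itself runs asynchronously and
        // can be polled with command=status.
        System.out.println("HTTP " + connection.getResponseCode());
        connection.disconnect();
    }
}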
Are you not allowed to add a nullable timestamp column to your tables?
DIH cannot handle delta imports without a last-modified column in your tables.
DIH delta works by comparing the last successful build timestamp with the last-modified column and picking up only the rows modified after the last successful build.
e.g.
<entity name="item" pk="ID"
        query="SELECT * FROM item"
        deltaImportQuery="SELECT * FROM item WHERE id = '${dataimporter.delta.id}'"
        deltaQuery="SELECT id FROM item WHERE last_modified > '${dataimporter.last_index_time}'">
However, if there is no such indicator in the table, the delta import will not be able to identify the newly added/updated rows.
Also, deletes should probably be soft deletes so that the removed rows can be identified.
If you do hard deletes, the documents will still be in your index even though they have been removed from the tables.
I have an iOS 5 app which does not create any data; it simply makes a GET call to a REST web service and populates the SQLite database with those records. The initial GET works great when there are no records in the local database. However, when I make subsequent calls, I will only be returning a subset of records whose data has changed since the last GET. But what is happening is that the records are just being added again instead of updating the existing records.
I have an ID field which is the primary key (or should be) and when a record comes in whose ID already exists, I want that data to be updated. If that ID does not exist, it should be an insert.
I didn't see a way to set my ID field as a 'primary key' in the data model in Xcode. I tried doing this in my didFinishLaunchingWithOptions method:
userMapping.primaryKeyAttribute = @"id";
But that alone didn't really seem to do anything.
This is my call to actually perform the GET:
// Load the object model via RestKit
[objectManager loadObjectsAtResourcePath:[@"/synchContacts" appendQueryParams:params] delegate:self];
Which seems to do everything automagically. I am lost at this point as to where I should be putting logic to check whether the ID exists, and if so do an update instead of an insert.
As of the latest RestKit version (0.23) you can define the primary key like this:
[_mapping addAttributeMappingsFromDictionary:@{ @"id" : @"objectId", @"name" : @"name" }];
[_mapping setIdentificationAttributes:@[ @"objectId" ]];
where objectId is your primary key on the Core Data object.
You seem to be doing it correctly and when your didLoadObjects callback happens you should be able to query Core Data for the objects you need.
You might be having an issue with the way your fetch requests are being set up. With the latest RestKit you can use RKObjectMappingProvider's
- (void)setObjectMapping:(RKObjectMappingDefinition *)objectMapping forResourcePathPattern:(NSString *)resourcePathPattern withFetchRequestBlock:(RKObjectMappingProviderFetchRequestBlock)fetchRequestBlock;
function and have the fetchRequestBlock fetch the proper data.
RestKit doesn't really handle partial update requests very well out of the box though. You might have more luck on the RestKit google group which is very active.
Quote:
I didn't see a way to set my ID field as a 'primary key' in the data model in Xcode. I tried doing this in my didFinishLaunchingWithOptions method:
userMapping.primaryKeyAttribute = @"id";
Keep in mind, the 'primaryKeyAttribute' is the one from your API payload, NOT a Core Data id, which Core Data manages on its own. RestKit then maps the (invisible) Core Data primary key to the specified JSON key.
I am looking for a query that will work on Sharepoint 2003 to show me all the documents created/touched by a given userID.
I have found tables with the documents (Docs) and tables for users (UserInfo, UserData)
but the relationship between them seems a bit odd: there are 99,000 records in our userdata table and 12,000 records in userinfo, yet we have 400 users!
I suppose I was expecting a simple one-to-many relationship, with a user table having 400 records that I could join to the documents table, but I see that's not the case.
Any help would be appreciated.
Edit:
Thanks Bjorn,
I have translated that query back to the Sharepoint 2003 structure:
select d.*
from userinfo u
join userdata d
  on u.tp_siteid = d.tp_siteid
  and u.tp_id = d.tp_author
where u.tp_login = 'userid'
  and d.tp_iscurrent = 1
This gets me a list of siteid/listid/tp_ids. I'll have to see if I can trace those back to a filename/path.
All: any additional help is still appreciated!
I've never looked at the database in SharePoint 2003, but in 2007 UserInfo is connected to Sites, which means that every user has a row in UserInfo for each site collection (or the equivalent 2003 concept). So to identify what a user does you need both the site id and the user's id within that site. In 2007, I would begin with something like this:
select d.* from userinfo u
join alluserdata d on u.tp_siteid = d.tp_siteid
and u.tp_id = d.tp_author
where u.tp_login = '[username]'
and d.tp_iscurrentversion = 1
Update: As others write here, it is not recommended to go directly into the SharePoint database, but I would say use your head and be careful. Updates are an all-caps no-no, but selects depends on the context.
DO NOT QUERY THE SHAREPOINT DATABASE DIRECTLY!
I wonder if I made that clear enough? :)
You really need to look at the object model available in C#, you will need to get an SPSite instance for a SiteCollection, and then iterate over the SPList instances that belong to the SPSite and the SPWeb objects.
Once you have the SPList object, you will need to call GetListItems using a query that filters for the user you want.
That is the supported way of doing what you want.
You should never go to the database directly: SharePoint isn't designed for that at all, and there is no guarantee (actually, there's a specific warning) that the structure of the database will be the same between versions and upgrades. Additionally, when content is spread over several content databases in a farm, there is no guarantee that a query that runs on one content database will do what you expect on another.
When you look at the object model for iteration, also note that you will need to Dispose() the SPSite and SPWeb objects that you create.
Oh, and yes you may have 400 users, but I would bet that you have 30 sites. The information is repeated in the database per site... 30 x 400 = 12,000 entries in the database.
If you are going to use that query in SharePoint, you should know that creating views on the content database or querying directly against the database seems to be a big no-no. A workaround could be some custom code that iterates through the object model and writes the results to your own database. This could either be timer based or based on an event trigger.
You really shouldn't be doing SELECTs that take locks either, i.e. you should add WITH (NOLOCK) to your queries. Some parts of the system are very timeout sensitive, and if you start introducing locks that the system wasn't expecting, you can see the system freak out.
But really, you should be doing this via the object model. Mess around with something like IronPython and experimenting with the OM is almost downright pleasant.