Automatically load multi-assets from a saved query into a PM-generated WO - Maximo

Maximo 7.6.1.1:
I have PMs that generate WOs.
And I have saved asset queries that pertain to each of the PMs.
I plan to store the saved asset query names in a SAVEDQUERY field in the respective PMs.
Upon WO creation (via a PM), I want to automatically load the assets from the associated saved query into the multi-asset section of the WO.
Is it possible to do this?

Yes, it is possible. Use an attribute launch point (Run Action event) on WORKORDER.PM to run an automation script that navigates back to the PM, reads the saved query name from your SAVEDQUERY field, loads that saved query's where clause, and runs it against the ASSET object to populate the work order's multi-asset records. The caveat is that the user the script runs as may have different permissions and data restrictions than the user who owns (and presumably tested) the saved query.
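For illustration, here is a rough Jython sketch of that script (Maximo automation scripts are typically Jython). It assumes a WORKORDER-to-PM relationship named PM, a custom SAVEDQUERY field on PM, that saved queries live in the QUERY object with QUERYNAME and CLAUSE attributes, and that multi-asset rows are added through the MULTIASSETLOCCI relationship; verify all of these names against your own configuration before relying on it.

    # Rough sketch for an attribute launch point (Run Action event) on WORKORDER.PM.
    # "mbo" is the implicit work order Mbo supplied by the launch point.
    # Relationship, object, and attribute names are assumptions -- check them in
    # Database Configuration first.
    if not mbo.isNull("PMNUM"):
        pm = mbo.getMboSet("PM").getMbo(0)   # assumes a WORKORDER -> PM relationship named PM
        if pm is not None and not pm.isNull("SAVEDQUERY"):
            queryName = pm.getString("SAVEDQUERY")

            # Look up the saved query's where clause (QUERY object, CLAUSE attribute).
            querySet = mbo.getMboSet("$savedq", "QUERY", "queryname = '" + queryName + "'")
            query = querySet.getMbo(0)

            if query is not None:
                whereClause = query.getString("CLAUSE")

                # Run the saved where clause against ASSET as the script's user --
                # this is where the permissions/data-restriction caveat applies.
                assetSet = mbo.getMboSet("$assets", "ASSET", whereClause)
                malSet = mbo.getMboSet("MULTIASSETLOCCI")

                asset = assetSet.moveFirst()
                while asset is not None:
                    mal = malSet.add()
                    mal.setValue("ASSETNUM", asset.getString("ASSETNUM"))
                    mal.setValue("SITEID", asset.getString("SITEID"))
                    asset = assetSet.moveNext()

Because the where clause runs as whoever triggers the PM, test with a non-admin user to confirm the asset list matches what the query's owner sees.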

Related

Is it possible to link the opportunity and project with a different constituent in NetSuite?

I am not able to populate the system field "Project" on an opportunity record if the project and opportunity records have a different constituent.
I created the project from the opportunity, but for some reason the opportunity was not linked to the created project. I tried to connect them (populate the Project field on the opportunity) using a script, but I get the error: Invalid job reference key for the entity. It seems that NetSuite doesn't allow a project to be linked to an opportunity with a different client. Is there a way to populate the system field "Project" on the opportunity and link the records to each other without changing the constituent on either record?
You would have to create a new custom body or column field that allows you to select any project (regardless of which constituent/institution it is attached to). From there, consider why you want the two linked, and note that NetSuite will not perform any automation based on this custom field; you will have to make other modifications depending on business needs.

Variables being overwritten when submitting forms in Python (using Flask)

Background: I'm using Python 3 (with Flask and Bootstrap) to create a website with fields to capture data. An API call pulls data from a record in a HighQ database and, once the form is submitted, puts the updated/new data back into the same HighQ record (and also into a SQL database). This is all working successfully.
This is a tool that is being used internally within the company I work for, and hosted on the intranet, so I haven't set up user logins as I want it to be as quick and easy as possible for the team to use this form and update records.
Issue: When two or more instances of the form are in use at the same time (whether in two windows on one person's computer, or by two people testing on their own computers), the data submitted from one person's form overwrites the destination record that the other person has called into their form. I'm struggling to find a solution that ring-fences the data each time the form is launched so that this does not happen.
I've tried a number of things to resolve this (including lots of stackoverflow searches):
Originally I used global variables to pass data across different routes. I thought the issue was that everyone launching the form could access the same global variables, so I removed all global variables and instead read the content of each field on the site into a local variable within the route that needed it; any values that could not be obtained from a field I saved to the session.
The problem still persists and I'm now at a loss as to what to try next. Any help on this would be much appreciated.
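For reference, a minimal sketch of the pattern described above (per-request locals plus Flask's session instead of module-level globals); the route, field, and key names are hypothetical:

    # Minimal sketch: per-request locals plus Flask's session instead of globals.
    # The session is cookie-backed and scoped to one browser, so two users no
    # longer share state; two tabs in the same browser still share one session,
    # which is why the record ID is also carried in the form itself.
    from flask import Flask, request, session, render_template

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"   # required for session support

    @app.route("/record/<record_id>")
    def load_record(record_id):
        # Values not rendered into a form field go into this user's session.
        session["record_id"] = record_id
        return render_template("form.html", record_id=record_id)

    @app.route("/submit", methods=["POST"])
    def submit():
        # Everything is read from this request's form or this user's session,
        # never from module-level state shared between users.
        record_id = request.form.get("record_id") or session.get("record_id")
        client_name = request.form["client_name"]
        # ... push record_id / client_name back to HighQ and SQL here ...
        return "Saved record %s" % record_id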

Kibana - programmatically return saved search objects and associated data via REST API

I am currently working on an Excel export tool for Kibana using node.js. Right now I am trying to figure out whether it is possible to export the data associated with a saved search within my selected Kibana index.
Here is an example of what I am trying to do:
A user provides authorization and selects a project within Kibana that they have access to.
Once the user has selected a project, any saved searches associated with that project are populated into the UI.
The user selects a saved search, report name, and date range, and submits the form. The application then makes a request to the Kibana index and returns the data associated with the selected search within the given time range.
I have finished the authorization and UI, but I am currently stuck trying to figure out how to return the saved search objects within a specific project. I am also unsure how to construct the request to the Kibana index that would return the data associated with the selected saved search within the given time frame.
Does anyone have any experience with something similar to this? I am also very new to Elasticsearch; is this sort of functionality possible?
Answered by a wonderful Elastic team member here:
https://discuss.elastic.co/t/exporting-saved-search-data/90843
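For orientation, one way to sketch this (the question uses node.js, but these are plain Elasticsearch REST calls, shown here with Python's requests for brevity). It assumes a 5.x/6.x-era setup where saved objects live in the .kibana index with a "type" field and the saved search's query is stored in kibanaSavedObjectMeta.searchSourceJSON; the host, index names, and timestamp field are placeholders, so adjust for your version.

    # Rough sketch: list saved searches from the .kibana index, then re-run the
    # selected search's stored query against the data index with a date range.
    import json
    import requests

    ES = "http://localhost:9200"   # placeholder Elasticsearch host

    # 1. List saved search objects (filter further by title/project as needed).
    resp = requests.post(
        ES + "/.kibana/_search",
        json={"query": {"term": {"type": "search"}}, "size": 100},
    )
    saved_searches = resp.json()["hits"]["hits"]

    # 2. Pull the stored query out of the selected saved search.
    selected = saved_searches[0]["_source"]["search"]
    source = json.loads(selected["kibanaSavedObjectMeta"]["searchSourceJSON"])
    stored = source.get("query", {})
    # Recent versions store {"query": "<lucene/KQL string>", "language": ...};
    # older versions store a full query_string clause -- adjust as needed.
    query_string = stored.get("query", "*") if isinstance(stored, dict) else "*"

    # 3. Re-run it against the data index, adding the user's date range.
    body = {
        "query": {
            "bool": {
                "must": [{"query_string": {"query": query_string or "*"}}],
                "filter": [{"range": {"@timestamp": {"gte": "now-7d", "lte": "now"}}}],
            }
        }
    }
    data = requests.post(ES + "/my-project-*/_search", json=body).json()
    print(len(data["hits"]["hits"]), "rows to export")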

Update Kentico document field regardless of versioning

I have a field on one of my base page types which I need to update programmatically from an external data feed, so that it can be included in my Smart Search index.
The documents are versioned, but I want to update the published value of this field regardless of checkout state, and obviously avoid any sort of overwrite when documents are checked in.
The field will not appear on the editor form -- or, ideally, it would display conditionally for global admins only.
It appears that using the API to update the document without doing a CheckOut fails silently. However, if I do a CheckOut/Update/CheckIn on a checked-out page, the author will lose their work, I assume?
Any way to handle this "versionless" field via the Kentico data model and API?
I don't think there is a way around updating checked-out pages. You can update the page type table directly, but as you mentioned, it will be overwritten when they check in. I believe you could update the version history to change the data that is currently checked out, but again, I think that will be lost if the user cancels.
The only way I can think of to solve your issue is to create another table that maps the values you want to the page. Then you don't have to worry about pages being checked out; you just need to grab the DocumentID or something. Since the value isn't displayed to the editor, you just have a field that does a lookup on this table.
The preferred and correct way is to use the API, but as you stated, that causes problems if a user already has the page checked out and is working on it, or if it's in workflow and not yet published.
If the field you're updating is specific to a page type, there is one thing I can think of: go directly to the page type's field in the database and perform an update on that field.
Note: this is not recommended unless you know specifically what you're doing and have fully tested it.
The downside of going directly to the database is that it will not update the current version, since you're using check-in/out and workflow. You will also need to update the checked-out and current versions, which means you need to:
Go to the document itself in the CMS_Document table and get the document you are working with.
Then, using the DocumentCheckedOutVersionHistoryID and DocumentPublishedVersionHistoryID fields, get the version history IDs of the document from the CMS_VersionHistory table.
Then you can perform an update to the CMS_VersionHistory and your custom page type fields.
You will then need to look in the CMS_WorkflowHistory table and find out if that document is in workflow and in what step.
After you have that workflow history step, use the VersionHistoryID field to go back to the CMS_VersionHistory table and update that record with your data.
Again, this is not an elegant solution since you are using check-in/out and workflow, but after some trial and error and testing you should be able to figure it out.
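Purely to illustrate the direct-to-database route above, a rough sketch using Python and pyodbc. The CMS_Document column names are the ones mentioned above; the page type table CONTENT_MyPageType, its MyPageTypeID key, the MyExternalField column, and the connection string are placeholders, and this should only ever be tried against a test copy of the database first.

    # Rough sketch of the direct database update described above.
    # Placeholder names: CONTENT_MyPageType, MyPageTypeID, MyExternalField.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=KenticoCMS;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    document_id = 1234                 # the document fed from the external feed
    new_value = "value-from-feed"

    # 1. Update the published value in the page type's own (coupled) table.
    cur.execute(
        """
        UPDATE pt
        SET pt.MyExternalField = ?
        FROM CONTENT_MyPageType pt
        JOIN CMS_Document d ON d.DocumentForeignKeyValue = pt.MyPageTypeID
        WHERE d.DocumentID = ?
        """,
        new_value, document_id,
    )

    # 2. Find the checked-out / published version rows referred to above; the
    #    versioned copy of the document lives in CMS_VersionHistory and would
    #    need the same change applied (not shown here).
    cur.execute(
        """
        SELECT DocumentCheckedOutVersionHistoryID, DocumentPublishedVersionHistoryID
        FROM CMS_Document
        WHERE DocumentID = ?
        """,
        document_id,
    )
    print("Version history IDs to patch:", cur.fetchone())

    conn.commit()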
UPDATE
You may also be able to add a custom table or some other linked database table, which would allow you to create a global handler. The linked table would be where you perform your updates via the API and other calls, without versioning or workflow. Then, when a user updates a specific page type, you could check when that linked table was last updated and update the field(s) you need on that particular page (matched by node and document IDs, of course).
Unfortunately you'll have to check it in and out with the API. See examples here.
Also you might need to publish it in order to reflect changes on the live site.

Batch Commit for Inserting users in Liferay

We are trying to insert about 100k users in Liferay. Is there a way to have this all updated in one batch commit, instead of making separate calls to add each user?
I think yes, it's possible.
Build a custom remote service entity such as BulkUserServiceUtil.addUsers and, within it, call the standard method UserLocalServiceUtil.addUser for each user.
When the BulkUserServiceUtil method returns, the transaction is committed.
#sandeep:
Yes, Liferay does not provide a way to add/update users in bulk, because after user creation several tables are affected and the user is also indexed. But if you want to do it, I have two suggestions:
Take the REINDEX option for articles as a reference: you can create batches over a counter range and add/update each batch, but Liferay will still internally call addUser for each user by default, so it remains an iterative approach you can use.
Without the service: create a custom script and hit the DB directly once to create the users, but in that case you have to take care of the other Liferay tables that need the userID or related data inserted.
