MAXIMO Asset Management (MAM) version: 7.6.1.2:
In Work Order Tracking, I can enable flow control on a work order (WO.FLOWCONTROLLED=1).
I'm trying to figure out what happens behind the scenes when flow control is enabled, so that I can understand how it might impact other processes (e.g. workflow). From some informal testing, I've observed that it does the following:
WO can't be changed to complete until all tasks are complete
When the user completes all tasks, the WO automatically changes to complete
It's possible that it does other things too -- but I have no way of knowing.
I can't find any specific information in the documentation about what actually happens when WO.FLOWCONTROLLED=1. I've also asked IBM support, but haven't gotten a clear answer there either.
What happens when WO.FLOWCONTROLLED is enabled?
The following link should help clarify how the Flow Control Feature works and is configured in Maximo:
Understanding and Configuring Maximo’s Flow Control Feature
I am currently testing the NSPersistentCloudKitContainer.
I have strictly followed the guidelines of the new documentation, and basically everything works as desired. I use the option NSPersistentStoreRemoteChangeNotificationPostOptionKey on the store description to receive updates from the remote data store. However, the updates from the remote database are only delivered while the app is in the foreground, and I would like to update my widget based on a data change in the backend.
Does anyone have an idea how to solve this issue?
What I did so far:
Background Modes in Capabilities are enabled
Push Notifications are enabled
I called registerForRemoteNotifications
History tracking and the remote change option are enabled on the persistent store description (see the sketch after this list)
Syncing works in the foreground ✅
Syncing does not work if the app is in the background ❌
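For reference, here is a minimal sketch of the setup the list above describes; this is not code from the original post, and the model name "Model" is a placeholder:

```swift
import CoreData

// Placeholder model name; the rest mirrors the checklist above.
let container = NSPersistentCloudKitContainer(name: "Model")

guard let description = container.persistentStoreDescriptions.first else {
    fatalError("Missing persistent store description")
}
// History tracking + remote change notifications on the store description.
description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
description.setOption(true as NSNumber,
                      forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)

container.loadPersistentStores { _, error in
    if let error = error { fatalError("Failed to load store: \(error)") }
}

// Remote change notifications are delivered only while the app is in the
// foreground, which is the limitation described above.
// Keep a reference to the token for the lifetime of the observation.
let remoteChangeToken = NotificationCenter.default.addObserver(
    forName: .NSPersistentStoreRemoteChange,
    object: container.persistentStoreCoordinator,
    queue: .main
) { _ in
    // Merge persistent history / refresh the widget's data source here.
}
```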
Edit: 09.09.2020
It seems that there is nothing that we can do at the moment.
Apple Developer Technical Support answered my question a few days ago:
Thank you for contacting Apple Developer Technical Support (DTS).
The behavior and resulting limitations you describe are by design.
If you believe an alternative approach should be considered by Apple, we encourage you to file an enhancement request with information on how this design decision impacts you, and what you’d like to see done differently.
Although there is no promise that the behavior will be changed, it is the best way to ensure your thoughts on the matter are seen by the team responsible for the decision.
While a Technical Support Incident (TSI) was initially debited from your Apple Developer Program account for this request, we have assigned a replacement incident back to your account.
The bigger goal:
Writing a batch user manager targeted at classroom school environments.
The problem
I want to write a user manager that uses a GUI to add, manage and delete users for classroom environments. The program I'm working on is ltsp-manager.
Up until now, all the user management has been done by executing bash commands from a Python script, which means the whole GUI has to run as root and everything is handcrafted.
The goal
Create a D-Bus service that handles all the account management and lets the GUI run as a regular user, requiring a password from time to time.
I looked around and found that org.freedesktop.Accounts already provides a service with a lot of the functionality I need. However, it also lacks some; group management is missing entirely.
What is a good way to use the org.freedesktop.Accounts functionality and add some additional functions/methods?
Thoughts so far
Things that came to my mind include:
just redo everything - meaning a lot of duplicated work.
copy the interfaces and write functions that call the original ones
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
All my test experiments so far use python3 and pydbus, which seems to be the best choice among many.
I have never written a real-world D-Bus service, though the experiments do show some results in d-feet. This is not really a "what do I need to type" kind of question but rather a best-practice question.
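To make the third option concrete, here is a rough, untested sketch with pydbus: it calls the existing org.freedesktop.Accounts service and publishes a separate service only for the missing group methods. The bus name org.example.GroupManager, the AddGroup/ListGroups methods, and the user "alice" are made up for illustration; a real service would also need a D-Bus policy file for the name and polkit authorisation checks.

```python
from gi.repository import GLib
from pydbus import SystemBus
import grp
import subprocess

bus = SystemBus()

# Call the existing accountsservice daemon for everything it already supports.
accounts = bus.get("org.freedesktop.Accounts")      # well-known name
print(accounts.ListCachedUsers())                   # object paths of known users
print(accounts.FindUserByName("alice"))             # "alice" is a hypothetical user

# A separate, hypothetical service that only adds what accountsservice lacks:
# group management. The XML docstring is the published D-Bus interface.
class GroupManager(object):
    """
    <node>
      <interface name='org.example.GroupManager'>
        <method name='AddGroup'>
          <arg type='s' name='group_name' direction='in'/>
        </method>
        <method name='ListGroups'>
          <arg type='as' name='groups' direction='out'/>
        </method>
      </interface>
    </node>
    """
    def AddGroup(self, group_name):
        # Delegate to the usual system tool; a real service would add
        # polkit checks before doing anything privileged.
        subprocess.check_call(["groupadd", group_name])

    def ListGroups(self):
        return [g.gr_name for g in grp.getgrall()]

# Publish the new service alongside accountsservice; clients simply use both.
bus.publish("org.example.GroupManager", GroupManager())
GLib.MainLoop().run()
```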
The best long-term answer would be to fix accountsservice upstream to implement groups support. There’s already work towards that; it just needs someone to pick it up and finish it off. accountsservice is the project which provides the canonical implementation of org.freedesktop.Accounts.
The other approaches are bad because:
just redo everything - meaning a lot of duplicated work.
As you say, this is a lot of duplicated work, and then you have to maintain it all.
copy the interfaces and write functions that call the original ones
That means you have to forever keep up with changes and additions to accountsservice.
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
That doesn’t come with any additional maintenance problems, but means your service won’t integrate well with accountsservice. There might be race conditions between updates on your D-Bus objects and updates on the accountsservice objects, for example. You won’t be able to share the maintenance burden of the groups code with the (many) other users of accountsservice.
I am looking for suggestions on how to implement a message pause/resume pattern on the Spring XD / Spring Integration platform. I need to be able to park a set of messages and then resume them based on some user-driven input. I need finer-grained control than just shutting down an endpoint programmatically via the ControlBus, for example.
I looked at the Delayer endpoint and that would work to enable pausing messages on the fly based on some business logic. What I am having trouble figuring out is how to resume them on demand.
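For illustration, a delayer wired roughly like this is what I mean; it only covers the pausing side, and the 'releaseDelay' header name is just an example set by upstream business logic:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd">

    <int:channel id="park"/>
    <int:channel id="resume"/>

    <!-- Messages wait on the delayer; the per-message 'releaseDelay' header
         (an example name) is set by business logic upstream. -->
    <int:delayer id="parkDelayer" input-channel="park" output-channel="resume"
                 default-delay="60000"
                 expression="headers['releaseDelay']"/>
</beans>
```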
I also looked into the TaskScheduler to see whether it offers a way to programmatically access the scheduled tasks and/or force their execution, but it was not clear to me whether I could change the delay at runtime.
Any suggestions would be appreciated.
Thanks,
Mark
I am looking to create a simple mobile agent system that will deal with 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network service discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglet, JADE, AgentBuilder, etc. My question is: which one should I use? I also need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglet model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created a method of type agent, but I couldn't pass it through the remote method implementation. I have been reading about TCP and UDP socket programming, and I was thinking it might make more sense to do it with sockets. In that case, would this still be called an agent? I was thinking about the server sending datagram packets to multiple clients.
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 90s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks; I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE, and I agree with the first answer: agent systems usually involve a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, choose a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow pages service: other agents don't have to know which agents are running and which services they supply; they can simply ask the DF.
JADE's ContractNet behaviours also help simplify communication.
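For instance, a minimal sketch of an agent registering a service with the DF might look roughly like this, assuming the JADE libraries are on the classpath; the "meeting-scheduling" service type and agent name are just examples:

```java
import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;

// Rough sketch of DF registration; not a complete agent implementation.
public class SchedulerAgent extends Agent {

    @Override
    protected void setup() {
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());

        ServiceDescription sd = new ServiceDescription();
        sd.setType("meeting-scheduling");            // what this agent offers
        sd.setName(getLocalName() + "-scheduler");
        dfd.addServices(sd);

        try {
            DFService.register(this, dfd);           // other agents can now find us via the DF
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void takeDown() {
        try {
            DFService.deregister(this);              // clean up the DF entry on shutdown
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```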
For some time I have been looking at the possibility of integrating PowerShell as a scripting engine in SharePoint, but I haven't found the right solution yet.
My main objective is to enable event triggers on, e.g., a list to call and execute a PowerShell script (by filename) on the local server. This would give me a lot of flexibility compared to using an ordinary event handler written in Visual Studio, but the question is whether it is possible and whether I have overlooked some serious security issues.
Since every unique idea I have come up with over the years has already been invented by somebody else, I might have missed an existing product/project, so any links to such projects would be appreciated. Thanks!
In the spirit of "already being invented by somebody else", check out http://www.codeplex.com/iLoveSharePoint for some very interesting uses of PowerShell inside SharePoint. Some great code samples and documentation. I haven't tried it myself yet, but it seems interesting.
I see what you're trying to achieve, but there's something that just doesn't "feel right" about a user indirectly running script code on your server.
The key difference is that the script can be run by anyone logging into the server. Event handlers can only be run by SharePoint. Strict validation of any inputs would be essential. You should also ensure the script is signed so tampered scripts won't execute.
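For example (not part of the original answer), you could require signed scripts on the server and sign the scripts you deploy; the certificate lookup and script path below are placeholders:

```powershell
# Allow only signed scripts to run on this server.
Set-ExecutionPolicy AllSigned

# Sign the deployed script with a code-signing certificate from the local store
# (placeholder store path and file name; use your organisation's certificate).
$cert = Get-ChildItem Cert:\LocalMachine\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath "C:\Scripts\ListEventHandler.ps1" -Certificate $cert
```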
Also, scripts by their nature aren't really designed for enterprise solutions. There is less opportunity for best practices such as good software architecture, design patterns, source control, code analysis, unit testing, and reuse of code. It's also messy/difficult to share code with a common code base that contains web parts, controls, entities, etc.
Finally, introducing PowerShell means another technology to be maintained in the mix we already have with SharePoint. This might be OK if you are comfortable with it.
Depending on how much customisation has already been done or is planned for the future, some of the points above may not matter. Be sure to think about how this idea would feel if implemented 6, 12 and 24 months down the track.