Polarion: Is it possible to create custom jobs to be used by the scheduler?

I know about the preconfigured jobs that exist for Polarion. I would now like to add a custom job that deletes specific work items created before a given date, exactly like the "polarion.jobs.testruns.delete" job but for other work items.
Is it possible to create a custom job that I can add to the scheduler in Polarion? And if yes, how would I approach such a task?

This has been answered here in the Siemens Polarion forum.
Chapter 6.1.3 of the SDK Guide provides instructions and an example of how to create custom jobs. The SDK Guide can be found here.

Related

Using Terraform as an API

I would like to use Terraform programmatically, like an API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way or best practices for structuring the configuration to support this. So far it seems that my options are:
Properly define inputs/outputs and rely heavily on resource separation, modules, the count parameter, and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!
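For reference, option 2 is less exotic than it sounds: Terraform reads *.tf.json files with the same semantics as HCL, so a generator can be a few lines of Python. A minimal sketch, where the aws_eip resource and its attributes are illustrative rather than a recommendation:

```python
import json

# Build a Terraform configuration as a plain dict; Terraform treats
# *.tf.json files as equivalent to HCL. Resource names and attributes
# here are illustrative only.
config = {
    "provider": {"aws": {"region": "us-east-1"}},
    "resource": {
        "aws_eip": {
            # 'count' reserves several EIPs from one block
            "reserved": {"count": 2, "vpc": True}
        }
    },
    "output": {
        "eip_ids": {"value": "${aws_eip.reserved.*.id}"}
    },
}

with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```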
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, or Travis CI to manage the release of the infrastructure Terraform manages. The reason is that you should treat your Terraform code in exactly the same manner as application code (i.e. give it a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is a tool such as Rundeck, which allows you to execute arbitrary commands on a server. It has the added bonus of an excellent privilege-control system that only allows specified users to execute commands. You could also upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for automating creation/teardown of your infrastructure through an API, they are the same regardless of which tools you are using. You mentioned some good ones already, such as clearly defining inputs/outputs and creating a separation of concerns. Some others I can recommend (see the sketch after this list) are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start that way than to retrofit scalability later, when it is too late.
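However you wrap it, the "API" to open-source Terraform is ultimately its CLI, so the step-by-step workflow from the question can be scripted directly. A minimal sketch in Python, where the infra directory and the aws_eip.reserved target address are assumptions:

```python
import json
import subprocess

def terraform(*args, cwd="infra"):
    """Run a terraform command and fail loudly on errors."""
    subprocess.run(["terraform", *args], cwd=cwd, check=True)

# Step 1: reserve the EIPs. -target limits the run to one resource;
# the address assumes a resource block named aws_eip.reserved.
terraform("init")
terraform("apply", "-auto-approve", "-target=aws_eip.reserved")

# Step 2: create the instance and association in a later, separate run.
terraform("apply", "-auto-approve")

# Read declared outputs back as JSON for the next tool in the pipeline.
out = subprocess.run(["terraform", "output", "-json"], cwd="infra",
                     capture_output=True, text=True, check=True)
print(json.loads(out.stdout))
```

The same commands slot directly into a Jenkins/Bamboo/Travis pipeline stage, which is why the pipeline approach above costs little extra.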

Ant script for message broker monitoring

Context
I want to develop an automated script for broker (IIB 9/10) resource monitoring, capturing information about broker running status, deployed message flows, JVM usage, number of threads running, etc.
The initial thought is to have a report generated using scripts and then displayed in a browser.
Question
Can this be done entirely with Ant scripts (I am not sure, as I have not explored iterative processing in Ant in detail), or is a combination of Ant and batch/shell scripts the best bet?
I know the web user interface in IIB 10 does most of this, but I want to add some features.
I suggest you take a look at message flow statistics and accounting:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac19100_.htm?lang=en
This is a feature of IIB by which it emits statistics about its message flows. The statistics are published to a topic in a well-defined XML format. I would try solving your requirement by writing an application that reads these messages and uses the data in them to generate your graphs or other reports.
There is a SupportPac, IS03, which can give you an idea of such an application.
This will not cover everything you mentioned; for example, monitoring which flows are deployed cannot be achieved this way, but it gives a comprehensive view of the load and performance of your applications:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj10440_.htm?lang=en
And there is a resource statistics feature as well for monitoring resources used by your applications:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj43310_.htm?lang=en
I think you will need a variety of tools to get everything. You can use resource statistics and accounting/statistics, as suggested by Attila, to get JVM and thread usage. The broker publishes updates to a topic, so you can create a simple subscriber to grab that info.
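A sketch of such a subscriber using the third-party pymqi package. The queue manager, channel, and broker names are assumptions, and the $SYS/Broker/... topic string follows the pattern IIB uses for resource statistics, but verify it against your Knowledge Center version:

```python
import pymqi

# Connection details are assumptions -- adjust to your environment.
qmgr_name = "IB9QMGR"
channel = "SYSTEM.DEF.SVRCONN"
conn_info = "localhost(1414)"
# IIB publishes resource stats under $SYS/Broker/<broker>/ResourceStatistics/...
topic = "$SYS/Broker/MYBROKER/ResourceStatistics/default"

qmgr = pymqi.connect(qmgr_name, channel, conn_info)

# Non-durable managed subscription: MQ creates and cleans up the queue.
sd = pymqi.SD()
sd["Options"] = (pymqi.CMQC.MQSO_CREATE
                 | pymqi.CMQC.MQSO_NON_DURABLE
                 | pymqi.CMQC.MQSO_MANAGED)
sd.set_vs("ObjectString", topic)

sub = pymqi.Subscription(qmgr)
sub.sub(sub_desc=sd)

gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_NO_SYNCPOINT | pymqi.CMQC.MQGMO_WAIT)
gmo["WaitInterval"] = 30000  # milliseconds

try:
    while True:
        xml_payload = sub.get(None, pymqi.MD(), gmo)  # blocks up to WaitInterval
        print(xml_payload)  # parse the XML here and feed your report
except pymqi.MQMIError as e:
    if e.reason != pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
        raise
finally:
    sub.close(sub_close_options=0, close_sub_queue=True)
    qmgr.disconnect()
```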
For deployment-related info, stop/start state, and so forth, I would look at building simple Integration API or REST API applications to call from Ant.
You can find documentation for these APIs here:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_10.0.0/com.ibm.etools.mft.doc/be43410_.htm?lang=en
and here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/SSMKHH_10.0.0/com.ibm.etools.mft.restapi.doc/index.html
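For a taste of the REST side, the IIB 10 admin interface listens on an HTTP port (4414 by default) and serves JSON. The host and resource path below are assumptions for illustration, so take the exact URIs from the Knowledge Center links above:

```python
import requests

# Host and resource path are assumptions; IIB 10's web admin port
# defaults to 4414. Consult the Knowledge Center for the exact URIs.
base = "http://iib-host:4414"

resp = requests.get(base + "/apiv1/executiongroups", timeout=10)
resp.raise_for_status()

# The shape of the JSON depends on the resource; dump it while exploring.
print(resp.json())
```

A script like this is easy to call from Ant with the exec task, which keeps the Ant side down to simple orchestration.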

How to integrate a deployment automation tool into Puppet?

We are a mixed Linux/Windows shop that successfully adopted Puppet for config management a while ago. We'd like to drop Ansible in as our deployment orchestration tool (research suggests that Puppet doesn't do this very well) but have questions about how to integrate the two products.
Today, Puppet is the source of truth with respect to environment info (which nodes belong to which groups, etc.). I want to avoid duplicating this information in Ansible. Are there any best practices with regard to sharing environment information between the two products?
One way to reduce the amount of duplicated state between the systems is to use Ansible's "Dynamic Inventory" support. Instead of defining your hosts/groups in a text file, you use a script that pulls the same data from somewhere else. This could be PuppetDB, Foreman, etc., and will depend on your environment.
Writing a new script is also pretty simple; it just needs to be an executable (bash/Python/Ruby/etc.) that returns JSON in a specific format.
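The contract is small: the script must answer --list with the full inventory (and --host <name>, which Ansible skips when a _meta section is present). A minimal sketch, with hard-coded data standing in for a PuppetDB or Foreman query:

```python
#!/usr/bin/env python
import json
import sys

def build_inventory():
    # In a real script this data would come from PuppetDB, Foreman, etc.
    return {
        "webservers": {
            "hosts": ["web01", "web02"],
            "vars": {"http_port": 80},
        },
        "dbservers": {"hosts": ["db01"]},
        # With _meta present, Ansible never calls the script with --host.
        "_meta": {
            "hostvars": {
                "web01": {"ansible_host": "10.0.0.11"},
                "web02": {"ansible_host": "10.0.0.12"},
                "db01": {"ansible_host": "10.0.0.21"},
            }
        },
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        print(json.dumps({}))  # per-host vars are already served via _meta
    else:
        print(json.dumps(build_inventory()))
```

Make it executable and point Ansible at it, e.g. ansible -i inventory.py webservers -m ping.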
Lastly, it is possible to roll out new releases with Puppet, but it is easier with a "microservice"-like release process. Ensuring apps/services/databases remain backwards compatible across versions can make pushing out releases with Puppet and your favorite package manager trivial.
Using Puppet and MCollective should be the way to go if you are looking for a solution from Puppet Labs:
https://puppetlabs.com/mcollective

Scheduling tasks in Microsoft CRM 2011

I need to schedule the import/export of contacts in Microsoft CRM 2011 (online and on-premises).
I plan to create a custom entity to store the scheduled tasks, and a form to set them up (similar to Windows Task Scheduler).
I am not sure how I can actually execute the scheduled tasks. Does CRM 2011 have a service or API I could use to schedule tasks? The solution must work in CRM 2011 online and on-premises. Thank you so much.
Directly from a former Microsoft product team member (Gonzalo Ruiz): there is no out-of-the-box scheduling engine in CRM.[1]
So the answer is no. I recently asked a similar question, and for several reasons our team decided the best way to go was solution 1: an external task manager (Windows has a few native solutions for this), which works for both the on-premises and online versions. Drawback: you should probably have a reliable server-type machine on which to host the task manager.
As linked, you can use solution 2, recurring workflows, to achieve a similar result, but there are some drawbacks to this route as well, some of which are mentioned in Gonzalo's blog.
As Peter mentioned, recurring workflows can help here. Setting up a workflow as a child workflow that calls itself after a suitable timeout can create the required conditions.
You can potentially have a configuration entity within CRM that stores the "time to next run", with the workflow triggered to run on update of this attribute (this is useful if the scheduling period is likely to be non-linear). If the timescales are linear, you can simply implement them in the workflow, or have the workflow update the aforementioned attribute before completion so that the child invocation waits for the appropriate time period.

MonoTouch: Example of an implementation of a finite-length task executing when exiting an iOS application?

In Apple's docs I have found an article about how to execute a finite-length task when exiting the application. I am looking for a way to adapt that to MonoTouch.
The idea is to process some data if the user pushes the app into the background, but that processing takes longer than the time I'm granted by default. Hence I want to use the functionality described here: http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/BackgroundExecution/BackgroundExecution.html to get more time.
How does the code translate to MonoTouch? Does anybody have an example?
I have a blog post with an example for MonoTouch: http://software.tavlikos.com/2010/12/08/multitasking-in-ios-the-monotouch-way-part-i/
