In our Cognos environment there are many reports scheduled to be delivered to users' email IDs. How can I track all of those reports without doing a lot of manual work?
Using the audit package's Run Jobs and Run Reports, I am not able to track reports that are scheduled for email bursting, as Run Report doesn't provide information on how a report is burst.
Thanks in advance.
The recipients are not directly stored in the content store anywhere... they are pulled from a query within the report.
If the report is saving output, you can set it to retain the saved outputs for a few days. Every version that was sent out will be saved.
Outside that, you will have to do manual work. Get into the report, find the query that feeds into the burst, and run the query manually to see who it is built for.
If you have multiple burst reports in your environment, consider switching to a table that stores all of your burst recipients. Monitoring and maintaining burst users then becomes a simple query away.
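A minimal sketch of what such a recipients table and a monitoring query could look like (SQL Server syntax shown; the table and column names here are hypothetical, not a Cognos convention, so adapt them to your environment):

-- Hypothetical table of burst recipients (all names are assumptions)
CREATE TABLE BurstRecipients (
    ReportName     VARCHAR(255) NOT NULL,
    RecipientEmail VARCHAR(255) NOT NULL,
    BurstKey       VARCHAR(255) NULL,
    IsActive       BIT          NOT NULL DEFAULT 1
);

-- Monitoring then becomes a single query, e.g. active recipients per report
SELECT ReportName, COUNT(*) AS RecipientCount
FROM BurstRecipients
WHERE IsActive = 1
GROUP BY ReportName;

The burst query in each report would then read from this table instead of embedding the recipient logic in the report itself.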
I use Power Automate to refresh a Power BI dashboard when I update its input files on SharePoint.
Since the inputs are generated via another automated process, the updates follow each other very closely, which triggers the start of one Power Automate job per updated file.
This situation leads to failed jobs since they are all running at the same time. Even worse, the first job succeeds but the refresh then happens when not all files are updated yet. It is also a waste, since I would only need one to run.
To accommodate for that, I tried to introduce a delay in the job. This makes sure that when the first job runs, refreshing Power BI will work. However, the subsequent runs all fail, so I would still like to find a way not to run them at all.
Can anyone guide me in the right direction?
For this requirement, you can set the SharePoint trigger to run only one instance at a time. Please refer to the steps below:
1. Click "..." button of the trigger and click "Settings".
2. Enable the "Concurrency Control" limit and set the Degree of Parallelism to 1.
Then your logic app cannot run multiple instances at the same time.
Just checked my Kentico database (Azure hosting) and it ballooned to 21GB. This happened fairly recently since 4 months ago it was just a bit above 1GB.
Checked the tables and my Event Log table has over 2,000,000 entries!!!
Nothing has changed recently, my settings under Settings -> System -> Event Log are still the same:
Event Log Size: 1000
Since globals are also set to 1000, usually I have 2000 or so entries in the event log table.
Anyone knows what happened here? And how to stop it from happening?
If you have online marketing turned on and have a popular site, there will be lots of data in the OM_ tables. But 20GB still sounds huge. Have lots of asset files, like videos, been added to the Content Tree? Also, is the database set to log all transactions? What kind of log entries are most common: errors or information logs? Do you have some custom code that could produce lots of log entries?
You can also email Kentico support to get a "check big tables" SQL script which can help you find out which are the large tables.
You should look into a few other areas as well. Having 2MM event log records won't cause a 20GB jump in DB size from a Kentico perspective as the event logs are pretty minimal data.
Take a look in the analytics, version history, email queue, web farm and scheduled task tables. Also check out the recycle bin. Are you integrating with any other system or inserting/updating a lot of data via the API? If so, this could cause a lot of transactional log files to build up. With Azure SQL I don't know of a way to clean those up.
My suggestion is to check other tables and not just the event log. Maybe query the event log manually via SSMS and see what the top 100 events are; that might help you find the problem. If you need to, you could also clear the log, either through the UI or by manually truncating the table in SSMS.
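For reference, a rough SQL Server / Azure SQL sketch of the kind of queries that help here (standard system views; the Kentico event log table is CMS_EventLog, but column names can differ between versions):

-- Largest tables by allocated space
SELECT TOP 20
    t.name AS TableName,
    SUM(CASE WHEN i.index_id IN (0, 1) THEN p.rows ELSE 0 END) AS RowCounts,
    SUM(a.total_pages) * 8 / 1024 AS TotalSpaceMB
FROM sys.tables t
JOIN sys.indexes i ON t.object_id = i.object_id
JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY TotalSpaceMB DESC;

-- Which event codes dominate the Kentico event log
SELECT TOP 100 EventCode, EventType, COUNT(*) AS Occurrences
FROM CMS_EventLog
GROUP BY EventCode, EventType
ORDER BY Occurrences DESC;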
In Kentico (9) when I run the task "Delete inactive contacts" it never actually runs and the result is always "Rescheduled to delete more contacts in next off-peak period"
I've tried changing the settings to run once a week and I've tried creating a custom IDeleteContacts then setting it to use that custom class, but I always get the same result.
Any ideas?
By default, Kentico runs its scheduled tasks at the tail end of regular web requests. That's fine if you have traffic 24/7. If you don't, then you can run into all kinds of nastiness, including the issue you're describing now, because scheduled tasks are not executing.
If you're running on a Windows server, you can set up a service to trigger scheduled tasks. If that's not an option, you can set up monitoring to hit your site every couple of minutes, for example UptimeRobot or Application Insights. You'll get the added bonus of being notified whenever the site goes down.
If you really need to clean up the EMS contacts because it's getting out of control, you can access the database directly and trigger the same stored procedure that the scheduled task uses. It's called [Proc_OM_Contact_MassDelete] and takes a where clause and a batch size. The where clause is where you specify the delete policy. For example
ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='')
With this where clause the stored proc would process contacts that were created over 60 days ago and don't have an e-mail address yet.
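A sketch of calling the procedure directly from SSMS; note that the parameter names below are assumptions, so check the procedure definition in your database before running anything:

-- Sketch only: verify the parameter names and order against the actual
-- definition of Proc_OM_Contact_MassDelete in your content database.
EXEC [Proc_OM_Contact_MassDelete]
    @where = N'ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='''')',
    @batchSize = 1000;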
Please be aware that large volumes of EMS data will require database index tuning for this procedure to run within an acceptable period of time. This is true for EMS in general when your site has a decent amount of traffic.
If the standard Kentico cleanup doesn't work, for example because the database is unable to deal with millions of contacts, we've written a script to purge all EMS data. Use with caution ;)
Have you applied the latest hotfix (9.0.50) to your project? There was a bug where, if the deletion of inactive contacts took longer than 1 minute, the next run of the "Delete inactive contacts" scheduled task was not set and the task did not execute again. You can download the package directly from this page: https://devnet.kentico.com/download/hotfixes
The "Delete inactive contacts" scheduled task only runs between 2am and 6am based on the servers time the site is running on. You can see this in the documentation. It only ever deletes a batch of 1000 contacts and never more. If you want to "trick" the site into running the scheduled task more, update the time on the server to 1:58am and restart the site.
In Cognos, I would like to write the output of 100 tables to 100 flat files. I am planning to create a project that will be scheduled every day. When it runs, it will write the tables' output to flat files.
Can you please tell me whether this is possible or not? If so, can you please specify an approach to solve this problem?
Thanks
Ram
Create a report for each table. I presume you know how to do it.
On the Cognos Connection portal, create a job with all your reports and set up the output for all your reports.
Schedule execution of your new job.
Task 1 is described in the Report Studio User Guide (and the Framework Manager Developer Guide if you don't have a model for your tables).
Tasks 2-3 are described in the Cognos Connection User Guide. Check out "Use Jobs to Schedule Multiple Entries".
I have recently just started working with firebird DB v2.1 on a Linux Redhawk 5.4.11 system. I am trying to create a monitor script that gets kicked off via a cron job. However I am running into a few issues and I was hoping for some advice...
First off, I have read through most of the documentation that comes with the Firebird DB and a lot of the documentation that is provided on their site. I have tried using the gstat tool, which is supplied, but that didn't seem to give me the kind of information I was looking for. I ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. Yet this is where I started to hit a snag in my progress...
After logging into the DB via isql, I run SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and get some numbers which seem okay. However, upon running the command again, it appears the data is stale because the numbers do not update. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only once I logged off and back on and ran the command again did the data change. It appears that the data refreshes only on a relog, and even then I am not sure it is correct.
My question now is: am I even doing this correctly? Are these commands truly monitoring my DB, or just monitoring the command itself? Also, why does it take a relog to refresh the statistics? One thing I was worried about was inconsistency in my data. In other words, my system was running, yet each time I logged on the read/writes were not increasing linearly; they would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
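For example, in an isql session you can end the current transaction between queries so that each SELECT builds a fresh snapshot (a minimal sketch):

-- The first query takes a snapshot that stays stable for this transaction
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;

-- End the transaction...
COMMIT;

-- ...so the next query creates a new snapshot with current counters
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;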
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
(emphasis mine)
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.