I have a report configured for caching via Scheduled Updates. The caching appears to complete after about 2 hours, but when I try to run the report (and I've double-checked that I'm running it from the same location that the scheduled update references), it begins loading the entire dataset instead of pulling from the cache. Any suggestions on what might be causing Spotfire to ignore the cached dataset?
This issue was due to the lack of a schedule on the rule. Thank you to Mark P. for directing me to the schedules and explaining how they work!
Related
I use Power Automate to refresh a Power BI dashboard when I update its input files on SharePoint.
Since the inputs are generated via another automated process, the updates follow each other very closely, which triggers the start of one Power Automate job per updated file.
This leads to failed jobs, since they are all running at the same time. Even worse, the first job succeeds, but the refresh then happens before all the files have been updated. It is also wasteful, since I only need one job to run.
To accommodate for that, I tried introducing a delay in the job. This makes sure that when the first job runs, refreshing Power BI will work. However, the subsequent runs all fail, so I would still like to find a way not to run them at all.
Can anyone point me in the right direction?
For this requirement, you can set the SharePoint trigger to run only one instance at a time. Please refer to the steps below:
1. Click the "..." button of the trigger and click "Settings".
2. Enable the "Concurrency Control" limit and set the Degree of Parallelism to 1.
Then your logic app cannot run multiple instances at the same time.
I tried caching an information link using the Spotfire Analyst caching option in Information Designer.
I have set the timer to 7200 s and don't have a validation query.
Unfortunately, when I try to open/import the data, it still tries to load it from scratch.
I know that the first time you need to wait for the data to load, but even after that first load, when I try to load the data again, it still pulls everything from scratch, so I have to wait 4-5 minutes for 4 GB of data to load.
I checked the Spotfire Server logs and it seems that the cache is being used, but I don't know why it still takes so much time.
Is there anything I can do to figure out what's happening?
Sometimes this happens because the Attachment Manager settings in the server configuration tool are left at their defaults, which limit how long cached data is kept and the total amount of storage caching can consume on the server. I find these default values too low and increase them substantially.
So I have a Postgres database on which I have installed an audit table - source: https://wiki.postgresql.org/wiki/Audit_trigger_91plus
Now my question is as follows:
I have been wanting to create a sort of stream that notifies me of any changes made by any application that has access to my DB. Now, I know that I can create a trigger and a pub/sub via Postgres, but that takes up processing time, and that overhead can become significant as the DB scales.
So instead of slowing down the actual DB, I was wondering about taking the same NOTIFY/LISTEN functionality I would have put on the main tables and installing it on the audit tables instead.
Has anyone ever done this? If so, what have you experienced - pros? cons? Or, if anyone knows why I should or should not do this, please let me know.
Thanks
Via NOTIFY/LISTEN, the pros:
Light communication with the server; no need to poll for data changes.
Via NOTIFY/LISTEN, the cons:
Practice shows that it is not sufficient to just set it up and listen for events, because every so often the channel goes down due to various communication problems. For a serious system, you would need to put an additional monitoring service in place that verifies your listeners are still operating and, if not, destroys the existing ones and creates new ones. This can be tricky, and you probably won't find a good example of doing it.
Via scheduled data pulls, the pros:
Simplicity - you just check for data changes according to the schedule;
Reliability - there is nothing to break, once the pull implementation is working.
Via scheduled data pulls, the cons:
Additional traffic for the server, depending on how quickly you need to see the data changes and on how that would interfere (if at all) with other requests to the server.
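For what it's worth, here is a minimal sketch of the audit-table variant, assuming the default audit.logged_actions table that the wiki's trigger creates; the channel name, the JSON payload, and the event_id watermark are illustrative only, and json_build_object needs PostgreSQL 9.4+ (a plain-text payload works on older versions):

    -- Sketch only: assumes audit.logged_actions from the wiki's audit trigger
    -- (that schema has event_id, table_name, action, action_tstamp_tx columns).

    -- NOTIFY/LISTEN variant: the trigger lives on the audit table, not on the
    -- hot tables, so the main tables get no additional trigger.
    CREATE OR REPLACE FUNCTION audit.notify_audit_row() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify(
            'audit_changes',                         -- illustrative channel name
            json_build_object('event_id', NEW.event_id,
                              'table',    NEW.table_name,
                              'action',   NEW.action)::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER audit_notify
        AFTER INSERT ON audit.logged_actions
        FOR EACH ROW EXECUTE PROCEDURE audit.notify_audit_row();

    -- Client side:
    LISTEN audit_changes;

    -- Scheduled-pull variant: poll for audit rows newer than the last
    -- event_id the consumer has already processed (12345 is a placeholder).
    SELECT event_id, table_name, action, action_tstamp_tx
    FROM audit.logged_actions
    WHERE event_id > 12345
    ORDER BY event_id;

The point is that only the audit table carries the extra NOTIFY trigger, so the hot tables are untouched beyond the audit trigger they already have; the polling query is the fallback if the LISTEN channel turns out to be too fragile, as described above.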
I have recently started working with Firebird 2.1 on a Linux RedHawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and I was hoping for some advice...
First off, I have read through most of the documentation that comes with Firebird and a lot of the documentation provided on their site. I have tried using the supplied gstat tool, but that didn't seem to give me the kind of information I was looking for. I then ran across the README.monitoring_tables file, which seemed to be exactly what I wanted to monitor. Yet this is where I started to hit a snag in my progress...
After logging into the DB via isql, I ran SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and got some numbers that seemed okay. However, upon running the command again, the data appeared stale: the numbers were not updating. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only after I logged off and back on and ran the command again did the data change. It appears that the data only refreshes on a relog, and even then I am not sure it is correct.
My question is: am I even doing this correctly? Are these commands truly monitoring my DB, or are they just monitoring the command itself? Also, why does it take a relog to refresh the statistics? One thing I was worried about was inconsistency in the data: my system was running, yet each time I logged on, the reads/writes were not increasing linearly. They would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
(emphasis mine)
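A quick way to see this in isql (assuming you are already connected to the database): run the query twice in the same transaction, then COMMIT and run it again.

    -- isql implicitly starts a transaction, so the first query takes the snapshot:
    SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;

    -- Same transaction: returns the same, stale snapshot.
    SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;

    -- End the transaction; the next query creates a fresh snapshot.
    COMMIT;
    SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;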
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.
My Drupal cache tables are always empty. Cron is running OK. How is this possible? What can I do so that they receive data?
Thanks
Go to Site configuration -> Performance -> Caching mode: Normal
You only want to turn on caching for production. If you are a developer and you want to test things, make sure caching is turned off.
Running cron will actually clear the cache tables. So if you have cron set up to run at a high frequency (every few hours), your cache tables will be emptied as well. Keep in mind that system_cron() calls cache_clear_all().
So make sure:
caching mode: Normal
cron is running every day (should be adequate for most sites)
But Drupal also caches some other things like:
CSS files (useful for IE, which only loads the first 31 stylesheets, while a Drupal site will definitely have more than that)
JavaScript files - I have never used this, not even in production, but it should make sense for a high-traffic website
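As a quick sanity check, you can also look at the tables directly; here is a sketch assuming a Drupal 6-era schema, where anonymous page views land in cache_page (table and column names may vary by version and enabled modules):

    -- Recent page-cache entries:
    SELECT cid, created, expire
    FROM cache_page
    ORDER BY created DESC
    LIMIT 10;

    -- The generic cache table (theme registry, variables, etc.):
    SELECT COUNT(*) AS cached_rows FROM cache;

Note that the page cache is only populated for anonymous visitors, so browse a few pages while logged out before checking.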