I have recently started working with Firebird DB v2.1 on a Linux RedHawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and was hoping for some advice...
First off, I have read through most of the documentation that comes with the Firebird DB and a lot of the documentation provided on their site. I tried the supplied gstat tool, but it didn't seem to give me the kind of information I was looking for. Then I ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. Yet this is where I hit a snag in my progress...
After logging into the DB via isql, I run SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and get some numbers that seem okay. However, upon running the command again, the data appears stale: the numbers do not update. I waited 1 minute, 5 minutes, 15 minutes, and the data stayed the same each time. Only after I logged off and back on and ran the command again did the data change. It appears the data refreshes only on a re-login, and even then I am not sure it is correct.
So my question is: am I even doing this correctly? Are these commands truly monitoring my DB, or just monitoring the command itself? Also, why does it take a re-login to refresh the statistics? One thing that worried me was inconsistency in the data: my system was running, yet each time I logged on, the reads/writes were not monotonically increasing. They would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
(emphasis mine)
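In isql, for example, a minimal sketch looks like this: commit the implicit transaction between selects, and the snapshot is rebuilt.

SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;
COMMIT;
-- a new transaction starts here, so this select sees fresh statistics
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;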
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.
Related
I recently deployed a PostgreSQL database to a Linux server, and one of the stored procedures takes around 24 to 26 seconds to fetch its result. Previously, I had deployed the same PostgreSQL database to a Windows server, where the same stored procedure takes only around 1 to 1.5 seconds.
In both cases I tested with the same database and the same amount of data, and both servers have the same configuration (RAM, processor, etc.).
While executing the stored procedure on the Linux server, CPU usage goes to 100%.
Execution plan for Windows: (screenshot not included)
Execution plan for Linux: (screenshot not included)
Let me know if you have any solution for this.
It might also be because of JIT coming into play on the Linux server and not on Windows. Check whether the query execution plan on the Linux server includes information about JIT.
If it does, check whether the same is true in the Windows version. If not, then I suspect that is the cause.
JIT might be adding more overhead, so try changing the JIT parameters, such as jit_above_cost and jit_inline_above_cost, to values appropriate for your system, or disable JIT completely by setting
jit=off
or
jit_above_cost = -1
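For example, a minimal sketch in a psql session (the query is a hypothetical placeholder; adapt it to your stored procedure):

-- see whether JIT is enabled on this server
SHOW jit;
-- look for a "JIT:" section at the bottom of the plan output
EXPLAIN (ANALYZE) SELECT * FROM my_table;
-- disable JIT for the current session only
SET jit = off;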
The culprit seems to be
billservice.posid = pos.posid
More specifically, it's doing a Sequential Scan on the pos table when it should be doing an Index Scan.
Check whether you have indexes on these two fields in the database, as sketched below.
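A hedged sketch, with the column names taken from the join condition above (adjust the names to your actual schema):

-- index the join columns so the planner can choose an Index Scan
CREATE INDEX idx_pos_posid ON pos (posid);
CREATE INDEX idx_billservice_posid ON billservice (posid);
-- then re-check the plan with EXPLAIN ANALYZE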
I have a .NET application on a Windows machine and a Cassandra database on a Linux (CentOS) server. The Windows machine's clock is sometimes a couple of seconds in the past, and when that happens, delete or update queries do not take effect.
Does Cassandra require all servers to have the same time? Does the Cassandra driver send my query with a timestamp? (I just write a simple delete or update query, with no timestamp or TTL.)
Update: I use the Datastax C# driver
Cassandra operates on the "last write wins" principle. The system time is therefore critically important in determining which writes succeed. Google "cassandra time synchronization" to find results like this and this, which explain why time synchronization is important and suggest a method to solve the problem using an internal NTP pool. The articles refer specifically to AWS architecture, but the principles apply to any Cassandra installation.
Client timestamps are used to order mutation operations relative to each other.
The DataStax C# driver uses a generator to create them, so it's important for all the hosts running the client drivers (the origin of the execution request) to have their clocks in sync with each other.
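The real fix is clock synchronization (e.g. NTP) on every client and server host. As an illustration only, CQL also lets you attach an explicit write timestamp so ordering does not depend on the client clock (keyspace, table, and values below are hypothetical):

-- the timestamp is microseconds since the epoch; the later timestamp wins
DELETE FROM my_keyspace.my_table USING TIMESTAMP 1501234567890123 WHERE id = 1;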
So I have a Postgres database on which I have installed an audit table - source https://wiki.postgresql.org/wiki/Audit_trigger_91plus
Now my question is as follows:
I have been wanting to create a sort of stream that notifies me of any changes made by any application that has access to my DB. Now, I know that I can create a trigger and a pub/sub via pg, but that costs performance, and the cost can become significant as the DB scales.
So instead of slowing down the actual DB, I was wondering about installing the same NOTIFY/LISTEN functionality I would have put on the main tables on the audit tables instead.
Has anyone ever done this? If so, what have you experienced, pros and cons? Or if anyone knows why I should or should not do this, please let me know.
Thanks
Via NOTIFY/LISTEN, the pros:
Light communication with the server; no need to poll for data changes.
Via NOTIFY/LISTEN, the cons:
Practice shows that it is not sufficient to just set it up and listen for events, because every so often the channel goes down due to various communication problems. For a serious system you would need to put in place an additional monitoring service that verifies your listeners are still operating and, if not, destroys the existing ones and creates new ones. This can be tricky, and you probably won't find a good example of doing it.
Via scheduled data pulls, the pros:
Simplicity: you just check for data changes according to the schedule;
Reliability: there is nothing to break there; once the pull implementation is working, it keeps working.
Via scheduled data pulls, the cons:
Additional traffic for the server, depending on how quickly you need to see data changes and how that would interfere (if at all) with other requests to the server.
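If you do go the NOTIFY/LISTEN route on the audit tables, a minimal sketch could look like the following. It assumes the audit.logged_actions table created by the wiki's audit trigger; the channel and function names are arbitrary.

CREATE OR REPLACE FUNCTION audit.notify_audit_event() RETURNS trigger AS $$
BEGIN
  -- send only the event id; listeners fetch the full audit row themselves
  PERFORM pg_notify('audit_events', NEW.event_id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_notify
  AFTER INSERT ON audit.logged_actions
  FOR EACH ROW EXECUTE PROCEDURE audit.notify_audit_event();

-- a client then subscribes with:
LISTEN audit_events;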
We have a couple of production CouchDB databases that have blown out to 30 GB and need to be compacted. These are used by a 24/7 operations website and are replicated with another server using continuous replication.
From tests I've done, it'll take about 3 minutes to compact these databases.
Is it safe to compact one side of the replication while the production site and replication are still running?
Yes, this is perfectly safe.
Compaction works by writing the compacted state to a new database file and then updating pointers. This is because CouchDB has a very firm rule that the internals of the database file are never updated, only appended to, with an fsync. This is why you can rudely kill CouchDB's processes and it doesn't have to recover or rebuild the database the way you would in other solutions.
This means that you need extra disk space available to rewrite the file. So trying to compact a CouchDB database to get rid of full-disk warnings is usually a non-starter.
Also, replication uses the internal representation of the sequence trees (B+trees); the replicator does not stream the entire database file from disk onto the network pipe.
Lastly, there will of course be an increase in system resource utilization. However, your tests should have shown you roughly what this costs on your system versus an idle CouchDB, which you can use to determine how close compaction pushes your system to the breaking point.
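For reference, compaction is triggered per database over HTTP, in the same $HOST/db style used in the curl example further down (the database name is a placeholder):

curl -H "Content-Type: application/json" -X POST $HOST/db/_compact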
I have been working with CouchDB for a while, replicating databases and writing views to fetch data.
I have observed its replication behavior, and what I saw can answer your question:
In the replication process, previous revisions of the documents are not replicated to the destination; only the current revision is.
Compacting the database only removes the previous revisions, so it will not cause any problem.
Compaction is done on the database you are currently connected to, so it should not affect its replica, which is continuously listening for changes: replication listens for current-revision changes, not previous revisions. To verify this, you can try the following:
Firing this query will show you the changes for all sequences of the database. It works only on the basis of the latest revision changes, not the previous ones (so I think compaction will not do any harm):
curl -X GET $HOST/db/_changes
The result is simple:
{"results":[
],
"last_seq":0}
More info can be found here: CouchDB Replication Basics
This might help you understand it. In short, the answer to your question is YES, it is safe to compact a database under continuous replication.
Hi, I have my application running perfectly on my production server. I updated the application some 2 days ago, and since then I have experienced a performance issue: when you click the button, the query behind that button needs a minute or more to pull the result, so my application shows a timeout error. Yet the same application runs fine on my local machine.
I don't think it's a query-optimization issue, since it's a simple select query with joins between 2 tables, pulling some 40-50 records.
I am using a SQL Server 2012 database. Is there any setting that needs to be changed on it?
Could be anything, dude, without having all the information. E.g. a recently degraded disk on the DB server, fragmented indexes, inefficient joins, an inefficient query...
Here are some general tips...
Use Performance Monitor (PerfMon) to capture and review any long-running queries
DBCC FREEPROCCACHE
http://msdn.microsoft.com/en-AU/library/ms174283.aspx
DBCC INDEXDEFRAG
http://msdn.microsoft.com/en-au/library/ms177571.aspx
or, more intrusively, rebuild indexes with DBCC DBREINDEX
http://msdn.microsoft.com/en-us/library/ms181671.aspx
Review the health of the database server, in particular the hard drives where the data and log files reside. Also, check the server's CPU usage.
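As a minimal T-SQL sketch of the DBCC steps above (the database, table, and index names are hypothetical placeholders):

-- clear the plan cache so queries get fresh execution plans
DBCC FREEPROCCACHE;
-- defragment a specific index (online, less intrusive)
DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'IX_Orders_CustomerId');
-- or fully rebuild all indexes on a table (more intrusive)
DBCC DBREINDEX ('dbo.Orders');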