PostgreSQL query is taking too long to execute on Linux server

I have recently deployed a PostgreSQL database to a Linux server, and one of the stored procedures takes around 24 to 26 seconds to fetch its result. Previously I had deployed the same PostgreSQL database to a Windows server, and the same stored procedure took only around 1 to 1.5 seconds.
In both cases I tested with the same database and the same amount of data, and both servers have the same configuration (RAM, processor, etc.).
While executing the stored procedure on the Linux server, CPU usage goes to 100%.
Execution Plan for Windows:
Execution Plan for Linux:
Let me know if you have any solution for this.

It might also be because of JIT coming into play on the Linux server but not on Windows. Check whether the query execution plan on the Linux server includes information about JIT.
If it does, check whether the same appears in the Windows plan. If not, then I suspect that is the case.
JIT might be adding more overhead, so try changing the JIT parameters such as jit_above_cost and jit_inline_above_cost to values appropriate for your system, or disable JIT completely by setting
jit=off
or
jit_above_cost = -1
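For example, a minimal sketch (my_slow_function is a placeholder for your actual procedure; the JIT section only appears in PostgreSQL 11 and later):

-- Look for a "JIT:" section at the bottom of the plan output
EXPLAIN (ANALYZE) SELECT * FROM my_slow_function();

-- Disable JIT for the current session only and compare timings
SET jit = off;
EXPLAIN (ANALYZE) SELECT * FROM my_slow_function();

-- To disable it server-wide instead:
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();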

The culprit seems to be
billservice.posid = pos.posid
More specifically, it is doing a sequential scan on the pos table where it should be doing an index scan.
Check whether you have indexes on these two fields in the database.
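A quick way to check, and to create them if they are missing (a sketch: the index names are made up, and the table/column names are taken from the join condition above):

-- List the existing indexes on both tables
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('pos', 'billservice');

-- Create the indexes if they are absent
CREATE INDEX IF NOT EXISTS idx_pos_posid ON pos (posid);
CREATE INDEX IF NOT EXISTS idx_billservice_posid ON billservice (posid);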

Related

Does Cassandra need all the servers to have the same time?

I have a .NET application on a Windows machine and a Cassandra database on a Linux (CentOS) server. The Windows machine's clock is sometimes a couple of seconds behind, and when that happens, delete or update queries do not take effect.
Does Cassandra require all servers to have the same time? Does the Cassandra driver send my query with a timestamp? (I just write simple delete or update queries, with no timestamp or TTL.)
Update: I use the Datastax C# driver
Cassandra operates on the "last write wins" principle. The system time is therefore critically important in determining which writes succeed. Google for "cassandra time synchronization" to find articles that explain why time synchronization is important and suggest a method to solve the problem using an internal NTP pool. The articles specifically refer to AWS architecture, but the principles apply to any Cassandra installation.
Client timestamps are used to order mutation operations relative to each other.
The DataStax C# driver uses a generator to create them, so it's important for all hosts running the client drivers (the origin of the execution request) to have their clocks in sync with each other.
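If clock skew between client machines is unavoidable, CQL also lets you attach an explicit timestamp to a mutation so that ordering no longer depends on the client clock. A sketch (the keyspace, table, and key are made up; the timestamp value is microseconds since the epoch):

-- Pin the write time explicitly instead of relying on the client clock
DELETE FROM my_keyspace.my_table USING TIMESTAMP 1456789012345678 WHERE id = 42;
UPDATE my_keyspace.my_table USING TIMESTAMP 1456789012345679 SET col = 'v' WHERE id = 42;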

Firebird DB - monitoring tables

I have recently started working with Firebird DB v2.1 on a Linux RedHawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and I was hoping for some advice...
First off, I have read through most of the documentation that comes with Firebird and much of the documentation provided on their site. I tried the supplied gstat tool, but it didn't seem to give me the kind of information I was looking for. Then I ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. This is where I hit a snag...
After logging into the database via isql, I run SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and get some numbers that seem okay. However, upon running the command again, the data appears to be stale because the numbers do not update. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only after logging off and back on does the data change. It appears that the data refreshes only on a re-login, and even then I am not sure the data is correct.
My question is: am I even doing this correctly? Are these commands truly monitoring my database, or just the command itself? Also, why does it take a re-login to refresh the statistics? One thing that worried me was inconsistency in the data: my system was running steadily, yet each time I logged on the reads/writes were not increasing linearly; they would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
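In isql that looks like this (a minimal sketch):

SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;
COMMIT; -- ends the current transaction; isql starts a new one for the next statement
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; -- fresh snapshot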
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.

Would changing local OS time affect the database?

Currently our stand-alone 11g R2 Oracle database has the wrong time because the local OS server (Linux Red Hat) also has the wrong time (off by several minutes).
Can I just ask a sysadmin to change the OS time by several minutes, or does that affect the database? Does the database need to be restarted after the local OS time has been changed? Does the database need to be down while doing this?
Changing the operating system time won't impact the Oracle database itself and doesn't require any downtime.
Changing the operating system time may, however, impact the applications that are running in the Oracle database. You would need to talk with the owner(s) of those application(s) to determine whether there would actually be an impact. If, for example, an application depends on some DATE column indicating the order in which rows are inserted and/or modified, moving the clock back by a few minutes may cause data issues for the application where a row was modified before it was inserted or the last update isn't actually the last update.
Your best bet is probably to get an outage window, shut down Oracle, set up NTP, then restart Oracle.
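A quick way to confirm that the database simply follows the OS clock, before and after the change (run from SQL*Plus):

-- SYSDATE and SYSTIMESTAMP are read from the OS clock of the database host
SELECT SYSDATE, SYSTIMESTAMP FROM dual;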

Sybase tempdb log segment filling

I have a Sybase ASE server that hangs every week or so, indicating that the tempdb log segment is full.
I have tried everything. trunc log on chkpt is enabled and it works correctly, resetting used_pages about every 60 seconds or so.
The problem is that not all of the freed pages are returned to free_pages. So, over time, free_pages eventually reaches 0 while used_pages stays minimal. The values I'm referring to come from running sp_spaceused syslogs in tempdb. It's like a memory leak!
Currently when I run this command I get:
total_pages: 64000
free_pages: 29719
used_pages: 251
reserved_pages: 0
Every time I run the command, used_pages increases, which is also odd.
This database is running on 64-bit Windows Server 2003. I have another similarly configured ASE server, with similar contents, that does not have these issues; it runs on 32-bit Windows Server 2003. There should be no need to move tempdb to a different device or expand its size any further, because this other server operates perfectly and is configured the same as the one showing the odd behavior.
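For reference, a minimal isql sequence that produces the kind of output quoted above:

use tempdb
go
sp_spaceused syslogs
go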
It depends on the application that is running on this ASE.
Try monitoring the application with the ASE monitoring (MDA) tables.
Have a look at this very detailed presentation: http://download.sybase.com/presentation/TW2005/ASE115.pdf
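As a starting point, a hedged sketch of querying the MDA tables (they require the monitoring configuration options to be enabled and the proxy tables to be installed; the column selection follows the standard monProcessActivity definition):

select SPID, CPUTime, LogicalReads, PhysicalReads
from master..monProcessActivity
order by CPUTime desc
go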

CPU usage of Oracle database machine

I am using Oracle 11g and I have an application coded in the Spring framework. When I configure the database on a Sun Fire X4170 running Linux, the machine's CPU utilization is around 80-100%. However, when I move the same database to a Sun M3000 server running Unix (supposedly a more powerful machine), application performance goes down and CPU utilization stays at 90-100%. I can't figure out whether it is the application or the database design that is causing such utilization.
I should add that the database is not used relationally; the relationships are handled by the application.
Well you certainly can find some interesting opinions on the intertubes.
Oracle does not have a true server architecture (others have it). Rather than performing classic server tasks, such as multi-threading, caching of data pages, parallel processing (split a query across many devices) etc. within itself, it uses the o/s to do all that. That means for each user process (PL/SQL connection) there is one unix process; 1000 users means 1000 unix processes, all competing for the same resources.
You might note that Oracle has had:
- a connection pooling architecture (multi-threaded server) since version 7 (1992)
- a cache for data pages (known, helpfully, as the buffer cache) since forever
- parallel query (splitting a query across many processes) since version 7.1 (1993)
- query splitting across multiple servers since OPS (version 6), and across distributed databases since version 5
It is also noteworthy that even if all of that were correct rather than incorrect, it would not actually help you determine the root cause.
Especially noteworthy: because it uses file system files (not raw partitions), and the "caching" is outside, it relies heavily on (and is very sensitive to) the file system cache that you have set up. Likewise, Oracle needs a massive amount of memory for these processes.
Oracle certainly can use raw partitions, again dating back to the last millennium. Moreover, if you wish to cache within the database (using the buffer cache that PerformanceDBA has forgotten about) and bypass the filesystem cache, that capability is available on all current filesystems. Oracle also supplies its own combined filesystem/volume manager, ASM, which you can use if you wish.
Oracle is also rather well instrumented (and if you have access to dtrace, so is Solaris). It can tell you which sessions and processes are using the CPU, and what the time the application spends in the database is consumed by (down to individual block read times, if you care), so it lends itself very well to profiling. I'd recommend that you check out Thinking Clearly about Performance, available at http://www.method-r.com/downloads/cat_view/38-papers-and-articles and written by one of the top Oracle performance experts in the world. If you have access to the Oracle Diagnostics Pack, then checking out ADDM reports first and AWR reports second would be profitable.
Trying to avoid a flame war here.
I should probably have separated the "how to find out" part of my response more clearly from my responses to PerformanceDBA's comments about server architecture. I share Stephanie's suspicions about the Spring framework, but without properly scoped measurement evidence there is no point in blaming any particular attribute of the environment; that would just be bias. Fortunately, the instrumentation built into the Oracle kernel allows you to trace and then profile the slow sessions to determine exactly where the issue lies. So I would do the following:
1) Enable tracing for a representative session (you can use the dbms_monitor package for that).
2) Also gather an execution plan for the statement(s) involved, using the gather_plan_statistics hint.
3) Profile the trace file by time using an appropriate profiler (tkprof, OraSRP, Method R Profiler).
Investigate the problem statements in order of contribution to response time; a sketch of these steps follows below.
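A hedged sketch of those steps from SQL*Plus (the SID/serial# values and the traced query are placeholders; tkprof is run afterwards from the OS shell):

-- 1) Enable extended SQL trace for the target session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45, waits => TRUE, binds => FALSE);

-- 2) For a single statement, you can also capture the actual row counts
SELECT /*+ gather_plan_statistics */ * FROM some_app_table WHERE id = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

-- Stop tracing when done
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45);

-- 3) Then profile the trace file from the shell, e.g.:
--    tkprof mydb_ora_12345.trc report.txt sort=exeela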
If you can't carry out the above, then use ADDM and/or AWR if you are licensed for the Diagnostics Pack, as I originally suggested, or Statspack if you are not. ADDM naturally concentrates on the top time consumers; if you are forced down the Statspack route, I suggest you do the same.
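If you are licensed, an AWR report is generated with the standard script from SQL*Plus (the Statspack equivalent is spreport.sql, after installing Statspack with spcreate.sql):

@?/rdbms/admin/awrrpt.sql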
The M3000 is certainly a more powerful machine, but it is more suitable for true servers. The X4170 with hyper-threading is more suited for file servers.
I'm not so certain about that. Do you have any data to support that claim?
An M3000 has one SPARC64 VII processor with 4 cores (tech specs), while an X4170 has 1 or 2 Intel Xeon 5500 "Nehalem-EP" processors, each with 4 cores (tech specs). I know that I would expect much more from even a single-processor Nehalem-EP system than from the M3000. Obviously the data will vary somewhat with the workload, but I know where I'd put my money.
