Would changing local OS time affect the database? - linux

Currently our stand-alone 11g R2 Oracle database has the wrong time because the local OS server (Red Hat Linux) also has the wrong time (off by several minutes).
Can I just ask a sysadmin to change the OS time by several minutes, and does that affect the database? Does the database need to be restarted after the local OS time has been changed? Does the database need to be down while doing this?

Changing the operating system time won't impact the Oracle database itself and doesn't require any downtime.
Changing the operating system time may, however, impact the applications that are running in the Oracle database. You would need to talk with the owner(s) of those application(s) to determine whether there would actually be an impact. If, for example, an application depends on some DATE column indicating the order in which rows are inserted and/or modified, moving the clock back by a few minutes may cause data issues for the application where a row was modified before it was inserted or the last update isn't actually the last update.
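As a hypothetical illustration of that failure mode (the table and column names below are made up for this sketch), a query like the following could surface rows whose audit columns became inconsistent after the clock moved back:

-- Hypothetical schema: orders(order_id, created_at DATE, updated_at DATE).
-- After setting the clock back, a row updated during the overlap window
-- can end up with updated_at earlier than created_at:
SELECT order_id, created_at, updated_at
FROM   orders
WHERE  updated_at < created_at;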

Your best bet is probably to get an outage window, shut down Oracle, set up NTP, then restart Oracle.

Related

PostgreSQL query is taking too long to execute on LINUX server

I have recently deployed a PostgreSQL database to a Linux server, and one of the stored procedures takes around 24 to 26 seconds to fetch the result. Previously I had deployed the PostgreSQL database to a Windows server, where the same stored procedure took only around 1 to 1.5 seconds.
In both cases I tested with the same database and the same amount of data, and both servers have the same configuration (RAM, processor, etc.).
While executing my stored procedure on the Linux server, CPU usage goes to 100%.
Execution plans for Windows and Linux were attached as screenshots (not reproduced here).
Let me know if you have any solution for this.
It might also be because of JIT coming into play on the Linux server and not on Windows. Check whether the query execution plan on the Linux server includes information about JIT.
If yes, check whether the same is true in the Windows version. If not, then I suspect that is the case.
JIT might be adding more overhead, so try changing the JIT parameters such as jit_above_cost and jit_inline_above_cost to values appropriate for your system, or disable JIT completely by setting
jit=off
or
jit_above_cost = -1
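A minimal sketch of how to test this at the session level (assumes PostgreSQL 11 or later, where JIT exists; the join condition is taken from the other answer in this thread, and the count(*) wrapper is only illustrative since the full stored procedure isn't shown):

-- Compare timings for the same query with JIT on and off:
EXPLAIN (ANALYZE) SELECT count(*) FROM billservice b JOIN pos p ON b.posid = p.posid;  -- look for a "JIT:" section in the output
SET jit = off;  -- disable JIT for this session only
EXPLAIN (ANALYZE) SELECT count(*) FROM billservice b JOIN pos p ON b.posid = p.posid;  -- re-run and compare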
The culprit seems to be
billservice.posid = pos.posid
More specifically, it's doing a sequential scan on the pos table when it should be doing an index scan.
Check whether you have indexes on these two fields in the database.
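A hedged sketch using the table and column names from the join condition above (check which side actually lacks the index before creating both; the index names are made up):

-- A sequential scan on pos suggests a missing index on its join key:
CREATE INDEX IF NOT EXISTS idx_pos_posid ON pos (posid);
CREATE INDEX IF NOT EXISTS idx_billservice_posid ON billservice (posid);
ANALYZE pos;          -- refresh planner statistics so the new index is considered
ANALYZE billservice;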

Does Cassandra need all the servers to have the same time?

I have a .NET application on a Windows machine and a Cassandra database on a Linux (CentOS) server. The Windows machine's clock is sometimes a couple of seconds in the past, and when that happens, delete or update queries do not take effect.
Does Cassandra require all servers to have the same time? Does the Cassandra driver send my query with a timestamp? (I just write simple delete or update queries, with no timestamp or TTL.)
Update: I use the DataStax C# driver
Cassandra operates on the "last write wins" principle, so the system time is critically important in determining which writes succeed. Searching for "cassandra time synchronization" turns up articles that explain why time synchronization matters and suggest solving the problem with an internal NTP pool. Those articles refer specifically to AWS architecture, but the principles apply to any Cassandra installation.
The client timestamps are used to order mutation operations relative to each other.
The DataStax C# driver uses a timestamp generator to create them, so it's important for all hosts running the client drivers (the origin of the execution request) to have their clocks in sync with each other.
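A small CQL sketch of the mechanism (the keyspace, table, and values are hypothetical; the explicit USING TIMESTAMP clause stands in for the timestamp the driver normally generates):

-- Last write wins: the mutation with the larger timestamp prevails.
DELETE FROM app.users USING TIMESTAMP 1600000000000001 WHERE id = 42;
-- An UPDATE carrying a smaller timestamp, e.g. from a client whose clock
-- lags by a couple of seconds, is silently ignored:
UPDATE app.users USING TIMESTAMP 1600000000000000 SET name = 'x' WHERE id = 42;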

Windows 10 Excel 2016 - SQL Server 2008 R2 Log in suddenly slow

I have a connection between Excel 2016 and SQL Server 2008 R2 and use it to load some queries into different sheets.
Everything used to work perfectly, but this morning I started getting huge delays when refreshing the queries: Excel goes to "Not Responding", freezes, and around 30 seconds later finally refreshes. It drives me crazy, because this happens for every single query and I am refreshing around 40 of them...
My colleagues have the same file and do not experience any delay.
I am running Windows 10, they are running both Windows 10 and Windows 7.
I did a system restore from last week (when it used to work fine) - same behavior...
Any help would be appreciated...
If you have restored your PC, the only elements left unchanged are, as far as I understand, the contents of the sheets and whatever you cannot reset.
If your colleagues have the same files with the same software and no problem, consider checking your hardware. Since the system restore did nothing and the problem does not come from the files themselves, only your physical machine remains hypothetically faulty.
Also, take a look at performance while the queries are being executed by using the Task Manager (in its detailed view). This might give you a general idea of what is going on.
Try executing the queries against another server/database if you can and compare the results.
After trying several things, including updating drivers, a system restore, a new ODBC setup, and restarting the SQL Server services, all with no results, I decided to restart the server itself and the problem is now gone...
I can only assume there was some bug with the SQL Server instance itself.
Thanks to everyone for helping. This question is now closed.

Firebird DB - monitoring table

I have recently started working with Firebird DB v2.1 on a Linux RedHawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and I was hoping for some advice...
First off, I have read through most of the documentation that comes with the Firebird DB and a lot of the documentation provided on their site. I tried using the gstat tool, which is supplied, but it didn't seem to give me the kind of information I was looking for. I then ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. Yet this is where I started to hit a snag in my progress...
After logging into the db via isql, I ran SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and got some numbers that seemed okay. However, upon running the command again, the data appeared to be stale because the numbers were not updating. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only once I logged off and back on to run the command again did the data change. It appears the data refreshes only on a relog, and even then I am not sure it is correct.
My question is: am I even doing this correctly? Are these commands truly monitoring my db, or just monitoring the command itself? Also, why does it take a relog to refresh the statistics? One thing I was worried about was inconsistency in my data. In other words, my system was running steadily, yet each time I logged on the reads/writes were not increasing linearly. They would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.
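A minimal isql sketch of the commit-and-requery pattern described above (the SELECT is the one from the question):

-- The first query creates the snapshot for this transaction:
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;
-- End the transaction so the next query takes a fresh snapshot:
COMMIT;
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;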

sybase tempdb log segment filling

I have a Sybase ASE server that hangs every week or so, indicating tempdb log segment is full.
I have tried everything. trunc log on chkpt is enabled and it works correctly, resetting used_pages about every 60 seconds or so.
The problem is, not all the pages freed are returned to free_pages. So, over time, free_pages eventually ends up at 0, while used_pages is minimal. The values I'm referring to come from the query sp_spaceused syslogs on tempdb. It's like a memory leak!
Currently when I run this command I get:
total_pages: 64000
free_pages: 29719
used_pages: 251
reserved_pages: 0
Every time I run the command, used_pages increases which is also odd.
This database is running on 64-bit Windows Server 2003. I have another similarly configured ASE server that does not have these issues; the contents of that database are similar, and it runs on 32-bit Windows Server 2003. There should be no need to move tempdb to a different device or expand its size further, because that other server operates perfectly and is configured the same as the one showing the odd behavior.
It depends on the application that is running on this ASE.
Try monitoring the application with the ASE monitoring (MDA) tables.
Take a look at this very advanced presentation: http://download.sybase.com/presentation/TW2005/ASE115.pdf
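A hedged T-SQL sketch of the kind of sampling a cron-driven monitor could do (sp_spaceused syslogs is the command quoted in the question; monOpenDatabases is one of the ASE MDA tables and assumes MDA monitoring has been enabled on the server):

-- Log-space figures as quoted in the question:
use tempdb
go
sp_spaceused syslogs
go
-- Sample log activity for tempdb from the MDA tables:
select DBName, AppendLogRequests, AppendLogWaits
from master..monOpenDatabases
where DBName = 'tempdb'
go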
