I have a Sybase ASE server that hangs every week or so, reporting that the tempdb log segment is full.
I have tried everything. trunc log on chkpt is enabled and it works correctly, resetting used_pages about every 60 seconds.
The problem is that not all the freed pages are returned to free_pages, so over time free_pages eventually ends up at 0 while used_pages stays minimal. The values I'm referring to come from running sp_spaceused syslogs on tempdb. It's like a memory leak!
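Specifically, the command is:

use tempdb
go
sp_spaceused syslogs
go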
Currently when I run this command I get:
total_pages: 64000
free_pages: 29719
used_pages: 251
reserved_pages: 0
Every time I run the command, used_pages increases, which is also odd.
This server runs on 64-bit Windows Server 2003. I have another similarly configured ASE server, with similar database contents, that does not have these issues; it runs on 32-bit Windows Server 2003. Since that other server operates perfectly with the same configuration, I don't believe there's any need to move tempdb to a different device or expand its size any further.
It depends on the application that is running on this ASE.
Try monitoring the application with the ASE monitoring (MDA) tables.
Have a look at this very advanced presentation: http://download.sybase.com/presentation/TW2005/ASE115.pdf.
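For example, a rough starting point with the MDA tables to find the sessions generating the most log activity (this assumes the MDA tables are enabled; the column names are from the ASE documentation, so adjust for your version):

select SPID, TempDbObjects, ULCBytesWritten, Transactions
from master..monProcessActivity
order by ULCBytesWritten desc

A session that keeps climbing in ULCBytesWritten between samples is a good candidate for the one eating your tempdb log.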
I have code that's been running on several thousand customer PCs for about 6 years that uses ODBC to SQL INSERT a blob into MariaDB databases. All well and good.
Since moving the MariaDB servers to Azure (managed "Azure Database for MariaDB" server) a few months ago, I sometimes see the SQL INSERT generate an ODBC error, "Server has gone away" or "lost connection", depending on the MySQL ODBC connector version (5.3 and 8.0 respectively).
Customers see this more rarely, but I see it here often, because I tend to upload larger blobs and, possibly, because I'm on the other side of the Atlantic and hitting some network-level timeout. But I need to get to the bottom of it, as it does occasionally happen to customers: about once a month, and with blobs as small as 1 MB.
As per other posts hereabouts, I have increased the following parameters, but it still fails (the effective server values can be cross-checked as shown after the list):
explicit_defaults_for_timestamp TRUE
connect_timeout 240
net_write_timeout 2400
wait_timeout 240
max_allowed_packet 1024M
interactive_timeout 1800
net_read_timeout 3000
delayed_insert_timeout 3000
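For reference, a quick way to confirm the server is actually applying these settings:

SHOW GLOBAL VARIABLES LIKE '%timeout%';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';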
MariaDB v10.3.23
One reason I think this is an Azure/network issue rather than ODBC/MariaDB is that, since the switch to Azure, I've been experiencing a similar issue copying large files from the clipboard to any of our VMs: copying a 35 MB file fails 4 out of every 5 attempts at varying points in the copy, and the RDP connection errors and reconnects. I have other ways of uploading the files in question, so that's not the problem, but for the time being I'm assuming it's the same underlying Azure/network issue as the ODBC error.
I’ve checked with my ISP and they can see no network issues my end.
I have recently deployed a PostgreSQL database to a Linux server, and one of the stored procedures is taking around 24 to 26 seconds to fetch the result. Previously I had deployed the same PostgreSQL database to a Windows server, where the same stored procedure takes only around 1 to 1.5 seconds.
In both cases I tested with the same database and the same amount of data, and both servers have the same configuration (RAM, processor, etc.).
While executing the stored procedure on the Linux server, CPU usage goes to 100%.
Execution Plan for Windows:
Execution Plan for Linux:
Let me know if you have any solution for this.
It might also be because of JIT coming into play on the Linux server and not on Windows. Check whether the query execution plan on the Linux server includes information about JIT.
If yes, check whether the same is true in the Windows version. If not, then I suspect that is the cause.
JIT might be adding more overhead, so try changing the JIT parameters, such as jit_above_cost and jit_inline_above_cost, to values appropriate for your system, or disable JIT completely by setting
jit = off
or
jit_above_cost = -1
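A quick way to test this without touching the server configuration is at the session level (a sketch; re-run your procedure afterwards and compare timings):

SET jit = off;
-- or raise the threshold so JIT never kicks in:
SET jit_above_cost = -1;
-- if EXPLAIN (ANALYZE) no longer shows a "JIT:" section and the
-- runtime drops, JIT was the overhead.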
The culprit seems to be
billservice.posid = pos.posid
More specifically, it's doing a Sequential Scan on the pos table when it should be doing an Index Scan.
Check whether you have indexes on these two fields in the database.
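If they are missing, something like this is worth trying (hypothetical index names; the columns are taken from the join condition above):

CREATE INDEX idx_billservice_posid ON billservice (posid);
CREATE INDEX idx_pos_posid ON pos (posid);

Then re-run EXPLAIN ANALYZE to confirm the plan switches to an Index Scan.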
I have a requirement to write a load test measuring message transmission latencies. In order to simulate a large number of simultaneous users without running into thread contention problems on one box, I'm spinning up multiple servers in Azure.
When I got my first results back, I was a little shocked to see that the results indicated the message was received before it was sent. I immediately realized that, while I had an implicit assumption that all the VMs would have their clocks synced to within milliseconds, that was clearly not the case.
I've spent several hours googling ways to resolve this, and I'm not getting anywhere. One thought was to have each VM query the time on a central server using NetRemoteTOD(), and then establish a per-machine correction factor to be added to the time measured from the local machine's clock. However, when I tried to run that method, I got an error 2184, "The service has not been started". I have verified that both the RPC service and the Windows Time service are running on both the client and target machines, and I have not been successful in finding any information indicating what other service needs to be running (or even whether the error really means what it seems to mean). (I also get the same error when running between my development desktop and a server on our corporate network. However, I can run it successfully against a PDC on the corporate network - but I can't find a PDC on Azure, since neither machine is part of a domain.)
So, does anyone have any information on what service needs to be started to get NetRemoteTOD (or the Windows NET TIME command, which relies on NetRemoteTOD under the covers) working? Alternatively, does anyone have a suggestion for some other technique to get a consistent time reference across multiple VMs in Azure? (Note, I don't necessarily need their clocks synced; I just need a way to establish a consistent correction factor to reference the times to a common source. Note also, I need sub-second accuracy - probably about 100 msec will do.) Basically, I just need a Windows function or shell command that will get me the time to sub-second accuracy on a given remote server.
Thanks in advance.
PS. Azure servers are running Server 2008 R2 SP1
Currently our stand-alone 11g R2 Oracle database has the wrong time, because the local OS server (Red Hat Linux) also has the wrong time (off by several minutes).
Can I just ask a sysadmin to change the OS time by several minutes, or does that affect the database? Does the database need to be restarted after the local OS time has been changed? Does the database need to be down while doing this?
Changing the operating system time won't impact the Oracle database itself and doesn't require any downtime.
Changing the operating system time may, however, impact the applications that are running in the Oracle database. You would need to talk with the owner(s) of those application(s) to determine whether there would actually be an impact. If, for example, an application depends on some DATE column indicating the order in which rows are inserted and/or modified, moving the clock back by a few minutes may cause data issues for the application where a row was modified before it was inserted or the last update isn't actually the last update.
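For instance, with a hypothetical table that records insert times (names invented purely for illustration):

insert into orders (id, created_at) values (1, sysdate);
-- the OS clock is set back several minutes here
insert into orders (id, created_at) values (2, sysdate);
-- row 2 now carries an earlier created_at than row 1, so ordering
-- by created_at no longer reflects the actual insert order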
Your best bet is probably to get an outage window, shut down Oracle, set up NTP, then restart Oracle.
I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time (more than 15 minutes, and sometimes fails due to timeouts) to create a sub-site, even with the built-in site templates, while on the development server it only takes 1 to 2 minutes.
Both servers have the same configuration, with 8-core CPUs and 8 GB of RAM. Both use separate database servers with the same configuration. The content DB size is around 100 GB, and there are more than a hundred subsites.
What could be the reason the production server takes so much time? Is there any configuration or something else I need to take care of?
Update:
So today I had the chance to check the environment with my clients. But site creation was fast this time, though they said they hadn't changed any configuration on the server.
I also used that chance to examine the database. The disk fragmentation was quite high at 49%, so I suggested that they run a defrag. I also asked for the database file growth to be increased to 100 MB, up from the default 1 MB (something like the statement below).
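(A sketch; the database and logical file names here are placeholders for the actual content database:)

ALTER DATABASE WSS_Content
MODIFY FILE (NAME = N'WSS_Content', FILEGROWTH = 100MB);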
So my suspicion is that some processes were previously running heavily on the server, and that's why it took so much time.
Update 2:
Yesterday my client reported that the site creation was slow again, so I went to check it. When I checked the DB, I found that instead of the reported 100 GB, the content DB size is only around 30 GB. So it's still far below the recommended limit.
One thing that got my attention is that the site collection recycle bin was holding almost 5 million items. Whenever I tried to browse the site collection recycle bin, it took a long time to open and the whole site collection became inaccessible.
Since the web application setting is set to the default (30 days before cleaning up, and 50% size for the second stage recycle bin), is this normal or is this a potential problem also?
Actually, there is also another web application using the same database server with a 100 GB content DB, and it's always fast, while the one with the 30 GB DB is slow. Both have the same setup, only different data.
What should I check next?
Yes, it's normal OOB if you haven't turned the Second Stage Recycle Bin off or set a site quota. If a site quota has not been set, then the growth of the Second Stage Recycle Bin is not limited.
The Second Stage Recycle Bin is by default limited to 50% of the site quota; in other words, if you have a site quota of 100 GB, you would have a Second Stage Recycle Bin of up to 50 GB. If a site quota has not been set, there are no growth limitations.
I second everything Nat has said and emphasize splitting the content database. There are instructions on how to do this, provided you have multiple site collections and not a single massive one.
Also check that your SharePoint databases are in good shape. Have you tried DBCC CHECKDB? Do you have SQL Server maintenance plans configured to reindex and reduce fragmentation? Read these resources on TechNet (particularly the database maintenance article) for details.
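For example (a rough sketch; WSS_Content is a placeholder for your actual content database name):

USE WSS_Content;
DBCC CHECKDB WITH NO_INFOMSGS;

-- index fragmentation per table (SQL Server 2005 and later)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;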
Finally, see if there is anything more you can do to isolate the SQL Server as the problem. Are there any other applications with databases on the same SQL Server and are they having problems? Are you running performance monitoring on the SQL Server or SharePoint servers that show any bottlenecks?
Back up the production database, restore it to dev, and attach it to your dev SharePoint server.
Then try to create a site. If it takes forever there too, you can assume there is a problem with the prod database; if it's fast, the database itself is probably fine.
Regardless, at 100 GB you are running up against the recommended limit for a content database and should be planning to put content into more than one. You will know why when you try to back up the database. Searching should also be starting to take a good long time by now.
So long term you are going to have to plan on splitting your websites out into different content databases.
--Responses--
Yeah, the database size limit is all just about SQL Server handling it. 100 GB is just the "any more than this and it starts to be a pain" rule of thumb. Full search crawls will also start to take a while.
Given that you do not have access to the production database and that creating a sub-site is primarily a database operation, there is nothing you can really do to figure out what the issue is.
You could try creating a subsite while doing a trace of the Dev database and look at the tables those commands reference to see if there is a smoking gun, but without production access you are really hampered.
Does the production system serve pages and documents at a reasonable speed?
See if you can start getting some stats from the database during the creation to find out what work is being done. SQL Server has some great tools for that now.
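For example, something along these lines shows what is executing while the site is being created (a sketch using the SQL Server 2005+ DMVs):

SELECT r.session_id, r.status, r.wait_type, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- ignore this monitoring query itself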