SQL Server 2000 Execution Plan: Statistics Missing

I have a situation in production where a procedure takes a different amount of time in two different environments. When I looked at the execution plan, some statistics were missing (the icons for them were shown in red to draw attention). Statistics are missing on both servers, but I noticed something in the message: a field called "number of executes" was 23 on the slow server and 1 on the fast server. Can someone please explain the significance of this?
Edit: Fragmentation is not the problem; when I checked, reorganizing would only relocate 2% of pages. The new server was created with merge replication. Please advise on "number of executes" in the execution plan and how we can work to reduce it.
Edit: Will rebuilding the indexes make any performance improvement?

SQL Server 2000 has had issues with statistics and some execution plans in the past, and you may have to add query hints to make sure execution happens the way you want. For starters, make sure you are on SP4, and then apply the following patch:
http://support.microsoft.com/kb/936232
While this patch is described as fixing an illegal-operation error (it resolves crashes on 64-bit machines running SQL 2000), it also resolves a few other execution plan issues. Ultimately, though, I would recommend upgrading to SQL 2008, which seems to have resolved a number of the statistics issues we used to encounter.
Here is a link that explains in more detail the number of executes:
http://www.qdpma.com/CBO/ExecutionPlanCostModel.html
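Regarding the edits about missing statistics and index rebuilds: as a minimal, hedged sketch (run from Python via pyodbc purely for illustration; the server, database, and table names are placeholders, not anything from the question), refreshing statistics and rebuilding a table's indexes on SQL Server 2000 could look like this, using the SQL 2000-era commands UPDATE STATISTICS and DBCC DBREINDEX:

# Hedged sketch: refresh statistics and rebuild indexes on SQL Server 2000 via pyodbc.
# SLOWSERVER, ProdDB, and dbo.Orders are placeholders, not names from the question.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SLOWSERVER;DATABASE=ProdDB;Trusted_Connection=yes",
    autocommit=True,  # simplest way to run DBCC outside an explicit transaction
)
cur = conn.cursor()

# Recreate the table's statistics from a full scan of its rows.
cur.execute("UPDATE STATISTICS dbo.Orders WITH FULLSCAN")

# Rebuild all indexes on the table (SQL 2000 syntax); this also refreshes
# the statistics that belong to those indexes.
cur.execute("DBCC DBREINDEX ('dbo.Orders')")

conn.close()

Whether this helps depends on whether the slow plan is actually driven by stale or missing statistics; the "number of executes" figure itself belongs to the optimizer's cost model described in the link above.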

SB37 abend in production and cannot change the space parameter

My colleague faced an issue where his sort job failed with an SB37 abend. I know this error can be rectified by allocating more space to the output file, but my question is:
How can I remediate an SB37 abend without changing the space allocation?
It takes a week or more to move changes to production, so I can't change the file's space allocation at the moment, since the error is in production.
An SB37 abend indicates an out of space condition during end-of-volume processing.
B37 Explanation: The error was detected by the end-of-volume routine. This system completion code is accompanied by message IEC030I. Refer to the explanation of message IEC030I for complete information about the task that was ended and for an explanation of the return code (rc in the message text) in register 15.
It is accompanied by message IEC030I, which provides more information about the issue.
Depending on a few factors, your production control team may be able to adjust the environment so that the job can run. Lacking more detail, it is impossible to give an exact answer, so consider this a roadmap for approaching the problem.
IEC030I B37-rc,mod, jjj,sss,ddname[-#],
dev,ser,diagcode,dsname(member)
In the message there should be a volser that identifies the volume being written to. If the production control team looks at the contents of that volume, there may be insufficient space that can be remedied by removing datasets. There are too many options to enumerate without specifics about the failure, the type of dataset, and other details to guide you.
However, as indicated in other comments, if you have a production control team that can run the job, they should be able to make changes to the JCL to direct the output dataset to another set of volumes or storage groups.
Changes to the JCL are likely the only way to correct the problem.

About managing file system space

Space Issues in a filesystem on Linux
Let's call it FILESYSTEM1.
Normally, FILESYSTEM1 is only about 40-50% used, but clients sometimes run reports or queries that produce massive files, around 4-5 GB in size, and these instantly fill up FILESYSTEM1.
We have some cleanup scripts in place, but they never catch this because it happens in a matter of minutes, and the cleanup scripts usually clean data that is more than 5-7 days old.
Another set of scripts reports when free space in a filesystem drops below a certain threshold.
We thought of possible solutions to detect and act on this proactively:
Increase the FILESYSTEM1 file system to double its size.
Set the alert scripts' threshold for this filesystem to trigger at 50% full.
This will hopefully give us enough time to catch the growth and act before the client reports issues due to FILESYSTEM1 being full.
Even though this solution works, it does not seem to be the best way to deal with the situation.
Any suggestions / comments / solutions are welcome.
Thanks.
It sounds like what you've found is that simple threshold-based monitoring doesn't work well for the usage patterns you're dealing with. I'd suggest something that pairs high-frequency sampling (say, once a minute) with a monitoring tool that can do some kind of regression on your data to predict when space will run out.
In addition to knowing when you've already run out of space, you also need to know whether you're about to run out of space. Several tools can do this, or you can write your own. One existing tool is Zabbix, which has predictive trigger functions that can be used to alert when file system usage seems likely to cross a threshold within a certain period of time. This may be useful in reacting to rapid changes that, left unchecked, would fill the file system.
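As a concrete illustration of that idea, here is a minimal sketch (not tied to Zabbix or any particular product; the mount point, sample interval, and alert window are assumptions) that samples usage once a minute, fits a linear trend over recent samples, and warns when the projected time-to-full drops below a chosen window:

# Minimal sketch: high-frequency sampling plus a linear trend to predict time-to-full.
# The mount point, sample interval, and alert window are assumptions for illustration.
import shutil
import time
from collections import deque

MOUNT = "/filesystem1"          # hypothetical mount point for FILESYSTEM1
INTERVAL = 60                   # sample once a minute
WINDOW = 15                     # keep the last 15 samples (~15 minutes of history)
ALERT_IF_FULL_WITHIN = 30 * 60  # warn if projected to fill within 30 minutes

samples = deque(maxlen=WINDOW)  # (timestamp, bytes_used) pairs

def slope(points):
    """Least-squares slope of bytes_used over time, in bytes per second."""
    n = len(points)
    mean_t = sum(t for t, _ in points) / n
    mean_u = sum(u for _, u in points) / n
    denom = sum((t - mean_t) ** 2 for t, _ in points)
    if denom == 0:
        return 0.0
    return sum((t - mean_t) * (u - mean_u) for t, u in points) / denom

while True:
    usage = shutil.disk_usage(MOUNT)
    samples.append((time.time(), usage.used))
    if len(samples) >= 3:
        growth = slope(samples)            # positive when the filesystem is filling up
        if growth > 0:
            seconds_left = usage.free / growth
            if seconds_left < ALERT_IF_FULL_WITHIN:
                print(f"WARNING: {MOUNT} projected to fill in ~{seconds_left / 60:.0f} min")
    time.sleep(INTERVAL)

The warning could feed into the same alerting mechanism your existing threshold scripts already use.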

Performance drop due to NotesDocument.closeMIMEEntities()

After moving my XPages application from one Domino server to another (both version 9.0.1 FP4 and with similar hardware), the application's performance dropped sharply. Benchmarks revealed that the execution of
doc.closeMIMEEntities(false,"body")
which takes ~0.1 ms on the old server, now takes >10 ms on average on the new one. This difference wouldn't matter if it were only a few documents, but when initializing the application I read more than 1000 documents, so the initialization time goes from less than 1 second to more than 10 seconds.
In the code, I use the line above to close the MIME entity without saving any changes after reading from it (no writing). The function always returns true on both servers. Still, it now takes more than 100x longer even though nothing in the entity has changed.
The fact that both server machines have more or less the same hardware, and that the replicas of my application contain the same design and data on both servers, leads me to believe that the problem has something to do with the Domino server's settings.
Can anybody help me with this?
PS: I always use session.setConvertMime(false) before opening the NotesDocument, i.e. the conversion from MIME to RichText should not be what causes the problem.
PPS: The HTTPJVMMaxHeapSize is the same on both servers (1024M) and there are several hundred MB of free memory. I only mention this in case someone thinks the problem might be related to running out of memory.
The problem is related to the "ImportConvertHeaders bug" in Domino 9.0.1 FP4. It has already been solved with Interim Fix 1 (as pointed out by #KnutHerrmann here).
It turned out that the old Domino server had Interim Fix 1 installed, while the "new" one had not. After applying the fix to the new Domino server the performance is back to normal and everything works as expected.

NetLogo 5.1 (and 5.0.5) BehaviorSpace Memory Leak

I have posted on this before, but thought I had tracked it down to the NW extension; however, memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set memory rises to the point where it hits the 1024 MB limit. I have sufficient memory to raise this limit, but in reality that would only delay the onset. I am using the table output, since previous discussions suggested this helps, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no hangover. I noticed in the highlighted thread that they were going to run headless; I will try this, but I wondered if anyone else has noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to complete.
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension, or not? Does it still happen if you remove some of the code from your model? What if you keep removing code: when does the problem go away? What is the smallest model that still causes the problem? Almost any bug can be demonstrated with only a small amount of code, and finding that smallest demonstration is exactly what is needed to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM to see what kind of objects are using the memory. This might provide some clues to possible causes.
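For step 2, one inexpensive first look, assuming a JDK is installed (so jmap is on the PATH) and you know the NetLogo JVM's process id, is to capture a class histogram between runs and compare which object counts keep growing. This is a generic JVM sketch, not anything NetLogo-specific:

# Rough sketch: periodically capture a JVM class histogram with jmap and keep the
# top entries, so you can see which object types keep growing between runs.
# Assumes a JDK (jmap on PATH) and that JVM_PID is the NetLogo JVM's process id.
import subprocess
import time

JVM_PID = "12345"   # placeholder: replace with the NetLogo JVM's process id
TOP_N = 20
SNAPSHOTS = 5
PAUSE = 300         # seconds between snapshots

for i in range(SNAPSHOTS):
    result = subprocess.run(
        ["jmap", "-histo:live", JVM_PID],
        capture_output=True, text=True, check=True,
    )
    # Keep the header plus the top TOP_N classes by retained instance count.
    top = "\n".join(result.stdout.splitlines()[:TOP_N + 3])
    print(f"--- snapshot {i} ---\n{top}\n")
    time.sleep(PAUSE)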
In general, we are not receiving other bug reports from users along these lines. It's routine, and has been for many years now, for people to use BehaviorSpace (both headless and not) to run experiments that last for hours or even days. So whatever you're experiencing almost certainly has a more specific cause, most likely in the nw extension, that could be isolated.

ArangoDB: Is it bad if data or index does not fit in RAM anymore?

Dear ArangoDB community,
I am wondering whether it is bad, when using ArangoDB, if the data + indexes grow too large and no longer fit in RAM. What happens then? Does it ruin system performance horribly?
For TokuMX, a very fascinating fork of MongoDB (TokuMX offers the ACID transactions I need), they say that's no problem at all! TokuMX clearly states on its website that it's no big deal if data + indexes do not fit in RAM.
Also, for MongoDB and TokuMX we can limit RAM usage with certain commands.
For my web project I would like to decide which database to use; I don't want to change later. The RAM of my database server is no more than 500 MB right now, and it is shared with the Node.js server, so both sit on one machine.
On that server I have one Node.js server and two database instances running, and I compare TokuMX and ArangoDB with the top command in Linux to check RAM usage. Both databases just have a tiny collection for testing. top reports ArangoDB at RES 128 MB, while TokuMX is at only 9 MB (!!); RES, I found out, means the physical RAM actually in use. So the difference is already huge. The virtual RAM usage is also enormously different: 5 GB for ArangoDB and just 300 MB for TokuMX. Thanks and many greetings.
ArangoDB is a database plus an API server. It uses V8 to extend the functionality of the database and define new APIs. In its default configuration on my Mac, it uses 3 scheduler threads and 5 V8 threads. This gives a resident size of 81 MB. Setting the scheduler threads to 1 (which is normally enough) reduces this to 80 MB. Setting the V8 instances to two
./bin/arangod -c etc/relative/arangod.conf testbase3 --scheduler.threads 1 --server.threads 2
reduces this to 34 MB, and setting it to 1 reduces it to 24 MB. The database itself therefore presumably uses less than 14 MB.
Any mostly-in-memory database (be it MongoDB, ArangoDB, or Redis) will see decreased performance if the data and indexes grow too large. It might be OK if your indexes still fit into memory and your working set of data is small enough, but if your working set becomes too large, your server will begin to swap and performance will drop. This problem exists in any mostly-in-memory database such as ArangoDB or MongoDB.
One additional question: would it be possible for you to use two different databases on one ArangoDB instance? It would be more memory efficient to start just one server with two databases in it.
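If two databases on one instance is an option, a rough sketch of creating a second database over ArangoDB's HTTP API could look like the following (host, port, credentials, and the database and collection names are placeholders); collections in the second database are then addressed under its own /_db/<name>/ prefix:

# Rough sketch: create a second database on an already-running ArangoDB instance
# via the HTTP API, instead of starting a second server process.
# Host, port, credentials, and names below are placeholders.
import requests

BASE = "http://localhost:8529"
AUTH = ("root", "password")   # placeholder credentials

# Create the database (this call runs against the _system database).
resp = requests.post(f"{BASE}/_api/database", json={"name": "seconddb"}, auth=AUTH)
resp.raise_for_status()

# Create a collection inside the new database; note the /_db/seconddb/ prefix.
resp = requests.post(f"{BASE}/_db/seconddb/_api/collection",
                     json={"name": "testcollection"}, auth=AUTH)
resp.raise_for_status()
print("created database 'seconddb' with collection 'testcollection'")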
Hello and thanks for your answer. I will use one ArangoDB instance on my back-end server together with the Node instance only, as you recommended; no MongoDB there anymore. I am sure I will have some questions in the future again. So far I have to say that arangosh is not as easy to handle as the mongo shell; its commands are a little cryptic to me. But the ACID transactions and the ability to run JavaScript server-side in production are a very big plus and really cool, and that is actually why I use ArangoDB. Right now my Node.js server starts the ACID transactions, which send the action and parameters to ArangoDB. To keep the action block small, I created a JavaScript module and put it on ArangoDB in the directory you told me about last time, and that's great; that little self-written module does all the ACID transactions. But do you agree that I could probably increase performance even more by writing a Foxx app for the ACID transactions? Right now I think the self-made module needs to be loaded every time I call it before it can perform the transactions, whereas a Foxx application stays in RAM and does not get reloaded for every hit. Do you agree?
