We have a massive spreadsheet that does a lot of calculation and very little drawing/writing to the sheets.
My question is: does monitoring the spreadsheet while it is running via RDP actually make it slower?
Put differently, if the RDP session were disconnected, would that result in improved speed?
I've actually done a lot of work from home via Remote Desktop that involved an Excel Workbook (and Access Applications) doing lots of hefty calculations. From my experience, I didn't notice any slowdown in the calculations on the Excel sheet, but occasionally the connection would slow and anything that refreshed the screen a lot would make the PC difficult to use.
The most important thing, however, is to write code that touches the visual elements of the screen as little as possible. For example, instead of looping through a bunch of cells and setting each one as the active cell to find its value, loop through a set of range values in memory so the sheet never has to refresh. This, by far, has produced the biggest performance boost in my VBA code.
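As a rough illustration of that point (the sheet name "Data" and the range A1:A10000 below are placeholders, not anything from the original workbook), reading the whole range into an array keeps the loop entirely in memory, so nothing on screen has to repaint:

Sub SumWithoutSelecting()
    ' Read the range into a Variant array once, then work in memory;
    ' no cell is selected or activated, so the screen never refreshes.
    Dim v As Variant
    Dim i As Long
    Dim total As Double

    v = Worksheets("Data").Range("A1:A10000").Value
    For i = 1 To UBound(v, 1)
        total = total + v(i, 1)
    Next i

    Debug.Print "Total: " & total
End Sub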
If your code is already fairly optimized, you'll probably not see any difference monitoring it over RDP. However, if monitoring is your main concern, you could consider outputting progress data to a separate Excel or text file stored on a shared server. If done correctly, I imagine that would have a smaller impact on the CPU than RDP, and it would still allow you to monitor the progress of the Excel application without having to log in.
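A minimal sketch of that logging idea, assuming a shared path such as \\server\share\progress.txt (the path and the message text are placeholders):

Sub LogProgress(ByVal msg As String)
    ' Append a timestamped line to a text file on a shared drive so the
    ' run can be watched from another machine without an RDP session.
    Dim f As Integer
    f = FreeFile
    Open "\\server\share\progress.txt" For Append As #f
    Print #f, Now & " - " & msg
    Close #f
End Sub

Calling something like LogProgress "Finished block 3 of 10" at key points in the calculation leaves a live trail that any machine with access to the share can check.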
Just look at the CPU usage of Excel and of the RDP server process. If Excel isn't getting close to 100% of a core while calculating, or if the RDP server seems to be using too much CPU, then yes, RDP is making things slower.
I have big Excel files, thousands of rows and columns, gigabytes in size.
When I work inside these files in Excel, it throttles and lags, and sometimes it just gets stuck and freezes.
When I open Task Manager, I see that Excel isn't even using one full CPU core.
RAM usage isn't maxed out either.
How can I make Excel use all my cores?
This is Excel 2019.
I see you are familiar with Python. Why don't you move your data to Python? Actually, I don't know its capabilities; I work in R. Some of my data that takes 100-200 MB in Excel or *.csv is less than 5 MB in *.Rdata format, and R functions work faster than heavy Excel.
And yes, how to make Excel use all the processor cores is interesting to me too.
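For what it's worth, the multi-threaded calculation setting (File > Options > Advanced, under the Formulas section, in the UI) can also be inspected and switched on from VBA. Note that it only parallelises formula recalculation; VBA macros themselves always run on a single thread, which is often why Task Manager shows Excel stuck at roughly one core. A minimal sketch:

Sub ShowCalcThreads()
    ' Turn on multi-threaded recalculation and report how many
    ' calculation threads Excel is using.
    Application.MultiThreadedCalculation.Enabled = True
    Debug.Print "Calculation threads: " & _
        Application.MultiThreadedCalculation.ThreadCount
End Sub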
I'm using Excel VBA to pull data from an MS Access DB; this is with Excel 2013 and Access 2013, 32-bit. The code has historically used:
Provider=Microsoft.Jet.OLEDB.4.0;
However, some computers have been upgraded to Excel 2016 64-bit, and the Jet provider is not available for 64-bit. I have changed the code to:
Provider=Microsoft.ACE.OLEDB.12.0;
which works for both 64-bit and 32-bit systems. However, I have noticed a significant drop in loading/saving speed just from changing this line. Does anyone know why this might be and how I can improve it?
You are correct that you have to choose the ACE provider for 64-bit.
The big advantage of JET was that it was (and still is) installed on all copies of Windows by default, so there was no need to install Access, the Access runtime, or, previously, the Office connectivity package.
As for performance? There have been a few comments about performance with regard to ACE x64.
However, one trick or suggestion is to ensure that the connection stays open. In other words, are you sure the row processing itself is slow, or is it the overall time?
(Perhaps put a test MsgBox, or a timing test, in your code.)
E.g.:
Dim T As Single
T = Timer()
' your code here
Debug.Print Timer() - T
The above will spit out the elapsed time to the debug window (in the VBA IDE, hit Ctrl+G to display the Immediate/Debug window).
The reason I suggest the force-open idea is that you often find ACE takes a VERY long time to open. But once it is open, data reading has good performance (the same as before).
So I suggest you check for this and try the following fix.
Open a table (any table) and KEEP it open. Now run your existing code (which may well open and close other tables). The issue is that when ACE attempts to open a table, it tries to put locks on the mdb/accdb file, and it is this process that takes a VERY long time.
However, if you force (keep) open one table, then this very slow process of ACE attempting to lock the file for read/write does not occur each time you execute a query or create additional recordsets in code.
So, if the row reading speed is fast but the time to START and open is very slow, then before you run and test your routines, force open a table into some recordset (keep it active and in scope), and THEN try your code.
I find that 9 times out of 10 this eliminates the slow speed, and often the results are nothing short of spectacular (it will run faster than before!).
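A minimal sketch of the force-open idea, using late-bound ADO as in the question (the database path and the table names below are placeholders):

Sub RunWithTableHeldOpen()
    ' Hold one small recordset open for the whole run so ACE only has to
    ' acquire its locks on the accdb/mdb file once, then run the real
    ' queries on the same open connection.
    Dim cn As Object, rsKeepOpen As Object, rs As Object

    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
            "Data Source=C:\Data\MyDatabase.accdb;"   ' placeholder path

    ' Open any small table and leave it open for the duration.
    Set rsKeepOpen = CreateObject("ADODB.Recordset")
    rsKeepOpen.Open "SELECT TOP 1 * FROM SomeSmallTable", cn

    ' ... your existing load/save code runs here at normal speed ...
    Set rs = CreateObject("ADODB.Recordset")
    rs.Open "SELECT * FROM SomeBigTable", cn
    ' process rs ...
    rs.Close

    rsKeepOpen.Close
    cn.Close
End Sub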
I developed a VB.NET program that uses Excel files to generate some reports.
Since the program takes a long time to generate a report, I usually do other things while it is running. The problem is that sometimes I need to open other Excel files, and the Excel files being used by the program are then shown to me. I want to keep the files being processed hidden even when I open other Excel files. Is this possible? Thanks
The FileSystem.Lock Method controls access by other processes to all or part of a file opened by using the Open function.
The My feature gives you better productivity and performance in file I/O operations than Lock and Unlock. For more information, see FileSystem.
This isn't really a programming question per se, but at work we have been forced to upgrade from Excel 2007 to Excel 2016, which has caused some productivity issues with respect to opening multiple workbooks at once.
The problem is that our entire file system has a bunch of linked formulas and iterative calculations where it is necessary to have multiple workbooks open at once (around 60+). Previously, with Excel 2007, we could do this easily on relatively low-powered computers (4 GB of RAM with a mediocre hyperthreaded i3 processor), but after the change to Excel 2016 we keep getting an error telling us to "upgrade to 64-bit Excel" or "install more physical memory". The error message can be seen here. We have already upgraded to 64-bit Excel, and on top of that, I tried using coworkers' computers with 8 and 16 GB of RAM, which made little to no difference. Does this imply that Excel 2016 is simply not well optimized for having this many workbooks open at the same time? Are there solutions or workarounds for this problem? I find it hard to believe that 16 GB of RAM is still insufficient when Excel 2007 could handle this process easily; perhaps the change from MDI to SDI means it is less efficient?
As an aside, yes, I do wish we were not using Excel for such a computationally expensive process that it may not have been designed for, but moving everything to something like SAS would take a lot of time, I'd imagine, and ultimately I don't make the decisions :(
Thanks for any help.
Space Issues in a filesystem on Linux
Let's call it FILESYSTEM1.
Normally, space in FILESYSTEM1 is only about 40-50% used,
but then clients run reports or queries that produce massive files, about 4-5 GB in size, and these instantly fill up FILESYSTEM1.
We have cleanup scripts in place, but they never catch this because it happens in a matter of minutes, and the cleanup scripts usually clean data that is more than 5-7 days old.
Another set of scripts is also in place; these report when free space in a filesystem drops below a certain threshold.
We thought of possible solutions to detect and act on this proactively:
Increase the FILESYSTEM1 file system to double its size.
Set the threshold in the alert scripts for this filesystem to alert when it is 50% full.
This will hopefully give us enough time to catch the problem and act before the client reports issues due to FILESYSTEM1 being full.
Even though this solution works, it does not seem to be the best way to deal with the situation.
Any suggestions / comments / solutions are welcome.
Thanks.
It sounds like what you've found is that simple threshold-based monitoring doesn't work well for the usage patterns you're dealing with. I'd suggest something that pairs high-frequency sampling (say, once a minute) with a monitoring tool that can do some kind of regression on your data to predict when space will run out.
In addition to knowing when you've already run out of space, you also need to know whether you're about to run out of space. Several tools can do this, or you can write your own. One existing tool is Zabbix, which has predictive trigger functions that can be used to alert when file system usage seems likely to cross a threshold within a certain period of time. This may be useful in reacting to rapid changes that, left unchecked, would fill the file system.