Improve Dynamics CRM Solution Import Speed - dynamics-crm-2011

Our Dynamics CRM Solution is quite large, and it takes between 20 and 25 minutes to complete an import on our servers.
While testing the import process on a Dynamics CRM hosting provider, the import took significantly less time, around 8 minutes.
In an attempt to see if hardware can improve the import speed, I've set up a virtual machine with Dynamics CRM in VMware Workstation 8. The VM lives on an SSD, has 4 cores (from a 6-core i7-3930K) and 12 GB RAM. It still took around 20 minutes. I tried SQL 2008 R2 SP2 and SQL 2012 with no noticeable difference.
How can I improve the import speed software-wise? Is there any information available that goes into detail on what the import process does, so we can optimize around those variables?
The solution contains 60+ entities, customizations to 40+ system entities, plugins, ribbon buttons, sitemap changes, processing steps, and a few hundred web resources. It's currently a little over 6MB.
Also, how can I tell which hardware component is the biggest bottleneck for the import process? Perfmon showed the SSD idling away for most of the import, RAM was at 6.5 GB, and only the processor showed relatively higher use, but not more than 30%-40%. Or is VMware Workstation itself the bottleneck, and would dedicated hardware or ESX/Hyper-V improve this?

Even though the VM has four cores, the import process itself runs on a single thread, which would explain the relatively low CPU use: one core (plus a bit of another) working flat out comes to 30-40% total CPU. My bet is that you would see similar times (maybe a bit longer, but not by much) even if you gave the VM just one core.
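To confirm that, it helps to watch per-core CPU (and disk throughput) during an import rather than the overall figure. A minimal sketch using Python's psutil package (an assumption on my part; Perfmon's per-processor counters work just as well):

    # Sample per-core CPU and disk throughput every 5 seconds while the
    # import runs (Ctrl+C to stop). Assumes "pip install psutil".
    import psutil

    print("per-core CPU %  |  disk read/write MB/s")
    prev = psutil.disk_io_counters()
    while True:
        per_core = psutil.cpu_percent(interval=5, percpu=True)  # blocks for 5 s
        cur = psutil.disk_io_counters()
        read_mb = (cur.read_bytes - prev.read_bytes) / 5 / 1024 ** 2
        write_mb = (cur.write_bytes - prev.write_bytes) / 5 / 1024 ** 2
        prev = cur
        print(per_core, "|", round(read_mb, 1), "/", round(write_mb, 1))

If one entry sits near 100% while the others idle, the import really is CPU-bound on a single thread, and extra cores or RAM won't buy you much.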
Your real problem here is that your solution is just too damn big. Break it into components. Test to see if you can narrow the performance issues down to certain customizations. I'd start by removing all Web Resources, for example, and see how that affects the import.
Another approach would be to split it into two solutions - one that is "stable" and another that is under active development. Then you'd be importing a smaller solution more frequently.

You're guaranteed not to get worse performance on real hardware. But since your VM isn't using everything it has, you should be able to improve performance without going to that step just yet. Have you tried changing the IIS settings in your VM?

Related

Access queries get very slow after about a week, acts like a cache problem

We have a strange problem with Access. In our plant we have multiple welders running LabVIEW software. Before each weld the operators scan bar codes on their badges and a tag on the part being welded.
The data from the scans goes to Access databases to verify the operator is authorized to operate the machine and to get information on the part being welded.
After a Windows update a few months ago this process slowed WAY down. One of those database queries could take 20 seconds or more.
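(For context, the check each station performs boils down to a single lookup of the scanned badge. The real code lives in LabVIEW; the sketch below just shows the shape of that query and a way to time it, with the driver path, table and column names as placeholders.)

    # Simplified, hypothetical sketch of the badge-authorization lookup.
    # Requires the Access Database Engine ODBC driver and "pip install pyodbc".
    import time
    import pyodbc

    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=\\server\share\welders.accdb"  # placeholder path
    )
    start = time.perf_counter()
    row = conn.cursor().execute(
        "SELECT Authorized FROM Operators WHERE BadgeId = ?", "12345"
    ).fetchone()
    print(row, f"{time.perf_counter() - start:.2f} s")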
We tried using more powerful computers and better network connections, neither helped.
One thing that we did find that helped was running Microsoft’s Office scrub tool, “Microsoft Support and Recovery Assistant”. This removes all traces of Office. We then install AccessRuntime_x86 (Access Runtime 2013) so our LabVIEW based software can work with the Access databases.
This restores the systems to normal operation for a week or so, then we need to run it again.
We’ve also found that we can just rerun the Access installer and use the “Repair” option and that helps.
The computers all have i5 processors and 8 or 16 GB of RAM, most have SSDs, and they are running Windows 10 Enterprise 64-bit.
Any advice or suggestions would be greatly appreciated!

OS specific build performance in Java

We are currently evaluating our next-generation, company-wide developer PC configuration and have noticed something really weird.
Our rather large monolith has, on our current configuration, a build time of approx. 4.5 minutes (no tests, just compile).
For our next-generation configuration we upgraded several components: a moderate increase in processor frequency and IPC, double the number of CPU cores, and a switch from a small SATA SSD to an NVMe SSD rated at >3 GB/s. The next-generation configuration also switches from Windows 7 to Windows 10.
When executing the first tests, we noticed an almost identical build time (4.3 minutes), which was a lot less improvement than we expected.
During our experiments we tried at one point to run the build process from within a virtual Linux machine running on the Windows host. On the old configuration (Windows 7) we saw a drop in build times from 4.5 to ~3.7 minutes; on the Windows 10 host we saw a decrease from 4.3 to 2.3 minutes. We have ruled out things like virus scanning.
We were rather astonished by these results and have tried to find an explanation other than the usual almost-religious and insulting statements about different operating systems.
So the question is: what could we possibly have done wrong in configuring the Windows machine such that it runs at almost half the speed of a Linux VM hosted on that very same Windows machine? Especially as all the hardware advancements seem to be eaten up by the switch from Windows 7 to 10.
Another question is: how can we make the javac process use more cores, because right now, using HotSpot JDK 8, we can see at most two cores really used by the build? I've read about sjavac, but that seems a rather experimental feature only available from OpenJDK 9 onward, right?
After almost a year of experimenting we came to the conclusion that it is indeed NTFS which is the evil-doer. If you use an NTFS user partition with a Linux host, you get results fairly similar to an all-Windows setup.
We benchmarked Gradle builds, Eclipse internal builds, starting up WildFly and running database-centered tests on multiple devices. All our benchmarks consistently showed a speedup of at least 100% when switching from Windows to Linux (in real-world benchmarks Windows sometimes takes 3x as long as Linux, and some artificial benchmarks showed a 60x speedup!). Especially on notebooks we experienced much less noise, as the combined processor load of a complete build is substantially lower than on Windows.
Our conclusion was to switch from Windows to Linux, which we did over the course of the last year.
Regarding the parallelisation issue, we realized it came down to a form of code entanglement. Resolving this helped Gradle and javac parallelise the build a lot (also have a look at Gradle composite builds).
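If you want to reproduce the filesystem effect without running a full build, a small-file benchmark already makes it visible, since a build spends most of its I/O on many small files rather than on raw throughput. A minimal sketch (file count and payload size are arbitrary placeholders; run it once on the NTFS partition and once inside the Linux VM):

    # Create, stat and delete many small files to compare per-file filesystem
    # overhead (e.g. NTFS vs ext4). Count and size are arbitrary placeholders.
    import os
    import time
    import tempfile

    N = 20_000
    PAYLOAD = b"x" * 2048

    with tempfile.TemporaryDirectory() as root:
        t0 = time.perf_counter()
        for i in range(N):
            with open(os.path.join(root, f"f{i}.tmp"), "wb") as f:
                f.write(PAYLOAD)
        t1 = time.perf_counter()
        for i in range(N):
            os.stat(os.path.join(root, f"f{i}.tmp"))
        t2 = time.perf_counter()
        for i in range(N):
            os.remove(os.path.join(root, f"f{i}.tmp"))
        t3 = time.perf_counter()

    print(f"create: {t1 - t0:.1f}s  stat: {t2 - t1:.1f}s  delete: {t3 - t2:.1f}s")

Running the same script on both sides should show whether per-file overhead, rather than raw throughput, is what the build is paying for, which would also explain why the NVMe upgrade on its own bought so little.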

In the context of Azure Websites, is a 2 "small" standard instance setup better than 1 "medium" server instance setup? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have read a lot about the importance of having at least 2 website instances in Azure, one reason being that MS will only honour its SLA if there are at least two, since one server can be patched while the other remains available.
However, we have strict budgets, and currently have 1 medium server with the bigger RAM. I have always believed that a bigger server with more RAM is always better. Also, 2 cores on the same machine may be quicker as well.
We have noticed the odd recycle, but it is too early to say whether this is due to MS patching.
Assume my application is an MVC3/EF5/SQL Azure app with 10 concurrent users, and processing is straightforward, i.e. simple DB queries etc.
In the context of Windows Azure, assuming a budget limit, would 1 medium (2 x 1.6 GHz cores and 3.5 GB RAM) server instance be better than 2 small (1 x 1.6 GHz core and 1.75 GB RAM) web server instances?
Thanks.
EDIT 1
I noticed this question has attracted 2 votes for being opinion-based. The question is designed to attract reports of real experience in this area, which of course informs opinion. This is hugely valuable for my work, as well as for others.
EDIT 2
Interesting point about the SLA. I was concerned that when MS does an update, one instance would disappear while this occurred. So what would happen in that case? Does Azure just spin up another instance? Also, what happens when one instance is working on a slower process, e.g. waiting for something like a DB transaction? With 2 instances the LB would redirect to instance 2. Logically this sounds superior. It would still work with session vars, as MS has implemented "sticky sessions".
I am intrigued that you recommend going with a "small" instance. 1.75 GB RAM seems so tiny for a server, as does 1 core at 1.6 GHz. I need to do some memory monitoring here. Out of interest, how many times would the main application DLLs be loaded into RAM? Is it just once, regardless of the number of users? It may be a basic question, but I just wanted to check. It makes you think when one's laptop has 16 GB and 8 cores (i7). However, I guess a laptop runs a lot of different bloated processes, rather than the fewer and smaller processes on a server.
Unless your app is particularly memory hungry, I would go for a single small instance and configure autoscale to start more servers as needed. Then just keep an eye on the stats. You can have a look at how much memory you are currently using; if it's less than what you get with a small instance, you don't get any benefit from the extra RAM.
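If you want a number rather than a guess for the memory question, you can check the working set of the IIS worker process(es) hosting the app, for example on a similarly sized staging VM (the Azure portal's memory metrics tell you much the same thing). A rough sketch with psutil, assuming you can run a script on the box and the app is hosted by w3wp.exe:

    # Report resident memory of IIS worker processes, to compare against the
    # ~1.75 GB a "small" instance offers. Assumes "pip install psutil".
    import psutil

    total = 0
    for p in psutil.process_iter(["name", "memory_info"]):
        if p.info["name"] and p.info["name"].lower() == "w3wp.exe" and p.info["memory_info"]:
            rss_mb = p.info["memory_info"].rss / 1024 ** 2
            total += rss_mb
            print(f"pid {p.pid}: {rss_mb:.0f} MB")
    print(f"total worker-process memory: {total:.0f} MB")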
The SLA for Websites does not require two instances, that rule applies only to Cloud Services.
I have found that you can do a surprisingly large amount of work on single, small instances; I have several systems in that kind of setup which only use a few percent of capacity, even at hundreds of requests per minute. With 10 users you are unlikely to even have IIS use more than one thread, unless you have some very slow responses (I'm assuming you are not using async), so the second core will be idle.
For another example, look at Troy Hunt's detailed blog about haveibeenpwned.com, which runs on small instances.
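A cheap way to sanity-check the concurrency side is to point a modest number of parallel requests at a staging deployment and look at the latencies. A rough, standard-library-only sketch (URL, request count and concurrency level are placeholders):

    # Fire REQUESTS GETs, CONCURRENCY at a time, and report latency percentiles.
    # URL, request count and concurrency level are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://your-staging-site.azurewebsites.net/"  # placeholder
    CONCURRENCY = 10
    REQUESTS = 200

    def hit(_):
        t0 = time.perf_counter()
        with urlopen(URL) as resp:
            resp.read()
        return time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        times = sorted(pool.map(hit, range(REQUESTS)))

    print(f"median {times[len(times) // 2] * 1000:.0f} ms, "
          f"p95 {times[int(len(times) * 0.95)] * 1000:.0f} ms")

If the p95 stays flat at that concurrency, a single small instance plus autoscale is likely enough.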

TFS and SharePoint are slower after upgrading to TFS2010 & SharePoint2010 [closed]

Closed 10 years ago.
We've recently upgraded (migration path) from TFS/SharePoint to TFS 2010/SharePoint 2010.
Most things went well, but there were a few issues that became immediately apparent.
TFS was noticeably slower (as pointed out by the entire dev team). Basically all "get latest" and query operations were more sluggish. Starting VS2010 SP1 is also really slow at loading all the projects (40+) on my machine; a refresh after that is not normally a problem. Even though other people may only have 3-4 projects open at a time, they too noticed the "working..." delay.
SharePoint was definitely much slower. The "Show Portal" takes forever to load, and basic editing is slower too.
Work items occasionally "time out" for no reason, and end up in a "connection lost" error. It's normally while creating a new work item, and a redo of the same command works fine. It happens even during bulk work item creation, but the timing is random.
The server runs Windows Server 2008 with 12 GB RAM and plenty of CPU power (quad core). The IIS connectionTimeout is set to 2 minutes (default). I've played with MinBytesPerSecond, which is set to 240 by default (I've set it to 42 as well, but no joy), and I understand that VS 2010 in general might be a bit slower than its 2008 counterpart, but even so. No processors are maxed out. There are lots of MSSQLSERVER info logs in the Event Viewer though (I just noticed this; not sure if it's a problem). I've also changed the defaultProxy setting in the devenv.exe.config file, but no joy there either.
It's too late for a downgrade. ;)
Has anyone experienced similar problems after the upgrade?
I would love to hear from ya! :o)
We experienced performance issues after upgrading from TFS 2008 to 2010, but it is much better now. We have learned that the antivirus and SQL Server configurations are critical. In a virtualized environment, storage performance is key too. We have about 100 TFS users in a two-tier server setup.
The SQL Server originally had its default memory settings, as follows:
1 - SQL Server max memory: 2 TB
2 - Analysis Services max memory: 100%
With those settings, our 8 GB SQL machine was unusable.
Now we have:
1 - SQL Server max memory: 4 GB
2 - Analysis Services max memory: 15%
Now the performance is ok but not great.
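For reference, the SQL Server cap can also be applied from a script instead of Management Studio's server properties; the Analysis Services limit lives separately in its own server properties. A minimal sketch via pyodbc (driver and server name are placeholders; 4096 MB matches the setting above):

    # Cap SQL Server's max memory at 4 GB via sp_configure.
    # Driver/server are placeholders; requires "pip install pyodbc".
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=TFS-SQL;Trusted_Connection=yes",
        autocommit=True)
    cur = conn.cursor()
    cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
    cur.execute("EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;")
    for name, value in cur.execute(
            "SELECT name, value_in_use FROM sys.configurations "
            "WHERE name = 'max server memory (MB)'"):
        print(name, value)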
The antivirus exclusions have to be configured too. Basically, exclude all the data and temp directories.
As our TFS setup is virtualized, we are in the process of adding more storage hardware to get better disk performance. That should definitely solve our performance issues.
Simon
Are all components installed on one machine? Is the SQL layer also installed on that machine? Is the machine virtualized?
It's always better to install the SQL layer on physical hardware than to install it virtually. SharePoint 2010 requires 4 GB of RAM; to ensure that SharePoint is usable you should size the WFE with at least 8 GB of RAM.
Our TFS was also slow with 4 GB, so I added another 4 GB. With this setup the entire environment is now really fast.
Let's summarize:
SQL: physical installation w/ 12GB RAM, Quad Core (duplicated for failover)
SharePoint: virtualized w/ 8GB RAM, Quad Core
TFS: virtualized w/ 8GB RAM, Quad Core
Both SharePoint and TFS generate heavy load on the database. I have a showcase machine running on my EliteBook as a Hyper-V image. The image has about 12 GB of RAM and runs on an external SSD, but it is still a lot slower than our production environment.
I hope my thoughts and setups are helpful.
Thorsten

Suggestions for a productive hardware setup with excellent virus protection

This question is a little opinion based, but I think it can be based in fact and I would prefer answers backed up with a link to a reputable company if possible.
The problem is that at my job we have "okay" hardware for the developers: laptops running Windows XP (I know) with a dual-core 2.3 GHz processor, 2 GB of memory and a 60 GB 7200 rpm hard disk. However, the amount of virus-scanning, security-agent and big-brother software on these makes them unusable when scans are running. My company insists on running full-disk virus scans every Monday and "smart scans" every other day.
I appreciate the concern for viruses as much as the next guy, but it is hindering our work, and we are looking for a new setup that allows the developers to work unimpeded by scans, yet provides the virus protection etc. that the company is looking for.
Any suggestions?
a) Try to change the scanning frequency/schedule - the machines are presumably running on-access scanning, so they don't need to be doing scheduled scans as well.
b) If the policy is immutable, profile the machine to see which resource is being exhausted. It's probably the disk - laptops tend to have poor disks, and both scheduled AV scans and development/compilation tend to stress I/O. So look at putting the fastest disks in the laptops - or even SSDs.
Couldn't you just schedule the scans for when the computers aren't in use? This would lead to a higher power consumption, but would save you the burden of suffering through the scans.
You could also change the priority of the scanning applications so that they only use up idle CPU and IO time.
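If the scanner's own console doesn't expose a priority setting, the running process can be demoted from a script. A sketch with psutil (the process name is a placeholder, and many AV products protect their processes, so this may simply be refused):

    # Drop the CPU and I/O priority of a running scanner process.
    # The process name is a placeholder; requires "pip install psutil".
    import psutil

    SCANNER = "scanner.exe"  # placeholder

    for p in psutil.process_iter(["name"]):
        if p.info["name"] and p.info["name"].lower() == SCANNER:
            try:
                p.nice(psutil.IDLE_PRIORITY_CLASS)  # lowest CPU priority class on Windows
                p.ionice(psutil.IOPRIO_VERYLOW)     # background I/O priority on Windows
                print(f"lowered priority of pid {p.pid}")
            except psutil.AccessDenied:
                print(f"access denied for pid {p.pid}")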
Have you looked at Avast (avast.com)? I use the free home edition on all the computers around the house running Windows and have not noticed any slowdowns or viruses since using it. They also have a professional/enterprise version that might work for you.
In addition, what about using Firefox with Adblock and NoScript for your web browsing?
