For each BizTalk application we have a setup.bat, which creates the BizTalk application, creates file drops, builds the code, GACs the assemblies, registers resources, creates ports (using VBScript) and applies bindings. We also have a cleanup.bat which performs the opposite of setup.bat.
These scripts are run via NAnt, and ultimately invoked by CruiseControl.NET. They allow us to set up a BizTalk app on any machine that has BizTalk installed and the latest source and tools downloaded.
What do others do to "bootstrap" BizTalk applications in a repeatable and automated manner?
I've seen BizTalk NAnt tasks; are they faster than VBScript?
The setup.bat runs about three times slower on our BizTalk build machine! Disk, CPU, memory and paging all look comfortable. A full build/deploy takes two hours before any tests have run; we have about 20 BizTalk apps plus assorted C# services and custom components. Aside from a new machine or a rebuild (our build machine is a roughly five-year-old server with 4 GB of RAM and dual hyper-threaded cores), any ideas? What are your build machines like?
Michael Stephenson has written some great blog posts on automated BizTalk builds; take a look at link text.
We have used a utility that Mike posted to CodePlex which generates an MSBuild script for a BizTalk application; this has worked very well for us. You can find it at link text.
We use NAnt as well for our BizTalk deployment. Specifically, we use a combination of the BizTalk-related NAntContrib tasks (which all begin with bts) and the <exec> task to call the command-line btstask.exe directly.
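For reference, the kind of BTSTask.exe commands that such an <exec> task shells out to look roughly like the following (the application name, assembly path and binding file below are placeholders, not taken from the answer):

```
@echo off
rem Hypothetical example of the BTSTask.exe calls an NAnt <exec> task might wrap.
rem Assumes BTSTask.exe (from the BizTalk installation folder) is on the PATH.

rem Create the application container.
BTSTask AddApp /ApplicationName:MyBizTalkApp

rem Add a BizTalk assembly as a resource and GAC it as part of the add.
BTSTask AddResource /ApplicationName:MyBizTalkApp /Type:System.BizTalk:BizTalkAssembly /Source:"C:\drop\MyOrchestrations.dll" /Options:GacOnAdd

rem Apply the port configuration from an exported binding file.
BTSTask ImportBindings /ApplicationName:MyBizTalkApp /Source:"C:\drop\MyBizTalkApp.BindingInfo.xml"
```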
At some level, they are all using the same underlying technology to talk to the BizTalk server, so it's hard to say whether NAnt is faster than something like VBScript.
I will say that in my experience BizTalk appears to be a resource hog. Since it's hard to change that, the only thing we do have control over is the amount of resources we give it. Therefore, if builds are taking too long, and one has the time/money to do so, throw bigger and badder hardware at it. This is generally the cheapest way, as the amount of time we developers put into making sub-marginal improvements to build times can end up costing way more than hardware. For example, we've noticed that moving to 8GB of memory can make all the difference, literally transforming the entire experience.
I just create an MSI through the BizTalk Administrator. I keep my binding information separate from the MSI, so developers need to bind ports by importing the binding files, but that is easy.
In cases where assemblies need to be deployed into the GAC, I use a batch file that runs gacutil, then installs the MSI and finally binds the ports.
This approach is easy to maintain and, more importantly, easy for others to understand and troubleshoot.
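A minimal sketch of such a batch file might look like this (the file names, application name and paths are hypothetical, not taken from the answer):

```
@echo off
rem Hypothetical deploy.bat: GAC the helper assemblies, install the MSI exported
rem from BizTalk Administrator, then import the binding file kept outside the MSI.
gacutil /i "C:\deploy\MyApp.Components.dll"
msiexec /i "C:\deploy\MyApp.msi" /qn
BTSTask ImportBindings /ApplicationName:MyApp /Source:"C:\deploy\MyApp.BindingInfo.xml"
```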
In regards to BizTalk being a resource hog, first look at SQL Server and make sure you limit it to some reasonable amount of memory (by default it takes whatever it can, which is usually most of the available memory). That one change alone makes a significant difference.
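Capping SQL Server's memory can be done with sp_configure; the 2048 MB figure below is just an illustrative value for a developer machine, not a recommendation from the answer:

```
@echo off
rem Cap SQL Server memory on the local default instance (value is illustrative).
sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"
```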
You should also consider running only minimal software during development - that means disabling the anti-virus, or excluding directories from getting uselessly scanned when developers compile and deploy. Avoid using MS Word, Messenger, etc. on systems that have little RAM (2 GB or less) while developing a BizTalk solution.
On developers' workstations, enable the BizTalk tracking database purge and archive job ("DTA Purge and Archive") as explained here:
http://msdn.microsoft.com/en-us/library/aa560754.aspx
Keeping the database small saves valuable disk space which can help to improve overall performance.
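As a hedged illustration, once the job's purge parameters have been configured as described in that article, enabling it from a command line could look like the following (the job name shown is the default one created by BizTalk setup; yours may differ):

```
@echo off
rem Enable the (normally disabled) tracking purge/archive job on the SQL Server
rem instance hosting the BizTalk databases. Default job name assumed.
sqlcmd -S . -E -Q "EXEC msdb.dbo.sp_update_job @job_name = N'DTA Purge and Archive (BizTalkDTADb)', @enabled = 1;"
```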
There are quite a few solutions out there -
Rob Bowman mentioned Michael Stephenson's MSBuild generator.
Also on CodePlex you can find another framework by Scott Colestock, Thomas Abraham and Tim Rayburn.
There's also a minor addition from me playing with Oslo; it's not half as mature as those two, but it does use the SDC tasks, which are a great starting point if you wish to create your own MSBuild-based solution.
We have a strange problem with Access. In our plant we have multiple welders running LabVIEW software. Before each weld the operators scan bar codes on their badges and a tag on the part being welded.
The data from the scans goes to Access databases to verify the operator is authorized to operate the machine and to get information on the part being welded.
After a Windows update a few months ago this process slowed WAY down. One of those database queries could take 20 seconds or more.
We tried using more powerful computers and better network connections; neither helped.
One thing that we did find that helped was running Microsoft's Office scrub tool, the "Microsoft Support and Recovery Assistant". This removes all traces of Office. We then install AccessRuntime_x86 (Access Runtime 2013) so our LabVIEW-based software can work with the Access databases.
This restores the systems to normal operation for a week or so, then we need to run it again.
We’ve also found that we can just rerun the Access installer and use the “Repair” option and that helps.
The computers all have i5 processors and 8 or 16 GB of RAM, most have SSDs, and they are running Windows 10 Enterprise 64-bit.
Any advice or suggestions would be greatly appreciated!
I'm trying to run some load tests for a web application hosted on IIS using Visual Studio 2012.
Testing for several hundred users works fine (but isn't too helpful).
When I try raising the number to 1000+, the tests fail;
but not because the website can't handle the load - it's because my own computer can't handle it!
Is there any way to test for large amounts of users, without crashing my own computer?
Distributed load tests are a better bet in several ways. In addition to not overwhelming your computer, they better simulate traffic from different locations, whether around the country or around the world. You get a better picture of how actual traffic can affect your site response.
I haven't done load testing from Visual Studio, but it should be possible to hit your site (assuming it's on one or more servers, and not also running locally) from multiple computers running VS. Alternately, you might want to look into load testing services, such as from SOASTA, that run load tests from the cloud.
Some commercial load testing tools also allow for distributed load testing. Disclaimer: I work for Telerik in the Test Studio group, and we have distributed load testing.
Jacob,
It sounds like you're overloading your system. With any significant load run, regardless of the toolset, you need to get the various load components distributed out to run on different systems.
If you haven't already, you should have a look at the Working With Load Tests documentation on MSDN. See also the Considerations for Large Load Tests (especially the part about overloading Agents) as well as the Run a Load Test Using Agents pages.
Why does the package upload speed differ so much between the various upload methods?
The methods I've used to deploy are
Within VS2013: Very slow uploads
Powershell Commandlets: Again very slow
Azure Portal: Blazingly fast
I would not complain if the speed differed by a few seconds or, heck, a minute or two, but I'm talking tens of minutes for a 45 MB deployment package.
Methods 1 and 2 take over 45 minutes using high-speed broadband in New Zealand.
Uploading via method 3 takes just 30 seconds.
Something is not quite right.
Our network is not in question; I was able to reproduce this from other networks too.
I can confirm the issue of deployment from Visual Studio taking much longer than direct upload of the package; as far as I know this has been a known and common issue since the very first version of Azure.
Nonetheless, I would like to point out that upload via the Azure Portal is a mere data transfer: after the package has been deployed it takes the service some more time to become responsive, whereas after a VS deployment the service is responsive immediately once the deployment is done. From this I conclude that the difference in deployment time might be due to provisioning of the cloud architecture (creating, running or re-configuring host OS VMs) for your cloud services.
From what I've seen of Azure publishing using Visual Studio vs the Azure Portal, there seems to be no significant difference except for the time it takes to upload the package. As the original poster stated, the Portal is extremely fast, whereas Visual Studio takes forever to upload the package. I didn't realize the difference for a long time until I tried using the Portal one day just to see how that process worked, and I was floored by the difference in speed.
I'd definitely recommend using the Portal over Visual Studio. One caveat: you seem to need to use Visual Studio the first time around to create the Azure Tools certificate. I'm not sure if there's a way to create it without Visual Studio.
I think I've seen why Visual Studio takes so long to do the upload. If you start up Fiddler while the update is happening you'll see that Visual Studio is uploading the package in 65k blocks in sequence. As you're in NZ you get our wonderful high latency to US servers, so sending lots of smallish requests is not optimal.
Dedicated storage explorers are usually much more clever: if the file you're uploading is large, they will use larger blocks and also upload them in parallel.
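As a rough illustration of that route (assuming the classic AzCopy command-line tool; the storage account, container, key variable and package name below are placeholders), you could push the package to blob storage yourself, where the upload is parallelised, and then point the Portal or PowerShell deployment at the blob URL:

```
@echo off
rem Upload the .cspkg to blob storage with AzCopy (parallel block upload), then
rem deploy from the Portal or New-AzureDeployment using the blob URL.
rem STORAGE_KEY is assumed to be set to the storage account key beforehand.
AzCopy /Source:C:\deploy /Dest:https://mystorage.blob.core.windows.net/deployments /DestKey:%STORAGE_KEY% /Pattern:MyService.cspkg
```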
I am using Oracle 11g and I have an application coded with the Spring framework. When I configure the database on a Sun Fire X4170 running Linux, the machine's CPU utilization is around 80-100%; however, when I shift the same database to a Sun M3000 server running a Unix OS (supposedly a more powerful machine), application performance goes down and CPU utilization remains at 90-100%. I can't figure out whether it's the application that is causing this utilization or the database design.
I should add that the database is not relational; things are handled by the application.
Well you certainly can find some interesting opinions on the intertubes.
"Oracle does not have a true server architecture (others have it). Rather than performing classic server tasks, such as multi-threading, caching of data pages, parallel processing (split a query across many devices) etc. within itself, it uses the o/s to do all that. That means for each user process (PL/SQL connection) there is one unix process; 1000 users means 1000 unix processes, all competing for the same resources."
You might note that Oracle has had
a connection pooling architecture (multi-threaded server) since version 7 (1992).
a cache for data pages (known helpfully as the buffer cache) since forever
parallel query (splitting a query across many processes) since version 7.1 (1993)
splitting queries across multiple servers since OPS (version 6) or across distributed databases (version 5)
It's also noteworthy that even if all of that were correct rather than incorrect, it wouldn't actually help you determine the root cause.
"Especially noteworthy: because it uses file system files (not raw partitions), and the 'caching' is outside, it relies heavily on (and is very sensitive to) the file system cache that you have set up. Likewise, Oracle needs a massive amount of memory for these processes."
Oracle certainly can use raw partitions, again dating back to the last millennium. Moreover, if you wish to cache within the database (using the buffer cache that PerformanceDBA has forgotten about) and bypass the filesystem cache, that option is available on all current filesystems. Oracle also supplies its own combined filesystem/volume manager in ASM, which you can use if you wish.
Oracle is also rather well instrumented (and, if you have access to DTrace, so is Solaris): it can certainly tell you which sessions and processes are using the CPU and what consumes the time the application spends in the database (down to individual block read times if you care), so it is very amenable to profiling. I'd recommend that you check out "Thinking Clearly about Performance", available at http://www.method-r.com/downloads/cat_view/38-papers-and-articles and written by one of the top Oracle performance experts in the world. If you have access to the Oracle Diagnostics Pack, then looking first at ADDM reports and then at AWR reports would be profitable.
Trying to avoid a flame war here.
I should probably have separated the "how to find out" part of my response more clearly from my responses to PerformanceDBA's comments about server architecture. I share Stephanie's suspicions about the Spring framework, but without properly scoped measurement evidence there is no point in blaming any particular attribute of the environment; that would just be bias. Fortunately the instrumentation built into the Oracle kernel allows you to trace and then profile the slow sessions to determine exactly where the issue lies. So I would do the following (a command-line sketch follows these steps):
1) enable tracing for a representative session (you can use the dbms_monitor package for that).
2) also gather an execution plan for the statement(s) involved with the gather_plan_statistics hint.
3) profile the trace file by time using an appropriate profiler (tkprof, OraSRP, the Method R Profiler)
Then investigate the problem statements in order of their contribution to response time.
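A hedged sketch of steps 1 and 3 from a client command prompt (the connect string, SID/SERIAL# and trace file name are placeholders, and trace_on.sql is a hypothetical one-line script, not something from the answer):

```
@echo off
rem trace_on.sql is assumed to contain a single call such as:
rem   exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45678, waits => true, binds => true);
rem Run it against the database, exercise the slow workload, then disable tracing
rem with dbms_monitor.session_trace_disable for the same session.
sqlplus system/yourpassword@YOURDB @trace_on.sql

rem Profile the server-side trace file, sorted so the worst time consumers come first.
tkprof YOURDB_ora_12345.trc profile.txt sort=exeela,fchela,prsela
```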
If you can't carry out the above, then you can use ADDM and/or AWR if licensed (as I originally suggested), or Statspack if you are not licensed for the Diagnostics Pack. ADDM naturally concentrates on the big time consumers; if you are forced down the Statspack route, I suggest you do the same.
The M3000 is certainly a more powerful machine, but it is more suitable for true servers. The X4170 with hyper-threads is more suited for file servers.
I'm not so certain about that. Have any data to support that claim?
An M3000 has one SPARC64 VII processor with 4 cores (tech specs), while an X4170 has 1 or 2 Intel Xeon 5500 "Nehalem-EP" processors, each with 4 cores (tech specs). I know that I would expect much more from even a single-processor Nehalem-EP system than from the M3000. Obviously the data will vary somewhat with the workload, but I know where I'd put my money.
I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting to the years range) and I've seen something over and over that seems troubling.
When I deploy a .NET build into a Windows Azure web role (or worker role) it usually takes 6-15 minutes to start up. In AWS EC2 it takes about the same time to start up the image, and then a minute or two to deploy the app to IIS (pending, of course, its setup).
However, when I boot up an AWS instance with SUSE Linux and Mono to run .NET, I get one of these booted and can deploy code to it in about 2-3 minutes (again, pending it is set up).
What is going on with Windows OS images that causes them to take soooo long to boot up in the cloud? I don't want FUD; I'm curious about the specific details of what goes on that causes this. Any specific technical information regarding this would be greatly appreciated. Thanks!
As announced at PDC, Azure will soon start to offer full IIS on Azure web roles. Somewhere in the keynote demo by Don Box, he showed that this allows you to use the standard "publish" options in Visual Studio to deploy to the cloud very quickly.
If I recall correctly, part of what happens when starting a new Azure role is configuring the network components, and I remember a speaker at a conference once mentioning that this was very time-consuming. That might explain why adding additional instances to an already running role is usually faster (but not always: I have seen this take much more than 15 minutes as well on occasion).
Edit: also see this PDC session.
I don't think the EC2 behavior is specific to the cloud. Just compare boot times of Windows and Linux on a local system - in my experience, Linux simply boots faster. Typically, this is because the number of services/daemons launched is smaller, as is the number of disk accesses each of them needs to make during startup.
As for Azure launch times: it's difficult to tell, and not comparable to machine boots (IMO). Nobody knows exactly what Azure does when launching an application. It might be that it needs to assemble the VM image first, or that a lot of logging/reporting happens that slows things down.
Don't forget, there is a fabric controller that needs to check for fault domains and deploy your VMs across multiple fault domains (to give you high availability, at least when there are more than two instances). I can't say for sure, but that logic itself might take some extra time. This might also explain why network setup could be a little complicated.
This would of course explain the difference (if any) between boot times in the cloud and boot times for Windows locally or in Amazon. Any difference between operating systems is completely dependent on the way the OS is built!