This is a fun one.
I am new to deploying resources using ARM templates, but I have managed (with some difficulty!) to get them working to a satisfactory degree.
I have a question about best practices though and can't seem to find any articles on it.
If I have 2 projects that share a resource (in this case a SQL server), is it best practice to define that resource in both templates, or should there be some sort of shared project that creates it on a first-come-first-served basis when building and deploying?
Currently I use the former.
The reason I bring this up is that I can imagine my way eventually causing issues: if I change the shared resource in one template, I have to change it in all of them.
I usually create everything in separate templates and don't like to reuse templates, because reuse really brings nothing to the table. I can copy/paste the same snippet into 2/3/10 different templates; it's no big deal, and I can use mass find/replace to amend something. I've never had a case where a certain resource needed to be changed in exactly the same fashion across all existing projects. Projects usually have at least something in common (storage/VMs/public IPs/etc.), but they usually need to be configured differently, so I'd say don't try to reuse templates: little gain, and it might hurt you a lot in the future.
I would think of this in terms of "lifecycle". I.e. does the SQL Server have the same lifecycle as the databases? It sounds like it does not... which would suggest not only would it be a separate template, but you would not deploy that template each time you deploy a database.
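One common way to express that is to keep the shared SQL server in its own template and have each project's template reference it as a linked deployment. A minimal sketch of the linked-deployment resource; the URI and names here are placeholders, and in practice you would parameterize them and deploy the SQL template on its own, less frequent, cadence:

    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "sharedSqlServer",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "https://example.blob.core.windows.net/templates/sqlServer.json",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "serverName": { "value": "[parameters('sharedSqlServerName')]" }
        }
      }
    }

Because the deployment runs in Incremental mode it is idempotent: re-running it against an existing server just reasserts the desired state instead of conflicting.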
ARM templates are code so follow best practices you use for other code (where sharing and reuse have their place).
When I am playing around and kicking tires I usually end up duplicating things like you're doing now, but once I make things "production ready" I'll clean up those things as it will eventually cause problems like you're thinking. Depends a little on your workflow, but code is code...
I have two node apps running on my server, each performing different tasks.
However, I now need to create a service that is going to be used by both of them. Obviously I don't want to implement it in both apps, since that would mean two copies of the code to maintain.
My current thought is to have a separate repository only for this service, then require it from each app as an outsourced module.
I was wondering whether there are better methods, or if this method might encounter problems I'm not seeing.
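For concreteness, a minimal sketch of that approach (the package name and repository URL are made up): each app lists the shared repo as a git dependency in its package.json,

    "dependencies": {
      "shared-notification-service": "git+https://example.com/acme/shared-notification-service.git#v1.0.0"
    }

and then consumes it like any other module:

    // hypothetical usage inside either app
    const notifier = require('shared-notification-service');
    notifier.sendAlert('disk space low');

Pinning to a tag (the #v1.0.0 above) keeps one app from breaking when the other needs a newer version of the shared code.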
Well, if you strictly follow the rule that only truly shared things go in the common package, I don't see any issues with that. The problem comes when you put logic in the shared repo that is really only used by one app. In that scenario you will need to rebuild both apps, as the repo or package is a dependency of both.
One issue I have seen people face with a shared repo is needing to tweak things just because they live in a common place. For example, you have a method that does one job, and suddenly you want to use it somewhere else as well, but with a tweak. You end up modifying the shared code to support the second consumer, but since it is shared, you then have to do regression testing of both apps.
Good shared-repo candidates are things like drivers, clients, etc. The rest is up to your project structure and judgement; in this case there is nothing strictly correct or incorrect. Hope this makes sense.
Currently we are running a C# project (built on SharePoint) and have implemented a series of automated processes to help delivery. Here are the details.
Continuous Integration. Typical CI system for frequent compilation and deployment in DEV environment.
Partial Package. Every week, a list of defects and their accompanying fixes is identified, and the corresponding assemblies are fetched from the full package to form a partial package. The partial package is deployed and tested in the subsequent environments.
In this pipeline, two packages are going through verification: the full package and the weekly partial package. Extra effort went into building a separate system (web site, scripts, process, etc.) for the partial packages. However, some factors hinder further improvement.
Build and deploy time is too long. On developers' machines, every single modification to an assembly triggers a 5-to-10-minute redeployment in IIS. In addition, it takes 15 minutes (or even more) to rebuild the whole solution. (This is the most painful part of the project.)
Geographical difference. Every final package is delivered to another office, so manual operation is inevitable and a small package size is preferred.
I will be really grateful to have your opinions to push the Continuous Delivery practices forward. Thanks!
I imagine the reason that this question has no answers is because its scope is too large. There are far too many variables that need to be eliminated, but I'll try to help. I'm not sure of your skill level either so my apologies in advance for the basics, but I think they'll help improve and better focus your question.
Scope your problem as narrowly as possible
"Too long" is a very subjective term. I know of some larger projects that would love to see 15 minute build times. Given your question there's no way to know if you are experiencing a configuration problem or an infrastructure problem. An example of a configuration issue would be, are your projects taking full advantage of multiple cores by being built parallel /m switch? An example of an infrastructure issue would be if you're trying to move large amounts of data over a slow line using ineffective or defective hardware. It sounds like you are seeing the same times across different machines so you may want to focus on configuration.
Break down your build into "tasks" and each task into the most concise steps possible
This will do the most to help you tune your configuration and understand what you need to better orchestrate. If you are building a solution using a CI server, you are probably running a command like msbuild.exe OurProduct.sln, which is the right way to get something up and running fast so there IS some feedback. But in order to optimize, this solution will need to be broken down into independent projects. If you find one project that's causing the bulk of your time sink, it may indicate other issues or may just be the core project that everything else depends on. How you handle your build job dependencies depends on your CI server and solution. Doing it this way will create more orchestration on your end, but give faster feedback if that's what's required, since you're only building the project that had the change, not the complete solution.
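As a sketch, once the solution is split up, the CI job for a single component shrinks to building just the project that changed (the path here is hypothetical); adding /p:BuildProjectReferences=false also skips rebuilding dependencies that are already built:

    msbuild src\Core\Core.csproj /m /p:Configuration=Release /p:BuildProjectReferences=false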
I'm not sure what you mean about the "geographical difference" thing. Is this a "push" to the office or a "pull" from the offices? This is a whole other question. HOW are you getting the files there? And why would that require a manual step?
Narrow your scope and do multiple questions and you will probably get better (not to mention shorter and more concise) answers.
Best!
I'm not a C# developer, but the principles remain the same.
To speed up your builds, it will be necessary to break your application up into smaller chunks if possible. If that's not possible, then you've got bigger problems to attack right now. Remember the principles of APIs, components and separation of concerns. If you're not familiar with these principles, it's definitely worth the time to learn about them.
In terms of deployment: it's great that you've automated it, but it sounds like you are doing a big-bang deployment. Can you think of a way to deploy only deltas to the server(s), or do you deploy a single compressed file? Break it up if possible.
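For example, if the target is reachable as a file share, a mirroring copy ships only the files that changed (the paths are invented):

    robocopy C:\build\output \\qa-server\d$\wwwroot /MIR /Z /LOG:deploy.log

/MIR mirrors the tree, copying only differences and deleting orphans, which is one cheap way to get delta deployments without changing the build itself.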
This is kind of an open question: I'm trying to define a set of recommended practices for a team for delivering a SharePoint solution in stages. That means changes will happen at every level (solution, feature, web part, content types, etc).
In your experience, which are the practices that really, really worked for you? For example, using upgrade custom actions and placing upgrade logic entirely in FeatureUpgrading event handlers? Or in FeatureActivated handlers, assuming features could already exist? Something else?
I'm asking because I know of projects that follow The Word on this from many MSDN articles and still, upgrades are kind of nightmarish to manage, and those processes are sometimes difficult to grasp for average devs.
Thanks!
As no one else has ventured an answer, my current approach is:
Use the declarative approach for those cases where it works 100% of the time e.g. NOT content types
Fall back to code for the rest
Always write the related code so that it can be run multiple times against either the pre- or post-upgrade state
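As a minimal sketch of that last point, here is the shape of a re-runnable FeatureActivated receiver (the list name is hypothetical, and it assumes a web-scoped feature):

    using Microsoft.SharePoint;

    public class UpgradeAwareReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            var web = (SPWeb)properties.Feature.Parent;

            // Idempotent: create the list only if the pre-upgrade state
            // (or an earlier activation) has not already created it.
            if (web.Lists.TryGetList("UpgradeLog") == null)
            {
                web.Lists.Add("UpgradeLog", "Tracks upgrade runs",
                              SPListTemplateType.GenericList);
            }
        }
    }

The same guard-before-create pattern applies to fields, content type bindings, and anything else the declarative approach can't be trusted with.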
Weird question, perhaps. We have a number of simple utilities written in-house that need to be run on an automated basis. These are not build jobs. Just things like running SendOutHourlyEmailAlarms.exe, KeepFoldersInSynch.exe and such. I would normally set these things up as simple scheduled tasks/AT commands (or a Windows Service if more granular control is needed over the scheduling), but a co-worker has set up a number of these tasks as build projects on the CruiseControl.NET server. I asked him why he set these up this way and his response was that the executions (and their logs, return values, thrown exceptions) were all tracked and logged and that this information was accessible through an organized interface on the build server website. I couldn't argue with this.
But this just has a smell that I can't quite identify. Is this a proper use of CruiseControl.NET? If not, what are the dangers? Even if it may fit the bill, aren't there other products better suited for this type of thing?
We have all sorts of non-build-related tasks for the exact same reason your coworker gave: I want one spot to look up any and all jobs I need run.
Some Examples of our CC.NET projects:
FTP installers to Remote QA
Creating Source Code Documentation
Create VMs with the installers installed, ready for QA in the morning
Archiving Installers
Pretty much anything I have to do by hand more than once becomes a project. IMHO it is much better than a scheduled task for one other reason as well: our config files are in source control, so we have one place to make adjustments. We do not have to log into multiple servers to make adjustments, or wonder which server ran what.
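For illustration, one of those "projects" is nothing more than a trigger plus an exec task in ccnet.config (the names, path, and interval below are made up):

    <project name="SendOutHourlyEmailAlarms">
      <triggers>
        <!-- fire every hour instead of on a source-control change -->
        <intervalTrigger seconds="3600" buildCondition="ForceBuild" name="hourly" />
      </triggers>
      <tasks>
        <exec executable="C:\Tools\SendOutHourlyEmailAlarms.exe" />
      </tasks>
    </project>

And because this file lives in source control, rescheduling a job is a commit rather than a remote-desktop session.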
I think your coworker has made a good argument. If these tasks are related to the development process, then placing them in CruiseControl.NET as projects seems acceptable. I would draw the line at utilizing a development server to run production processes, though. Although it is true that "if the only tool you have is a hammer, you tend to see every problem as a nail," that doesn't mean the hammer isn't capable of solving a lot of problems!
Just because a tool is designed to solve a particular problem does not mean that it will not have equal facility at solving similar problems outside the scope originally conceived by the tool's creator. If CruiseControl.NET solves these problems well, then it is absolutely the appropriate tool to use.
If your company or project places an emphasis on (or at least appreciates) the development of code and components that can be reused and shared across projects, what are the "social engineering" things you've needed to do to facilitate the reuse of that code?
In my experience, code or components that are simply stated as being "reusable" won't be reused unless that code has someone to champion or evangelize it. Otherwise, people simply won't know about it.
How do you make sure shared components or reusable code work in your organization?
A couple of thoughts come to mind:
Make them very well documented and easy to figure out. That way, no one will give up on using them because they're too confusing.
Make them very useful, and make sure they take care of problems that are so annoying that people would have to be crazy not to use them.
Another great tactic is to find out what code other people in the organization have in their projects and offer to extract some of that functionality (talk up how great it is and how you really want to use it in your project). Once their code is added to the shared module, you usually gain one more fan of the shared library, plus an evangelist to help you sell the idea. Remember, people usually only do things that benefit them, so making them and their code look good is a strong incentive.
In my organization, we had management help to foster the creation of a shared library. We also indoctrinate new hires to use it.
Then we add good code that must be reviewed by everyone for usefulness and completeness. This is where the group buy-in comes into play.
Also, we have a very robust, documented process for branching the shared lib for use in Visual Studio. Hint: we use the svn:externals property to manage checkouts from different repositories into the same folder structure. Potential shared code first lives in the branches before it gets promoted to the trunk.
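As a sketch, the checkout wiring is a one-time property on the consuming project's working copy (the URL and folder names are invented):

    # pull the shared library's branch into this project's working copy
    svn propset svn:externals "SharedLib https://svn.example.com/repos/shared/branches/projectX" .
    svn update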
The best way I've found to make people want to use your reusable code is to show them how to use it the way you intend. Provide a sample program using your library. Keep it in the source code repository just like any other project. Keep the sample program documented and updated when the reusable library changes.
Personally I try and demo code that I think is useful and give a comparison between using it and not using it to try and show why it's so much better. Normally this happens during weekly development meetings.
I agree that it really needs someone to evangelise and push for the adoption of new methods. One thing I've found is that a lot of developers are not that great at selling themselves or what they've done, so it's worth working on those skills and pushing others to do the same, leading by example.
This may be a bit Java-centric, but publishing both binary and source in a corporate Maven repository does wonders for visibility. It makes other people happy to use your code ;) We work with a lot of open source and find that the ready availability of source code to read is really a key feature, especially when it can integrate directly into the IDE. We really expect that from in-house projects too! I wonder how we managed before we had Maven? (Even Ant can use a Maven repo nowadays.)
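For anyone setting this up: getting the sources into the repository alongside the binary is just the source plugin in the pom.xml (the version shown is merely an example):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <version>3.2.1</version>
      <executions>
        <execution>
          <id>attach-sources</id>
          <goals>
            <goal>jar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

With that in place, mvn deploy publishes a -sources.jar that IDEs pick up automatically.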
In a grad school class a few years ago, I did a case study on a web-based repository where programmers could deposit their code for reuse. This wasn't for my workplace, but for a lab with thousands of scientists and engineers, where there wasn't any other centralized means of sharing.