Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I'm currently trying to speed up our TeamCity builds.
After a successful msbuild step, we package up our solution using nuget.exe and then publish it to a NuGet feed so that Octopus Deploy can do its thing.
I'm at the stage now where the NuGet package step takes ~4 minutes (we have a large number of assets) and the NuGet publish step takes ~30 seconds. This accounts for about 75% of our overall build time, so any time I can shave off here would be good progress.
I was wondering if anyone has experience with both OctoPack and nuget.exe and is in a position to tell me whether either of the two methods is quicker than the other? No hard numbers needed; anecdotal evidence is enough.
OctoPack uses NuGet.exe under the hood, so you can't really compare the two as alternatives. You can view the OctoPack source code on GitHub to see how it works. I would point out that using a different version of NuGet.exe might make a difference; I have no evidence to support this, though, so it's a shot in the dark.
With OctoPack, you can pass a parameter (NuGetExePath) to specify the specific NuGet.exe you want to use. See the .targets file within OctoPack for more information on the parameters you can pass to it.
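As a rough sketch, the call from a build step might look something like this (the exact property name should be checked against the .targets file in your OctoPack version, and the paths are invented):

    msbuild MySolution.sln /t:Build /p:RunOctoPack=true /p:OctoPackNuGetExePath=C:\tools\nuget\nuget.exe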
I haven't used nuget.exe directly, but I do use OctoPack, and what I can tell you is that it packages up the three projects in our solution in 3, 7 and 12 seconds respectively.
The publish artifacts stage for each package takes between two and four seconds. Obviously if you have a large number of assets there is more to consider but hopefully that will give you a flavour of time spent per package.
Also, you can be up and running with OctoPack in moments, as it is just a case of installing the OctoPack NuGet package in your solution and checking the Run OctoPack box in your build step in TeamCity.
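If you'd rather script it than use the TeamCity checkbox, the rough equivalent (solution name and version number are placeholders) is to add OctoPack from the NuGet Package Manager Console and then pass RunOctoPack to the MSBuild step:

    Install-Package OctoPack
    msbuild MySolution.sln /t:Build /p:RunOctoPack=true /p:OctoPackPackageVersion=1.0.0.123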
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I am looking for a tool to keep track of my servers. We use chef-solo everywhere, but it's hard to keep track of everything. Is there a simple tool for tracking servers, something like Spacewalk but simplified?
I use Spacewalk for some servers I admin, but it's a bit too bloated for what I want now, although the way it works is great: we kickstart servers using chef-solo and then register them with Spacewalk using a static key, so we have a nice overview of all the servers, including those that didn't call back after a certain time.
We really like the concept of chef-solo and do not want chef-server, for a number of reasons. What is missing from our infrastructure is a simple tool with a simple web interface to keep track of the servers.
Thank you.
You are dismissing the obvious answer to your question... chef-server :-)
I used to advocate chef-solo, but there have been some recent improvements in both tools and processes surrounding chef server. I now firmly believe you're not using chef properly if you omit the server.
In brief:
Chef 11 has made massive improvements in setting up your own chef server. You can even use chef-solo to bootstrap your chef infrastructure using the chef-server cookbook.
Bootstrapping nodes against chef server provides the tracking features you're missing. For example, you can write handlers that store pretty much anything about your nodes at run time. This data is indexed by chef server and available via its REST API (see the knife sketch just after this list).
Some great new tools are available for managing cookbooks. Berkshelf will manage the download and upload of cookbooks and spiceweasel will generate all those nasty knife commands.
chef-zero is being positioned as a better chef-solo. I personally think it serves a different use case, but it is an interesting tool, especially for testing chef recipes that rely on search.
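As a small illustration of the tracking side, once nodes register with a chef server a couple of knife commands already give the kind of overview the question asks for (the run-list query below is just an example):

    knife status                                        # time since each node last checked in
    knife search node "recipes:apache2" -a ipaddress    # query the indexed node data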
We are currently running a C# project (built on SharePoint) and have implemented a series of automated processes to help delivery; here are the details.
Continuous Integration. Typical CI system for frequent compilation and deployment in DEV environment.
Partial Package. Every week, a list of defects and their accompanying fixes is identified, and the corresponding assemblies are fetched from the full package to form a partial package. The partial package is deployed and tested in subsequent environments.
In this pipeline there are two packages, the full one and the partial one, going through and being verified. Extra effort went into building a new system (web site, scripts, process, etc.) for the partial packages. However, some factors hinder further improvement.
Build and deploy time is too long. On developers' machines, every single modification to an assembly triggers a 5 to 10 minute redeployment in IIS. In addition, it takes 15 minutes (or even more) to rebuild the whole solution. (This is the most painful part of the project.)
Geographical difference. Every final package is delivered to another office, so manual operation is inevitable and the package size needs to stay small.
I will be really grateful to have your opinions to push the Continuous Delivery practices forward. Thanks!
I imagine the reason that this question has no answers is because its scope is too large. There are far too many variables that need to be eliminated, but I'll try to help. I'm not sure of your skill level either so my apologies in advance for the basics, but I think they'll help improve and better focus your question.
Scope your problem as narrowly as possible
"Too long" is a very subjective term. I know of some larger projects that would love to see 15 minute build times. Given your question there's no way to know if you are experiencing a configuration problem or an infrastructure problem. An example of a configuration issue would be, are your projects taking full advantage of multiple cores by being built parallel /m switch? An example of an infrastructure issue would be if you're trying to move large amounts of data over a slow line using ineffective or defective hardware. It sounds like you are seeing the same times across different machines so you may want to focus on configuration.
Break down your build into "tasks" and each task into the most concise steps possible
This will do the most to help you tune your configuration and understand what you need to orchestrate better. If you are building a solution on a CI server, you are probably running a command like msbuild.exe OurProduct.sln, which is the right way to get something up and running fast so there IS some feedback. But in order to optimize, this solution will need to be broken down into independent projects. If you find one project that's causing the bulk of your time sink, it may indicate other issues, or it may just be the core project that everything else depends on. How you handle your build job dependencies depends on your CI server and solution. Doing it this way creates more orchestration on your end, but gives faster feedback if that's what's required, since you're only building the project that had the change, not the complete solution.
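Once the solution is split up, a CI job can build only the project that changed rather than the whole solution, along these lines (the project path is hypothetical):

    msbuild.exe src\Core\Core.csproj /m /p:Configuration=Release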
I'm not sure what you mean about the "geographical difference" thing. Is this a "push" to the office or a "pull" from the offices? This is a whole other question. HOW are you getting the files there? And why would that require a manual step?
Narrow your scope and do multiple questions and you will probably get better (not to mention shorter and more concise) answers.
Best!
I'm not a C# developer, but the principles remain the same.
To speed up your builds, it will be necessary to break your application up into smaller chunks if possible. If that's not possible, then you've got bigger problems to attack right now. Remember the principles of APIs, components and separation of concerns. If you're not familiar with these principles, it's definitely worth taking the time to learn about them.
In terms of deployment: it's great that you've automated it, but it sounds like you are doing a big-bang deployment. Can you think of a way to deploy only deltas to the server(s), or do you deploy a single compressed file? Break it up if possible.
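As a sketch of the delta idea, assuming the target servers are reachable over SSH (the host and paths here are invented; robocopy /MIR is a rough Windows equivalent), something like rsync transfers only the files that actually changed:

    rsync -az --delete ./Output/_PublishedWebsites/MySite/ deploy@web01:/var/www/mysite/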
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I'm currently trying to set up automated deployment for our node.js based system. I've been doing quite a bit of research, but nothing really has jumped out as the obvious choice of tool to automate what I'm trying to do, which can be summarised as:
Pull code from central Mercurial repo into build-server build directory.
Concat/Minify relevant client side JS
For each server:
SSH into box
copy relevant files over SSH (SCP or whatever) (different code for different server roles)
restart relevant processes.
I'm probably going to use Jenkins for the high-level management of this, but am undecided on the tool to use to actually script the work.
It doesn't have to be a JS-based build script, but that's an option (although I'm not entirely convinced that JS is the right language for this stuff anyway). I'd be OK with Python or Bash style solutions.
What's a sane/robust choice capable of the tasks listed above?
Thank you!
UPDATE: Sorry, I didn't mention before, but ideally I'd like to have the build tasks run on a central Build/Deployment server, and not locally on the development machines.
Nowadays I am using Capistrano for all my deployment needs, be it PHP, Ruby or Node.
There are recipes for almost all situations, but with experience it is easy to build your own. You can hook your own commands to certain events in the deployment process.
Capistrano uses SSH to access production or staging servers and issue commands remotely.
Here are some recipes for node.js (but I have not tried them):
https://github.com/loopj/capistrano-node-deploy
In case it is of any value to users in the future, I ended up going with Fabric.
If you insist on using your own servers to host the app, you can always use grunt.js for the automation. You can write custom tasks for it and do whatever you want, or find existing ones in the community for the cases you mention; I believe tasks for minification and the like already exist.
As a personal recommendation, though, I can say I've been happy with hosting my node apps on NodeJitsu (paid service). They provide a command-line utility installed through npm, which can copy your code to their cloud, do a snapshot and start the app automatically. This is the easiest deployment scenario I've ever done.
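For reference, that workflow boiled down to a couple of commands at the time (check their documentation for the current syntax):

    npm install -g jitsu    # install the command-line utility
    jitsu login             # authenticate against the NodeJitsu cloud
    jitsu deploy            # snapshot the app and start it on their servers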
Closed. This question is opinion-based. It is not currently accepting answers.
I have a closed-source Linux application that I want to distribute. This application uses wxWidgets/GTK, so there is a huge list of shared libraries (60+) that it depends on.
What is the preferred way to publish the application and support the maximum number of distros?
Is it to build the application for each supported distribution and publish the builds separately? This has the drawback of being complicated to build (a chroot and a build per distro) and will only work on the supported distributions.
Is it to ship all shared libraries in the installer and use them via the LD_LIBRARY_PATH environment variable (like VMware does)? This has the drawback of increasing the size of the installer.
Is it to build a completely static application? This is surely not possible as it will break some licenses.
Is it a mix of that or another option? How do most commercial vendors publish their own graphical (preferably GTK-based) application?
You should have a look at the Linux Standard Base. It's designed specifically to help people in your position. It defines an environment that third-party application developers can rely upon, so there's a set version of libc and other libraries, and certain programs and directories live in known places. All of the main Linux distributions support the LSB.
That said, you should still probably package the result specifically for each major distribution - just so that your customers can manage your app with their familiar package management tools.
Basically, there are two ways. You can choose both, if you wish.
The first way is the common way games and such do it. Make a lib/ subdirectory, use LD_LIBRARY_PATH and include just about every shared library you need. That ensures a pain-free experience for your users, but it does make the installer bigger and probably the memory footprint bigger as well. I would not even attempt to reuse preexisting libraries, as they tend to disappear as upgrades are made to the system.
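The usual trick is a small wrapper script shipped next to the real binary; a minimal sketch (binary and directory names are made up):

    #!/bin/sh
    # Resolve the directory this script lives in, even when started via a symlink
    APPDIR="$(dirname "$(readlink -f "$0")")"
    # Prefer the bundled libraries in ./lib over whatever the distro provides
    export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$APPDIR/bin/myapp" "$@"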
The second way is to provide distribution packages. These are generally not that hard to make, they integrate nicely with the distributions, and they will seem a lot more welcoming to your customers. The two downsides are: you'll need to do this for each distribution (Debian, Ubuntu, SuSE and Red Hat are probably a good start), and you will need to maintain them: as time goes on, some libraries will no longer be available in a specific version, and the user will run into dependency problems.
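If maintaining four or five native package formats by hand sounds like too much, a tool such as fpm can generate .deb and .rpm packages from the same directory tree; a rough sketch (name, version and paths are placeholders):

    fpm -s dir -t deb -n myapp -v 1.2.3 --prefix /opt/myapp ./build/
    fpm -s dir -t rpm -n myapp -v 1.2.3 --prefix /opt/myapp ./build/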
In your installer, check which libraries are installed and then download the binaries for those which aren't.
For the additional comfort of your users, if there is no connection to the Internet, have the installer generate a key which they can enter on your website to receive a ZIP archive that they can then feed to the installer.
For utmost comfort, check which libraries are available on the target distro and ask the user to use the standard admin tool to install them. That way, you won't pollute the computer with different versions of the same library.
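One simple way for an installer script to implement the "check which libraries are missing" step (the binary name is a placeholder):

    ldd ./myapp | grep "not found"    # lists the shared libraries the loader cannot resolve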
That said: it might be smarter to put your valuable code into a link library and then provide that as a binary blob inside a source package. This way your code is as protected as it would be in a pure binary, and users can compile the glue code on their favourite system without you having to worry about it.
I mean: how much is the part of your code that sets up the UI really worth? How much will you lose if someone steals that?
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
The situation:
I want to play around with IRC bots as general communications interfaces to other code I am investigating. The server hardware would be old and low-memory, but running on a relatively up-to-date Debian GNU/Linux install. I don't expect more than a hundred users at a time, tops, and probably in the single-digits most of the time. The interfaces are more of interest here than the server itself, so I'd prefer something relatively simple to maintain over something with a huge number of configuration and tuning options more useful to a larger site.
Referencing the Wikipedia comparison and the Google PageRank list against the available package list for Debian comes up with the following top contenders: Undernet (ircd-ircu), Ratbox (ircd-ratbox), and Inspire (inspircd). Unfortunately, I can't find any serious comparisons of them, so I'm hoping that asking here will provide a faster solution than just trying them one at a time until something frustrates me enough to move.
Unreal IRCd is full-featured, if a little complex to set up.
Over the past couple of days I have been coding a bot with Python and IRCLib. Since I am coding the communication interface, I needed to see the raw data transferred between the server and the client, so I needed an IRC server that would support that. At first I was using IRCD, and it was totally fine, but after a while I realized that I was missing some features that IRCD did not have, since it's outdated. After further research I found ngIRCd.
I compiled it from source with the options --enable-sniffer and --enable-debug. Now, when I want to see the information sent between my bot and my client, I only need to start the server with the -n and -s options, like this: ngircd -n -s
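For completeness, the whole sequence looks roughly like this (installation prefix left at the default):

    ./configure --enable-sniffer --enable-debug
    make && sudo make install
    ngircd -n -s    # -n: stay in the foreground, -s: print sniffer output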
Here is the website of the server: http://ngircd.barton.de/
Unreal IRCd is what I finally picked for hosting an IRCd. Why? Halfop, admin/protect, founder/owner, advanced operator ACLs, vHosts via i:lines, and so on.
Also see
http://en.wikipedia.org/wiki/Comparison_of_IRC_daemons
http://www.howtoforge.com/linux_irc_server_anope_services
Use XMPP instead. IRC is not very well designed for your situation; it can be made to work, but it is a big pain.