Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Side note: I've heard this process called "versioning" before, but I'm not sure if that's the proper term.
Commit the build to GitHub.
Run git pull on the web servers to retrieve the new code from GitHub.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route set up in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
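To make that concrete, such a route might look roughly like this minimal Express sketch (the framework and the build path here are illustrative, not necessarily what we actually run):

var express = require('express');
var path = require('path');
var app = express();

// Any timestamp segment resolves to the same file on disk, so every
// versioned URL keeps working while the CDN caches each one forever.
app.get('/scripts/:version/app.min.js', function (req, res) {
  res.sendFile(path.join(__dirname, 'build', 'app.min.js')); // hypothetical build path
});

app.listen(3000);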
This works fine most of the time, but it breaks down when one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet, and it caches an old or partially downloaded file, causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades (unless the language supports it, and even then, think twice).
Pulling code with git pull and then waiting for the app to notice the changed files sounds a lot like the '90s: uploading PHP files to an Apache web server over FTP (or SFTP if you were cool) and waiting for Apache to notice they had been updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built, broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time, depending on how and when they access it, but that is much better than not being able to access the site at all.
Ideally you would be able to spin up copies of each of the web servers you have running, with the new version of the site. Check that the new version works, then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
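As a rough sketch of the "verify, then switch" step in Node (the health-check URLs and ports are assumptions for illustration, not a prescription):

var http = require('http');

// Hypothetical health endpoints of the freshly spun-up servers.
var newServers = ['http://127.0.0.1:9001/health', 'http://127.0.0.1:9002/health'];
var healthy = 0;

newServers.forEach(function (url) {
  http.get(url, function (res) {
    res.resume(); // discard the body, we only care about the status code
    if (res.statusCode === 200 && ++healthy === newServers.length) {
      console.log('All new servers look healthy - safe to repoint the load balancer');
    }
  }).on('error', function (err) {
    console.log('Not ready yet: ' + url + ' (' + err.message + ')');
  });
});

Only after a check like this passes would the load balancer be flipped to the new pool in one step.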
Step 4 in your procedure should be:
# Export the code into a fresh timestamped directory: no .git directory, no half-pulled files.
git archive --remote $yourgithubrepo --prefix=$timestamp/ HEAD | tar -xf -
stop-server                  # placeholder for however you stop your app
ln -sfn $timestamp current   # -n replaces the symlink itself instead of descending into it
start-server                 # placeholder for however you start it again
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post the far-from-ideal monkey patch we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists, but because the application code between the two versions is the same, the issue is masked: no matter which server the CDN hits, it gets the "latest" code.
No idea what happened... It was working and then it wasn't.
I am currently building a web app and decided to take some time off from the product side and build a landing page.
For some reason, I decided to build the landing page on a separate GitHub branch. So I checked out a new branch, deleted everything, and started working on the landing page.
I soon realized this is a terrible idea and created a new repo to store my landing page.
I switched back to my master branch and spun my Node server up, but for some reason now everything is timing out. I opened Postman and tried hitting some of my endpoints, but after about 3 minutes of loading it told me that it could not get any response and that there was an error connecting to localhost:3001/api/posts.
In my terminal, all I see is this when I hit the route:
GET /api/posts - - ms - -
This has never happened to me before and I am completely clueless on WTH happened.
I tried deleting my local stuff and re-cloning the repo and installing my dependencies but to no avail...
Would love to know if someone has an idea on what's going on.
Check first if this isn't because of another process already listening on that port (but using resources which were deleted or not properly updated).
Closing applications or even rebooting can help you determine whether the issue is permanent or just tied to your current session.
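If you want to confirm from Node itself that something else is already bound to the port, a quick throwaway check like this can help (the port number is taken from the error above; the rest is just illustration):

var net = require('net');
var tester = net.createServer();

tester.once('error', function (err) {
  if (err.code === 'EADDRINUSE') {
    console.log('Another process is already listening on port 3001');
  }
});

tester.once('listening', function () {
  console.log('Port 3001 is free');
  tester.close();
});

tester.listen(3001);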
The OP Syn points out in the comments that the ~/.env file was missing.
.env files allow you to put your environment variables inside a file.
You just create a new file called .env in your project and slap your variables in there on different lines.
To read these values, there are a couple of options, but the easiest is to use the dotenv package from npm.
npm install dotenv --save
Note: the .env file is generally not versioned, as it includes potentially sensitive data.
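Reading the values back at startup then looks something like this (the variable name here is just an example):

// Load variables from .env into process.env before anything else runs.
require('dotenv').config();

var port = process.env.PORT || 3001; // e.g. a .env file containing: PORT=3001
console.log('Listening on port ' + port);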
I'm writing a script that sets up a lot of different applications on Windows (mainly SVN and open-source servers for HTTP, DNS, mail, FTP and DB). This script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This not only helps new developers but also all existing developers: whenever there are changes in the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (it's on the Prod server) and restore it automatically on the new developer's machine so he doesn't have to install/configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only for the same machine (it uses encryption keys and those keys change between computers).
Is there a way, a scriptable way, to take everything IIS related from one server and restore it on another server, without user intervention and having the restored IIS run exactly as the original?
Thanks in advance!
Francisco
Just putting this here so anyone who comes across this will have an understanding of why it wasn't answered. A website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration through one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and how you access the properties within it using Get-WebConfigurationProperty. One way to get familiar with scripting against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the changes there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to help you gather the proper Get/Set commands for the configuration elements within the applicationHost.config file.
I've created something almost exactly like what the OP is looking for and it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
When a website has everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the app pool that may need to be added to certain local groups on the server, all the way to secure bindings that require a certificate to be loaded on the server, you can see that this isn't a simple undertaking (and these are just a small portion of the variables that a website may contain).
There is, however, a large set of cmdlets in the WebAdministration module that Microsoft provides out of the box, which you can leverage to aid you in developing something like this. I know this is four years old, but I hope anyone who stumbles on it will find the above useful.
I'm wondering if someone can enlighten me a little bit on the XPages build process and how it works with other replica copies of a database. Much of the advice I've seen posted regarding working with Domino Designer indicates (logically) that you'll get much faster response times working on local copies and then replicating those to the server.
I'll usually save my changes locally, build manually, and replicate to the server, and most of the time, that seems to work fine. However, on some occasions, I've found that when I view the work I've done in the browser on the server copy, it hasn't seemed to update... in fact in a couple of scary incidents, it displays a version from several weeks ago (where is it even getting that from??). This isn't a browser caching issue, and I've opened the design elements (xpages, custom controls) on the server copy and verified that the changes ARE there. I end up having to perform a Clean on the server copy (not just a build) of the application, and then it displays as expected.
This seems like a foolish question, but you shouldn't have to perform a build on each replica copy, correct? Any thoughts as to what might be the issue here? There is another developer involved; he works directly on the server as he's in the same location, but we are rarely working at the same time, and never on the same elements. We are not using source control at this time.
We have seen similar behavior ourselves.
In our case, we do development on a server, clean / build project and then copy that database as a template to a deployment server. From there, we update design in the production database.
We have noticed that the build process sometimes fails, especially when working over slower links. So we always repeat the clean / build / refresh process a couple of times, and we try to do it while in the office, with a fast connection between the workstations and the server.
We haven't experienced build problems lately, so repeating the build process obviously helps.
We have also seen that replicating design between local and server copies sometimes causes build related problems, which could explain the problems you are seeing. We have stopped using replication because of that and are now always working on the server copy directly.
I don't think that your not using source control software has anything to do with it.
I usually do all changes inside the local template, then perform "Project \ Clean", then update the design in the server database. It works in 99% of cases. If not, I perform "Project \ Clean" once again. I hate this, but it looks like it's the only way to get consistent code on production.
I have a web-site that needs to be up all the time. I also, of course, need to do new releases. Each page tends to be very long-lived, with lots of JavaScript doing AJAX calls to the server.
What I do is build a new WAR file and put it in Tomcat's webapps directory, which ends up looking like this:
20110701-7f077d 20110711-aa8db4 20110715-6f4a12
20110701-7f077d.war 20110711-aa8db4.war 20110715-6f4a12.war live
The WAR file is named after the date of its release and the first few characters of its Git commit ID, just so I can keep track of everything. Tomcat automatically unpacks the WAR file into a directory of the same name. The live directory just contains a file giving the name of the "live" version.
This way, each user can continue using the version of the back-end that works with the version of the front-end he has loaded into his browser. And obviously, version upgrades and reversions are painless.
Now, I'm switching to node.js and I want to do the same thing. I am reliably informed that node.js doesn't support independent applications in one instance. So, what to do?
The only thing I can think of is to designate n slots (where n is some small number like 10 or 100), where each slot corresponds to a port (i.e., slot 1 is 8001 and so on), put Apache in front of several node.js instances, one per slot, and have Apache use mod_proxy or mod_rewrite to proxy requests like '/slot01' to port 8001. "live" would point to the current slot.
This would be clumsy and error-prone, it would require an otherwise useless Apache instance, and most of all I cannot believe that node.js doesn't have a good solution to what seems like a near-universal problem.
You can use node-http-proxy and write some code to monitor your 'deployment directory' for new versions; when a new version is found, you start the corresponding script and proxy it under the directory name. To make myself clear: if you find a new directory 'version-11-today', your parent node-http-proxy script could start the new script, assigning it a port passed as a parameter, and then proxy to the new app under the path '/version-11-today'.
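A minimal sketch of that idea, assuming the http-proxy package and a hand-maintained map of version directories to ports (in practice your watcher script would keep this map up to date):

var http = require('http');
var httpProxy = require('http-proxy'); // npm install http-proxy

// Hypothetical mapping from version directory names to the ports their apps listen on.
var versions = {
  'version-10-yesterday': 8081,
  'version-11-today': 8082
};

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  var name = req.url.split('/')[1];
  var port = versions[name];
  if (!port) {
    res.statusCode = 404;
    return res.end('Unknown version');
  }
  proxy.web(req, res, { target: 'http://127.0.0.1:' + port });
}).listen(8000);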
A similar solution could be built with nginx; in this case you would write a script to monitor the deployment directory and generate a new nginx configuration whenever new apps are found.
If you are afraid you might run out of ports, I believe both node.js and nginx can listen on and proxy Unix sockets as well as inet sockets.
An advantage of the above is that each app runs in its own process, protecting the other apps from crashes and enabling individual app restarts.
A third solution, if you are not afraid that some bug will crash your app, is to have a parent script that loads all the app versions in the same process and maps them under different paths depending on the directory they were found in. You can still restart your server without downtime, as in this example: http://codegremlins.com/28/Graceful-restart-without-downtime
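As a rough illustration of that single-process variant, assuming each version directory exports an Express app (the directory names are hypothetical):

var express = require('express');
var parent = express();

// Mount each deployed version under a path matching its directory name.
parent.use('/version-10-yesterday', require('./version-10-yesterday/app'));
parent.use('/version-11-today', require('./version-11-today/app'));

parent.listen(8000);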
I'm working on a website with some other people. Usually when we want to modify something, we make the change on our machine, upload the new version over FTP, hope it'll work (or that nobody will notice it doesn't before we correct it), and that's it.
It's already not the best way to work alone, and even less so collaboratively, so I'm asking for advice.
I think a solution like SVN/Git/Mercurial could help me. I found Bitbucket, which offers free private repositories with Mercurial. But still, after that, how can I upload the changes I made to the FTP server and make sure the version I have on my computer is the same as the one on the server?
We are all doing this in our free time (unpaid), and some people come and go every year, so I'm looking for something free, easy to use (explaining to everyone why we should use a DVCS is already hard), and which doesn't rely on a specific person.
The server we are using to host the website is a cheap one and doesn't allow the use of SSH, SVN, etc.
Thank you
Version control will not help with the issue you are describing - namely, uploading untested changes to a production site.
What you (and your team) need is better quality control procedures: you need a test website and a tester (QA person). The process would be:
Make a change
Update the test website
Have the update and the whole website signed off by QA
Update the production/live site
What you will gain by using version control (CVS, SVN, Git or anything else) is recoverability - you will be able to go back to a version before any breaking change. It will still not solve the issue of "the new code broke the site".
You want scheduled releases.
Commit and update code regularly
Code freeze or develop in a branch and merge to the trunk
Test on a staging environment
If you find a bug, go to step 1
Release
You need to understand that your latest correct working build is represented not by what's on the server but by what's in your source repository, whether that be SVN or just the file system. Anything, as long as it isn't the live server! Make sure everything works locally as expected; then, unless the site is huge (I guess not, given your situation), deploy it in its entirety as a single version.