Updating existing SPA project on production (Maintenance mode?) - frontend

I have an existing project which is live on a production server. There are a few users online almost all the time.
The front-end part is a SPA application (this is important).
On the backend we have a Maintenance mode which we enable when we want to push new features (this is bad because zero downtime is achievable nowadays, but that is beside the point).
My question is: what is the correct way to update the front-end in production?
1) Should we just remove the old files and copy the new files to the production folder? (But in this case a user might click on some link that still references the OLD BUNDLE, which is no longer on the server, and get a 404. Isn't that so?)
2) Should we return some notification page (e.g. "Maintenance. Please visit our site later") while this process is running?
What alternatives exist?
Thank you for any help!
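One alternative worth sketching (this is an assumption, not something stated in the question): if the build emits content-hashed bundle filenames, the new build can be copied alongside the old files rather than replacing them, so users still holding the previous index.html can keep fetching the old chunks. A minimal shell sketch with hypothetical paths:

BUILD_DIR=./dist                 # hypothetical build output
DEPLOY_DIR=/var/www/myapp        # hypothetical web root
# copy the new build WITHOUT deleting the old hashed bundles (no --delete),
# so links to the previous bundle keep resolving instead of returning 404
rsync -av "$BUILD_DIR"/ "$DEPLOY_DIR"/
# index.html (served with no-cache) now points at the new bundles;
# the old hashed bundles can be cleaned up after a few releases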

Related

Deploy Blazor Server Side app without stopping site

What is the recommended strategy for deploying a Blazor Server SPA on a Windows Server machine with IIS without any (even very short) downtime?
When I make changes, I publish the project to a local folder, but then I have to manually stop the site on the server, otherwise the copy hangs due to a file access lock.
Thanks
In my opinion, if you want to achieve zero downtime when publishing the new site, the only way is to use two servers.
You should use two servers and IIS load balancing. You could first modify the load balancer to send all requests to the second server and then publish the application to the first server.
After you have tested the first server and made sure it is working well, you can transfer all requests back to the first server.
For more details about how to use IIS load balancing, you could refer to this article.
In the end, I created a .bat that, before copying to the production server, renames an App_offline.xxx file to App_offline.htm, copies the files, and then renames it back.
A few seconds of downtime (with a message "server updating, refresh in a few seconds...") is much better than manually stopping the site and then restarting it.
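The .bat itself isn't shown above, but the rename-copy-rename logic looks roughly like this (written as a shell sketch rather than batch, with assumed paths):

SITE=/path/to/iis/site           # hypothetical site root
PUBLISH=/path/to/publish/output  # hypothetical publish folder
mv "$SITE/App_offline.xxx" "$SITE/App_offline.htm"   # take the site offline; IIS serves this page
cp -r "$PUBLISH"/. "$SITE"/                          # copy the new build while files are unlocked
mv "$SITE/App_offline.htm" "$SITE/App_offline.xxx"   # bring the site back online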

deploy single file on node.js production server without restarting server and down time

We are planning to start our new project in node.js, but I want to be clear on the deployment side. Previously we had some applications written in ASP.NET. We deploy those applications as websites, so when a page changes we deploy only that page to the web server and don't need to restart the website or the server. This kind of deployment, where we can deploy a single page, reduces the risk of pushing changes to production that we don't want to release, as opposed to MVC where you have to deploy the DLL.
So my question is: can we do that kind of deployment on our node.js production site, where I deploy only one changed file, say 'abc.js', without restarting the server, and without the users connected to the site experiencing any issues or disconnections, as happens with .NET MVC where the user's session is ended?
Sorry if this seems too basic but I need to know.
Thanks
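As a hedged aside (not part of the original question): Node caches modules on require(), so swapping a single .js file in place generally won't take effect without a restart. A common compromise is a process manager that restarts workers one at a time so connections aren't dropped. A minimal sketch, assuming pm2 and an app.js entry point:

pm2 start app.js -i 4   # run the app as 4 clustered workers
# ... copy the changed file, e.g. abc.js, onto the server ...
pm2 reload app          # rolling restart of the workers, no dropped connections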

What is the purpose of 1 click installation of Drupal on Web hosting?

I am in the process of making my first Drupal 7 website and am searching for web hosting for it. I found that several hosts advertise 1-click Drupal installation. But as I searched the net for the standard practice of site development, many sources explain: develop the site in a local environment and then transfer it to the web server (which includes transferring the database and the whole Drupal installation with modules). That is quite convincing: you develop locally and transfer to the web so it works there just as it did locally.
On the other hand, what is the use of a 1-click Drupal installation on the web server? I believe it installs a fresh Drupal core, so I would have to start developing all over again, installing each module, starting from square one.
So, which is the BEST practice for making a website live: should I develop locally first, or develop directly on the web server?
At the same time, what is the best practice for maintaining a site? I have read that there should be one live site where visitors come, a second test site similar to the live one, and one local site. What is the standard practice for this, and how do you maintain it?
Many thanks in advance.
In my previous answer, in point 2, I outline 4 environments: DEV, STAGE, QA and PROD. This is usually the process at "biggish" companies where lots of people might be working in the infrastructure, development and QA departments. That said, if you are not working in a complex environment, you might just have 2 Drupal installations: one for testing, DEV (e.g. dev.mysite.com), and one live (e.g. mysite.com). The different URLs can be arranged from your cPanel or personal panel in the case of a shared server.
They might run on the same server; the dev site is the one you work on while creating the site, and you clone the dev site and make it live once the site is ready.
You keep the dev site as a space to test new features, fix bugs, and test updates of modules or core files. Once these new features are implemented, or the bugs fixed, you replicate the same steps on the live site.
Git is a version control system: it allows you to keep track of the code you are working on. You might want to create 2 branches: DEV and MASTER.
You work on DEV to create the site, update files or fix bugs, and you merge to MASTER and pull the code on the live server once it's stable. I hope this clarifies things a bit.
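A minimal sketch of that two-branch flow (the remote name origin is assumed):

git checkout -b DEV            # day-to-day work happens on DEV
# ... edit files, git add, git commit ...
git checkout master
git merge DEV                  # merge once the changes are stable
git push origin master
# then, on the live server:
git pull origin master         # pull the stable code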
1) One-click installations are usually offered on shared servers, which may have lower performance and memory limits than your local LAMP setup. It is good to check which versions of PHP and MySQL run on the server, as well as the max upload file size and connection time limits. I prefer to start working locally and then publish to my server, BUT if you install on your server first you will get a good idea of how Drupal will perform in the real world; you can then always clone the site and database to your local machine, and you also avoid the ugly surprise of trying to move your site from local to your server and finding bugs or migration issues.
2) In an enterprise dev environment you might have 3 or 4 stages: DEV ("wild west"), STAGE (release candidate), QA (quality assurance server) and PROD (live site). You usually sync (e.g. with Git) your local copy to DEV or STAGE, then you push to QA, and then, if it's all good, to PROD.

Access 2013 web app - restoring previous app snapshot package without reverting data (structured staging environment)

I have a reasonably complex Access 2013 web app which is now in production on hosted O365 SharePoint. I would like to take a backup (package snapshot) into a test environment and then migrate it to production once development is complete (I certainly don't want to do development on the production system!). The problem is that the snapshot also backs up all the data, so uploading the new package over the top of the existing package in the SharePoint app repository reverts the data to the time of the snapshot as well. Alternatively, rolling back to the original snapshot if there are issues would lose all data entered after the new package was applied.
I can easily get a second version of the app going by saving it as a new application etc., but this creates a new product ID etc. in the app store. We also use an Access 2013 desktop accdb frontend to hook directly into the Azure SQL database to do all the things the web app can't provide (formatted reports etc.), so I can't just create a new app every time without dealing with all of the credential and database renaming issues.
So my question is: does anybody know how to safely operate a test environment for Access 2013 web app development? One needs to be able to apply an updated version, or roll back to the old one if there are problems, without rolling back the data. With the desktop client I can just save a new copy of the accdb file every time, obviously. I don't mind creating a new instance or link to the app on SharePoint each time, however this obviously generates a totally new database (server name, db location, login IDs etc.) as well. You would hope there is a way to upload and replace your app without touching the data; otherwise, how else can you develop without working directly in production?
Any answers would be really appreciated.
Thanks.

Deployment race condition causing CDN to cache old or broken files

Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (eg /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route set up in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
This works fine most of the time. But where it breaks down is if one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades. (unless the language supports it, but even then think twice)
Pulling code using git pull and then waiting for the app to notice changes to files sounds a lot like the '90s: uploading PHP files to an Apache web server using FTP (or SFTP if you are cool) and waiting for Apache to notice they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built, broken site.
I recommend only upgrading your live, running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers you have running with the new version of the site. Check that the new version does work, and then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
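A rough sketch of the one-node-at-a-time upgrade (the host names, paths and deploy.sh script are hypothetical; it assumes the load balancer drains nodes whose health check starts failing):

for host in web1 web2 web3; do
  ssh "$host" 'rm -f /var/www/app/healthcheck'   # fail the LB health check so the node is drained
  sleep 30                                       # let in-flight requests finish
  ssh "$host" '/var/www/app/deploy.sh'           # install the new version
  ssh "$host" 'touch /var/www/app/healthcheck'   # health check passes again; node back in rotation
done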
Step 4 in your procedure should be:
timestamp=$(date +%Y%m%d%H%M%S)   # a fresh directory per deploy; nothing is overwritten in place
git archive --remote $yourgithubrepo --prefix=$timestamp/ HEAD | tar -xf -
stop-server                       # however you stop your node app
ln -sfn $timestamp current        # atomically repoint the "current" symlink at the new directory
start-server                      # start it again from the new code
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned; once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again, server by server.
The race condition still exists, but because the application code is the same between the two versions, the issue is masked: no matter which server the CDN hits, it gets the "latest" code.
