Firebase Hosting makes a new version of your website available the instant you deploy it, and it can just as instantly roll back to a previous version. Usually a CDN needs several minutes before changes propagate, and the same goes for .htaccess files or similar redirect mechanisms, which makes me think they have an extra dynamic layer on top of the CDN. If they do, how might they handle the DNS side of it?
Your custom domain uses a CNAME to reach Firebase's mysite.web.app, and Firebase probably uses another CNAME to forward to a domain at Fastly (Firebase's CDN provider, judging by a network lookup), e.g. firebase-customerid-mysite.fastlycdn.net.
What I can't quite figure out is how they handle the instant version changes. They must use different folders on the CDN, but I don't think they use a different subdomain for each version, as that would require a new certificate and so on, and would not be so fast. So how do you redirect a whole domain to a subfolder? You could do it by changing an .htaccess file, but that would also take several minutes. How do you think they do it?
Thanks in advance!
As far as I can tell, Firebase's version change works much like git: no, they don't create a different directory to deploy your latest version. They save the previous version in a version-control directory (.firebase or something similar) and the subdomain doesn't change; the latest version simply gets deployed. We see the changes immediately thanks to no-cache, which generally requires the response to be revalidated with the origin server before each reuse. So when the browser asks Fastly, Fastly checks with the Firebase server whether the resource has changed, and if it has, the new version of that resource is served.
References:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
https://firebase.google.com/docs/remote-config/templates
https://docs.fastly.com/en/guides/how-fastlys-cdn-service-works
To achieve a similar result, you can use git for version control (real-time changes) and the no-cache header (for resource revalidation). You can use just about any CDN, as nearly all of them validate resources this way.
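If you want to see this revalidation behaviour for yourself, inspect the response headers the edge returns. A rough check along these lines (the hostname is a placeholder, and the exact Fastly debug headers may vary):
curl -sI https://mysite.web.app/ | grep -iE 'cache-control|etag|age|x-served-by|x-cache'
# a no-cache / must-revalidate value here means the edge has to check back with the origin,
# so a new deploy shows up on the next request instead of waiting for a TTL to expire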
I've recently copied a Kentico 10 environment to a new development environment, and upgraded it to Kentico 12. I believe the problem occurred after the upgrade, not after the copy (of the site and database). The new environment is also using a different URL than the original, and I have a valid license key for the new URL.
The site functions correctly, except for images. Unfortunately, all media library images that are loaded using GetAzureFile.aspx are failing to load. Attempting to access one causes a redirect to
/CMSMessages/accessdenied.aspx?resstring=dialogs.badhashtext&hash=...
I assumed the problem might be a different CMSHashStringSalt in the new environment's web.config, but it is the same as that of the original environment.
Per this documentation, I have attempted to re-save some of the images to see if they would begin loading but that did not help.
Does anyone have any suggestions on how I might tackle this problem?
Thanks
I don't believe GetAzureFile() is something you should be using "publicly"; I believe it's meant to be an internal handler. You should be using the permanent URLs for media libraries hooked up to external storage.
Also, your provider code may need to be updated. Check out the documentation.
I have been running a GitLab instance, and today I was trying to set up the Pages function. I followed the GitLab guides and the Google Cloud docs, but it seems my config file got corrupted or broken (by me, of course), and even SSH was down (directly in the Google console) until I rebooted the VM. Now I can see from the shell that the instance is running, but I can't get it back online. I have three options here: 1) wait a day or so to see whether this is a domain/DNS issue, 2) keep trying to recover the GitLab instance, which only had 3 users and no projects, or 3) make a fresh one and try to set everything up properly from the start. The only thing bothering me is losing the 2 users who came to my project organically.
What can I do here? I'm trying to fix the config file, but at the same time I don't know whether it's a domain issue, because I had to change some DNS configuration to set up the subdomain. The only thing I really can't understand is how or why my shell went down for at least an hour after I changed the GitLab configuration. And by the way, are snapshots the right way to make backups with gcloud?
What can I do here?
Undo your changes; in other words, put things back the way they were before, when the system was working. To do that, you have to know exactly what you changed.
If you are not keeping your config file in version control, you should start doing so, as that will make it much easier to track and control your changes.
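For example, on an Omnibus install the main configuration lives in /etc/gitlab/gitlab.rb; a minimal sketch of putting it under version control (paths assume an Omnibus install, and note that /etc/gitlab also contains secrets, so keep any such repository private) could look like:
cd /etc/gitlab
sudo git init
sudo git add gitlab.rb
sudo git commit -m "baseline: last known working configuration"
# after each edit, review exactly what changed before applying it
sudo git diff gitlab.rb
sudo gitlab-ctl reconfigure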
I decided to answer my own question, since I now know how this problem occurred and it may occur for others.
Conditions:
You have a self-hosted GitLab instance.
You try to set up DNS for GitLab Pages, and the URL stops responding even though the GitLab instance is still running on the machine.
From this we can see that the problem was in the DNS setup.
In my case I had put the various DNS records in my DOMAIN provider's DNS settings. Instead, this DNS setup has to be done on the HOST/SERVER side.
To set up GitLab Pages DNS properly:
The wildcard record *.mydomain.com (type A) should live in the server-side DNS configuration, in my case Google Cloud DNS. Find out which DNS software or service your main server machine uses.
It's good practice to set up the domain and server without redirection and to keep the proper DNS records in the DOMAIN settings, so that your domain resolves the subdomains without any need for redirection there. Once you have a wildcard type A record, you can optionally add each subdomain as a CNAME (e.g. subdomain.mydomain.com.), or you can give a subdomain its own IP with a type A record.
In summary, when setting up GitLab Pages DNS, do not change your DOMAIN settings; change your SERVER DNS settings.
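As a rough illustration with Google Cloud DNS (the zone name, domain, and IP below are placeholders), the wildcard A record can be added like this:
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add --zone=my-zone --name='*.mydomain.com.' --type=A --ttl=300 '203.0.113.10'
gcloud dns record-sets transaction execute --zone=my-zone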
Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route set up in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
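In shell terms, the datestamp step is roughly the following (simplified; the paths and CDN hostname are illustrative):
# simplified version of the datestamp step above
STAMP=$(date +%Y%m%d%H%M%S)
mkdir -p public/scripts/$STAMP
cp build/app.min.js public/scripts/$STAMP/app.min.js
# HTML references are then rewritten to point at the CDN,
# e.g. https://cdn.example.com/scripts/$STAMP/app.min.js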
This works fine most of the time. But where it breaks down is when one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades (unless the language supports it, and even then think twice).
Pulling code with git pull and then waiting for the app to notice that files have changed sounds a lot like the '90s: uploading PHP files to an Apache web server over FTP (or SFTP if you were cool) and waiting for Apache to notice they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built, broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to hit both the old and the new site at the same time, depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers you have running, with the new version of the site. Check that the new version works, then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
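A rolling upgrade along the "remove them one at a time" lines might look roughly like this; the lb command is a stand-in for whatever API your load balancer actually exposes, and the paths, service name, and health-check endpoint are assumptions:
for host in web1 web2 web3; do
  lb drain "$host"        # stop routing new traffic to this node (hypothetical LB command)
  ssh "$host" 'cd /srv/app && git pull'
  ssh "$host" 'sudo service app restart'   # or however the node process is supervised
  curl -fsS "http://$host/healthz" > /dev/null || exit 1   # assumed health-check endpoint
  lb undrain "$host"      # put the node back into rotation
done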
Step 4 in your procedure should be:
# unpack the new build into a fresh, timestamped directory (never modify the live one in place)
git archive --remote $yourgithubrepo --prefix=$timestamp/ | tar -xf -
stop-server                  # placeholder for however you stop the node process
ln -sfn $timestamp current   # -n stops ln from descending into the existing "current" symlink
start-server                 # placeholder for however you start it again
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists, but because the application code is identical between the two versions, it masks the issue: no matter which server the CDN hits, it gets the "latest" code.
I have a web application project in VS2012 which I'm publishing using a "Web Deploy Package". I want this package to include app-pool settings, specifically creating an IIS app-pool and assigning the newly created application to it.
I'm familiar with the option "Include application pool settings used by this Web project" available when the project is configured to use an IIS instance (not IIS Express), but IIS configuration is not part of the project file, and thus not source controlled. What happens when somebody builds a deployment package on a machine that hasn't had IIS meticulously configured? Not ideal.
How else, then, can I go about getting app-pool settings into my web deploy package? I understand that the appPoolConfig provider is IIS7+ only; I'm fine with that limitation. I've banged my head against this issue in the past and never found a solution. Eighteen months later we've got a new Visual Studio version and a new web-publishing pipeline; are there new options to address this? Or maybe something I missed when I first tackled the problem?
Edit
OK, I'm seeing the following as options:
Configure my project to sync settings from an IIS instance. As mentioned, I'm not a fan of this given that it puts settings outside of the project, meaning the environment has to be meticulously configured to build + publish. Plus it drags along other IIS settings I don't want included.
Inject something into the web-publishing-pipeline (WPP) to modify the archive.xml. I've toyed with this in the past and had limited success. One problem is the pipeline isn't exactly co-operative with working directly on the archive.xml file, another problem is some of the more cryptic attributes involved, like MSDeploy.MSDeployProviderOptions which appears to have some Base64 encoded binary? No idea what to put in there.
Find an existing "provider" that can do what I want. I might be out of luck here, the appPoolConfig provider only seems to want to read / write IIS, not, say, an XML file of settings. Does anybody know otherwise?
Write my own "provider" to produce manifest output entries. I'm not sure, is it possible to write a custom provider that writes to a manifest using the name of an existing provider? As in, MyCustomPoolProvider writes appPoolConfig sections into a manifest? This sounds like a potentially painful exercise that may or may not work. Would I still need to figure out the encoding of whatever is going into MSDeploy.MSDeployProviderOptions?
I get the feeling that the fundamental obstacle with Web Deploy for what I'm trying to accomplish, is how strictly it leans on "providers". The pre-existing providers are largely designed for IIS synchronisation, not primary development and publication. It so happens that some of these providers can be relatively easily hooked into via MSBuild, but the majority insist on pulling data from IIS, and that's that.
You are correct in your understanding of the appPoolConfig provider, in that it can only sync between app pools and can't be given the configuration directly. What you could potentially do is keep a copy of the app pool in question in package form (i.e. msdeploy -verb:sync -source:appPoolConfig=PoolName -dest:package=apppool.zip) and attempt to hijack the pipeline so that the MSDeploy call adds the application content into the package, leaving the existing content there.
Alternatively, you could always keep the packages separate and deploy them with different calls to MSDeploy.
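For example (the server address, credentials, and package names below are placeholders, and both packages are assumed to have been created earlier with -dest:package=...):
rem push the app pool package first, then the application content, as two separate syncs
msdeploy -verb:sync -source:package=apppool.zip -dest:auto,computerName=https://webserver:8172/msdeploy.axd,userName=deployUser,password=secret,authType=Basic
msdeploy -verb:sync -source:package=site.zip -dest:auto,computerName=https://webserver:8172/msdeploy.axd,userName=deployUser,password=secret,authType=Basic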
FYI, MSDeploy.MSDeployProviderOptions is simply an encoded version of the parameters supplied to the provider when it was packaged. For example, -source:dirPath=c:\,ignoreErrors=0x10293847 -dest:package=package.zip would package the ignoreErrors value.
I am aware of WebLogic templates, but out of curiosity I wanted to know: is it OK to copy a domain in WebLogic in situations where we need the same configuration? I have already done this and have been successful in testing my application.
You can get away with doing this, but there are a couple of more reliable (and scriptable) ways to migrate the same configuration across a development team, or to create new deployment environments.
The domain template builder lets you build your own custom domain template from an existing domain: http://download.oracle.com/docs/cd/E13179_01/common/docs92/tempbuild/starttb.html
There's a couple of ways to get it done with WLST, as well:
You can use configToScript to spit out an entire WLST script (and properties file) to recreate the exact configuration you've got, or...
You can use readDomain and writeDomain in offline mode to recreate an existing configuration in a new domain:
readDomain: http://download.oracle.com/docs/cd/E13222_01/wls/docs92/config_scripting/reference.html#wp1003638
writeDomain: http://download.oracle.com/docs/cd/E13222_01/wls/docs92/config_scripting/reference.html#wp1003688
It's okay to copy domains over, and it worked exceptionally well prior to WebLogic 9.2. However, some weird bugs pop up in the versions that use the portal for the console.
Also, after copying the files over you will want to make sure that all listen addresses and ports have been modified accordingly, so that your local managed server doesn't attempt to connect to the production administration server on startup.
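A quick way to spot anything you missed (the domain path is a placeholder) is to scan the copied domain's config.xml for listen addresses and ports before starting any servers:
# flag any address or port that still points at the original environment
grep -nE '<listen-address>|<listen-port>' /path/to/copied_domain/config/config.xml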