I'm having some trouble visualizing how I should handle static assets with my EB Node.js app. It only deploys what's committed in the git repo when you do eb deploy (correct?), but I don't want to commit all our static files. Currently we upload to S3 and the app references those files (the.s3.url.com/ourbucket/built.js), but now that we are setting up dev, staging, and prod environments we can't reference a single built.js, since there can be up to 3 versions of it.
Also, there's a window between when the files are uploaded and when the app finishes rolling out, during which the static assets don't match both app versions running on the servers (e.g. built.js works with app version 0.0.2, but server one is still running 0.0.1 while server two is already on 0.0.2).
How do I keep track of these mismatches, or is there a way to just deploy static assets to the EB instance directly?
I recommend a deploy script that uploads the relevant assets to S3 and then performs the Elastic Beanstalk deploy. In that script, upload the assets to an S3 folder named after the environment, so you'll have
the.s3.url.com/ourbucket/production/
the.s3.url.com/ourbucket/staging/
the.s3.url.com/ourbucket/dev/
Then you have the issue of old assets during deploy - in general, you should probably be serving these assets through a CDN (I recommend CloudFront because it's so easy to integrate once you're already on AWS), and you should be worrying about cache invalidation during deploy anyway. One strategy for dealing with that is to assign an ID to each deploy (either the first 7 characters of the git SHA-1 or a timestamp) and put all assets in a new folder with that name, then reference that folder on your HTML pages. So let's say you go with a timestamp, and you deploy at 20150204-042501 (that's 4:25 and 1 second UTC on February 4, 2015): you'd upload your assets to the.s3.url.com/ourbucket/production/20150204-042501/. Your HTML would say
<script src="//the.s3.url.com/ourbucket/production/20150204-042501/built.js"></script>
That solves both the "during deploy" problem and cache invalidation.
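A minimal sketch of what that deploy script could look like, assuming the AWS CLI and EB CLI are installed; the bucket name, build folder, and the ASSET_VERSION variable are illustrative placeholders, not part of the original setup:

#!/usr/bin/env bash
set -euo pipefail

ENV_NAME=production                     # or staging / dev
DEPLOY_ID=$(date -u +%Y%m%d-%H%M%S)     # e.g. 20150204-042501
BUCKET=ourbucket

# Upload the built assets to a per-environment, per-deploy folder
aws s3 sync ./build "s3://$BUCKET/$ENV_NAME/$DEPLOY_ID/"

# One way to tell the app which folder its HTML should reference:
# expose the deploy ID as an environment variable, then deploy
eb setenv ASSET_VERSION="$DEPLOY_ID"
eb deploy

Your Node.js app can then build the asset URLs from process.env.ASSET_VERSION when rendering pages.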
Related
I am running a CI/CD pipeline on GitLab.
I am building a Next.js app and zipping the .next directory and package.json as an artifact so I can upload them to AWS S3.
Things are working as expected, except...
None of the files in the public directory are included in the build.
When I build locally, the images are saved within .next/cache/images as expected. On GitLab, they are not; there is only a webpack directory within the cache folder.
This is confirmed both in the GitLab job CLI output and in the stored artifact: there is no images directory inside the cache directory.
Any ideas??
The reason for this is that the public folder is not actually copied into the .next directory at build time. The images are stored and cached there at runtime.
I was unaware of this... I noticed it while developing the Next.js app locally, and it was confirmed in a Next.js GitHub issue thread I happened upon.
If anyone has a reference to this in the docs, please share. Perhaps I'm just misreading them, but I did not know I had to include these files myself when preparing a zip for CI/CD deployment.
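For anyone who lands on the same problem: since next build does not copy public into .next, the fix in a setup like mine is to add that folder to the artifact explicitly. A sketch of the GitLab CI job, with illustrative job name, image, and paths:

build:
  stage: build
  image: node:18
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - .next/
      - public/            # not copied into .next at build time, so ship it explicitly
      - package.json
      - package-lock.json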
I'm trying to test different things on Azure, and I have tried to set up a static web app.
The GitHub repo contains practically nothing.
These files are pretty much empty. Whenever I push something to the repo, the build triggers, but fails with the error Oryx built the app folder but was unable to determine the location of the app artifacts. Please specify the app artifact location through the Github workflow file.
I'm uncertain what it means by artifacts. This tiny web app doesn't have any real build step or anything.
In the azure-static-web-apps-xxxxxx-xxxx-xxxxxxx.yml file, app_artifact_location is " ".
I'm not really sure which artifacts it's looking for, or what they really are.
You may want to investigate this Quickstart: Building your first static web app.
To add to this, app_artifact_location is the location of the build output directory relative to app_location. For example, if your application source code is located at /app and the build script outputs files to the /app/build folder, then set build as the app_artifact_location value.
The Build and Deploy step builds and deploys to your Azure Static Web Apps instance. Under its with section, you can customize these values for your deployment.
Reference: https://learn.microsoft.com/en-gb/azure/static-web-apps/github-actions-workflow
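For illustration, the relevant step of the generated workflow might look like the sketch below; the exact action version, secret name, and paths vary per project, and for a plain HTML site with no build step app_artifact_location can simply stay empty:

- name: Build And Deploy
  uses: Azure/static-web-apps-deploy@v0.0.1-preview
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    action: "upload"
    app_location: "/"             # where the app source lives
    api_location: "api"           # optional Azure Functions folder
    app_artifact_location: ""     # build output directory, relative to app_location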
Based on https://learn.microsoft.com/en-us/azure/static-web-apps/overview
Python-based frontends are not supported: clients are served statically and are browser-based apps. The build system (Oryx) is likely unable to determine the framework and is generating the error message you are seeing.
The default web page that is expected is index.html. It looks like you may have misspelled it as index.hmtl.
I'm developing a Node.js application using Next.js and Express, and I'm using my own GitLab instance to manage the git repository.
But the application should not be deployed to a web server at the end; instead, I need to create a decentralized production application. To make it a bit clearer:
Developing the application locally
Push the application to my remote server
My customers should be able to get the production app code from my remote server
Customers will run the application in their local environment and should be able to pull new versions from the remote server
So the application itself won't run on my remote server, but on the customers' local servers.
Normally I would use my CI to test and build the application (which is done by npm run build), then build a Docker image that I use to run the application on my server. But normally all of that happens on the same server.
In this case I need to build the application and serve it to the customers, i.e. the customers should be able to pull the production code. How can this be done?
Maybe I can't see the wood for the trees... that's why I'm asking for help/hints.
There are a number of ways you can do this and a number of tools you can use as well. You probably want a pipeline similar to the following.
Code is developed locally, committed, and pushed to the self-hosted GitLab.
GitLab CI (or any other CI you have configured) will then run against your code.
The final step of the CI is to create a "bundle" of your application. This is probably a .zip or similar, and it will be pushed to a remote storage location. You can also ensure that this happens only when pushing to specific branches (such as master).
You can use a number of things as your remote storage location, such as an AWS S3 bucket, or something more complex such as Nexus (there are many free alternatives).
You would then want to give your customers access to either this storage location (if you're using something like S3, or Digital Ocean Block Storage, etc), or access to your distribution repository (such as Nexus).
You should be able to generate some sort of credential (an SSH key or API token) that you put on your GitLab CI server and use to publish to these places. It should then be a simple case of making an HTTP call to upload the file to the relevant destination. This would typically run only when everything has been successful, and only for specific branches: for example, if all your tests pass and you're on the master branch, zip up all your code and push the new zip file to the AWS S3 bucket your customers have access to.
For further ideas, you could make your storage/distribution location an FTP server, or a local network drive, depending on your distribution needs. If you're just dealing with Docker for your customers, then I'd suggest building a Docker image and self-hosting a Docker registry. Push to that registry after you've built the image, and that would be the end of your CI run.
As a side note, if your customers are using Docker you could create a Docker image and either push it to a registry or export it as a .tar and upload it to a file storage location (S3, for example). This would make things simple for your customers and ensure you control the image creation step (if that's something you want to manage).
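A rough sketch of that flow (the registry hostname, image name, tag, and port are placeholders):

# CI side: build, tag, and push the image to your self-hosted registry
docker build -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0

# Customer side: pull the new version and run it locally
docker pull registry.example.com/myapp:1.0.0
docker run -d -p 3000:3000 registry.example.com/myapp:1.0.0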
The GitLab CI docs might help you with the specifics of uploading artifacts to various locations.
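For example, the publishing job of a .gitlab-ci.yml could look something like this sketch; the bucket name and bundle paths are placeholders, AWS credentials are assumed to be configured as CI/CD variables, and the runner image is assumed to provide node, zip, and the AWS CLI:

publish:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # only publish from master
  script:
    - npm ci
    - npm run build
    # Bundle the production files
    - zip -r "app-$CI_COMMIT_SHORT_SHA.zip" build/ package.json
    # Push the bundle to the storage location your customers can pull from
    - aws s3 cp "app-$CI_COMMIT_SHORT_SHA.zip" s3://your-distribution-bucket/releases/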
Is it possible to set up continuous delivery for a simple HTML page in under 1 hour?
Suppose I have a hello-world index.html page served by npm serve, a Dockerfile to build the image, and an image.sh script that runs docker build. This is in a GitHub repo.
I want to be able to check-in a change to the index.html file and see it on my website immediately.
Can this be done in under 1 hour? Either AWS or Google Cloud. What are the steps?
To answer your question: is it possible in 1 hour? Yes.
Using only AWS,
Services to be used:
AWS CodePipeline - triggered by GitHub webhooks; takes each new commit and sends the source files to AWS CodeBuild
AWS CodeBuild - takes the source files from CodePipeline, builds your application, and deploys the build to S3, Heroku, Elastic Beanstalk, or any other service you desire
The Steps
Create an AWS CodePipeline
Attach your source (GitHub) to your pipeline (each commit will trigger the pipeline, which takes the new commit as its source and builds it in CodeBuild)
Using your custom Docker build environment, CodeBuild uses a yml file (buildspec.yml) to specify the steps of your build process. Use it to build the newly committed source files and deploy your app(s) with the AWS CLI; see the sketch below.
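A minimal buildspec.yml along those lines might look like this sketch; the bucket name and build output folder are placeholders, and the S3 sync stands in for whichever deploy target you choose:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npm run build
  post_build:
    commands:
      # Sync the build output to the bucket serving your site
      - aws s3 sync ./build s3://your-site-bucket --delete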
Good Luck.
I think I would start by creating a web-enabled script acting as a GitHub commit hook, probably in Node on an AWS instance, which would then trigger the whole process of tearing down (deleting) the old AWS instance and provisioning a new one with the contents of your repository.
The exact method will be largely dependent on how your whole stack is set up.
Context
A web application project has a /build (or /dist) folder with front-end files generated during the build (by Gulp). This folder is not under source control (see, for example: React.js Starter Kit)
The server-side code doesn't require a bundling or compilation step, so the /src folder from your project can be deployed as it is (these source files are used to run the Node.js or ASP.NET vNext server)
Web application is deployed via Git (see Git-based deployment options in Heroku or Windows Azure as an example)
Questions
Is it better to build (bundle and minify) front-end files before or after deployment?
If before, you may end up having a separate repository (or branch) with the /build folder under source control alongside the rest of the project files. This repo is used solely for deployment purposes.
If after, the deployment time may increase: time is needed to download the additional npm modules used in the build process, and the server's CPU may spike to 100% during the build, potentially harming your web application's responsiveness.
Is it better to build front-end files on the remote server before or after running the KuduSync command?
If you deploy your web application to Windows Azure with Kudu, should the deployment script copy only the contents of the /build folder (with public front-end files like .js, .html, .css) to /wwwroot, as opposed to copying all the project files (server-side source code and front-end bundles), which it does by default?
By default, Azure's deployment script copies all the project files from the D:\home\site\repository folder to the D:\home\site\wwwroot folder, and then the Node.js app is started from there. Is that a necessary step? Why not start the Node.js (or ASP.NET vNext) app from the D:\home\site\repository folder? And if the files indeed should be copied to a separate folder, why are the source files placed in wwwroot? Maybe it would be better to copy them to another folder outside wwwroot.
I am not familiar with either Azure or Heroku, so I can't give any ideas about those specific deployment options.
In my setup (4 dedicated servers, 2 of them solely for serving static files), the option of building the bundled and minified JavaScript files (for the front-end) and adding all those files to the main repository has several advantages:
You only need to run the build once (either on your dev machine or on a staging server, whichever way you want). This is particularly helpful when you run multiple static servers, since you don't have to run the build command on each of them. One might argue that you could use something like GlusterFS to synchronise files from one static server to all the others so that the build only needs to run once; however, that kind of setup is a whole different story.
It makes your deployment process simple: just pull the new code and restart the server(s) if necessary (assuming you have some mechanism to bump the static file version so that all your clients receive the latest version). See the sketch after this list.
It avoids unnecessary dependencies on your production servers. This might sound weird to some people, but I just don't want to install any extra libraries on my production servers unless they are absolutely necessary. With the build process run locally on my dev machine, my production servers have only what they need to run the production code and nothing else.
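A sketch of how simple that per-server deploy can be (the process manager and app name are assumptions for illustration, not part of my actual setup):

#!/usr/bin/env bash
set -e

# The bundled/minified assets are already committed, so one pull brings everything in
git pull origin master

# Restart only if server-side code changed; pm2 is just one example of a process manager
pm2 restart app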
However, this approach also has some disadvantages:
When more than one developer on your team (accidentally) runs the build process and commits the code, you will get a crazy list of conflicts. However, this can be solved by simply running the build again after you merge the changes from the others. It is more a matter of workflow.
Your repository will be bigger. I personally don't think this is a big issue, considering the few extra MB of bundled and minified files. If your front-end JavaScript is big enough for that to be a problem, then it's another story.