Ignore updates to files in image only after building with Skaffold for the first time?

I have a deployment that includes an Angular project. To test my Angular project, I use ng serve, which hosts its own fast-updating server I can connect to. During development I save and edit these files very frequently. Because this image is used in my Skaffold deployment, Skaffold is constantly rebuilding and compiling my Angular project.
For times that I'm not actively updating the Angular side of things, I'd like it to just build the most recent files and then never check for updates on them again until I run skaffold dev again.
Currently the skaffold.yaml looks like this for the Angular image:
- image: angular
  context: ../Images/angular
  custom:
    dependencies:
      ignore: ['../Images/angular']
This successfully runs the angular image once and then never checks for updates to it again, but if I make changes to it, stop Skaffold, then run skaffold dev again, it doesn't rebuild the image.
I understand why it's doing this; it makes sense and is expected. But I'm wondering if there's a better way to handle building images that change rapidly while they are being developed, such as Vue/Angular/React projects. Or maybe there's a better way to mark files as ignored only during a skaffold dev session, rather than ignoring all changes even between runs.
The main reason I'd like to stop the constant building is to save laptop battery.

Skaffold supports other trigger modes including a manual trigger mode:
skaffold dev --trigger=manual
Skaffold will then wait for you to hit enter to initiate the next dev rebuild.
An alternative is to use the Skaffold Control API to toggle the auto-building. For example, you can use the REST API to turn off the auto-build:
curl -X PUT -d '{"enabled":false}' localhost:50052/v1/build/auto_execute
You can then re-enable it as you desire. You can use the Control API to toggle auto-deploy instead if you want the images to be built but not deployed (e.g., to see build-time errors), and you can also manually trigger builds, deploys, and syncs.
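For example, a sketch of toggling auto-deploy, assuming the deploy endpoint mirrors the documented build endpoint shown above (check your Skaffold version's API docs for the exact path):
curl -X PUT -d '{"enabled":false}' localhost:50052/v1/deploy/auto_execute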
You can find out the control port by running skaffold dev -v info; the gRPC port is normally 50051 and the REST port is normally 50052:
$ skaffold dev -v info
INFO[0000] starting gRPC server on port 50051
INFO[0000] starting gRPC HTTP server on port 50052
INFO[0000] Skaffold &{Version:v1.23.0 ConfigVersion:skaffold/v2beta15 GitVersion: GitCommit:e8f3c652112c338e75e03497bc8ab09b9081142d BuildDate:2021-04-28T00:55:12Z GoVersion:go1.14.14 Compiler:gc Platform:darwin/amd64}
...
Or you can explicitly configure a port:
skaffold dev --rpc-http-port=50099
For older versions of Skaffold, you may need to explicitly turn on the RPC mode with --enable-rpc.
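On such a version, the flags can be combined, for example (a sketch; the port number is arbitrary):
skaffold dev --enable-rpc --rpc-http-port=50052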

Related

Nuxt3 + Vite - devserver restarts when editing any file

I have a Nuxt3 (nuxt@3.0.0-rc.8) app using Vite (@nuxt/vite-builder@3.0.0-rc.8). I am developing from within a docker .devcontainer.
When I run yarn dev the devserver spins up and listens on port 3000 and everything works great. However when I change any file (e.g. a .vue component) I see the change hot-module-reloaded into my browser instantly, which is the behaviour I want.
BUT the devserver also sees the change and restarts itself.
I have a server route setup:
./server
  routes/
    my-route.ts
which does an expensive operation when first run, and caches the result for subsequent requests. Since the devserver restarts on every change, this expensive operation happens on every change, making dev quite slow.
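For illustration, the route is roughly of this shape (a minimal sketch; the cache variable and the doExpensiveOperation helper are hypothetical, not from the question):
// server/routes/my-route.ts (sketch only)
let cached: unknown = null
export default defineEventHandler(async () => {
  if (!cached) {
    // hypothetical helper; runs once per server start, then the result is reused
    cached = await doExpensiveOperation()
  }
  return cached
})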
So every change to a clientside .vue file results in the following server log:
✔ Vite server built in 397ms
Starting expensive operation from my-route...
...done in 123 second
How can I make it so that only changes inside the ./server/ folder, or to nuxt.config.ts, or to plugins/*.server.ts etc. cause the devserver to restart, while other changes to client-side components leave the server running and use HMR?

How to use a docker image to generate static files to serve from nginx image?

I'm either missing something really obvious or I'm approaching this totally the wrong way, either way I could use some fresh insights.
I have the following docker images (simplified) that I link together using docker-compose:
frontend (a Vue.js app)
backend (Django app)
nginx
postgres
In development, I don't use nginx but instead the Vue.js app runs as a watcher with yarn serve and Django uses manage.py runserver.
What I would like to do for production (in CI/CD):
build and push backend image
build and push nginx image
build the frontend image with yarn build command
get the generated files in the nginx container (through a volume?)
deploy the new images
The problem is: if I put yarn build as the CMD in the Dockerfile, the compilation happens when the container is started, and I want it to be done in the build step in CI/CD.
But if I put RUN yarn build in the image, what do I put as CMD? And how do I get the generated static files to nginx?
The solutions I could find use multistage builds for the frontend that have an nginx image as the last step, combining the two. But I need the nginx image to be independent of the frontend image, so that doesn't work for me.
I feel like this is a problem that has been solved by many, yet I cannot find an example. Suggestions are much appreciated!
Create a volume in your docker-compose.yml file and mount the same volume to both your frontend container (at the path where the built files are, such as the dist folder) and your nginx container (at your web root path). This way both containers have access to the same volume.
Also, keep your yarn build as a RUN command.
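A minimal sketch of that compose setup (service names, build contexts, and the dist path are assumptions to adapt to your project):
version: "3.8"
services:
  frontend:
    build: ./frontend              # Dockerfile that ends with RUN yarn build
    volumes:
      - frontend-dist:/app/dist    # path where the built files land (assumed); an empty named volume is populated from the image on first use
  nginx:
    image: nginx
    volumes:
      - frontend-dist:/usr/share/nginx/html   # nginx web root
    ports:
      - "80:80"
volumes:
  frontend-dist: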
EDIT:
A container needs to run a program or command in order to keep running, so that it can be started, stopped, etc. That is by design.
If you are not planning to serve from the frontend container using a command, then you should either remove it as a service from docker-compose.yml (since it isn't one) and add it as a build stage in your nginx Dockerfile, or use some kind of command that runs indefinitely in your frontend container, for example tail -f index.html. The first solution is the better practice.
In your nginx Dockerfile, add the frontend build as the first build stage:
FROM node AS frontend-build
WORKDIR /app
# copy sources and build the static files (exact steps assumed; adjust to your project)
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

FROM nginx
COPY --from=frontend-build /app/dist /usr/share/nginx/html
...
CMD ["nginx", "-g", "daemon off;"]

Docker never runs on Azure - Waiting for response to warmup request for container

I'm trying to deploy a dockerized app on Azure's App Service. I enter all the fields correctly and my image gets pulled, but I keep getting this error until something times out.
Waiting for response to warmup request for container -<container name > Elapsed time = 154.673506 sec
I did set WEBSITE_PORT 8080 (used by my app)
Here is the dockerfile
FROM google/dart
WORKDIR /app
ADD pubspec.* /app/
RUN pub get --no-precompile
ADD . /app/
RUN pub get --offline --no-precompile
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["pub", "run", "aqueduct:aqueduct", "serve", "--port", "8080"]
It was working fine. I had it working last night. I wanted to refresh my image so I restarted. Nothing worked. After multiple hours I deleted my app and started again... no luck. Any ideas?
EDIT 1:
Tried changing port to 80, no luck (This was the port I was using at first when it was working fine)
RESOLVED (Partially)
I changed everything to port 8000. I realized that Linux and Windows did not like having something non-system listening on 80. Therefore I changed everything to 8000 and set the system properties on Azure {WEBSITE_PORT, 8000}. It now seems to work fine. I don't know if this is an official fix... but it does warm up after 30-ish seconds.
You can also try setting WEBSITES_CONTAINER_START_TIME_LIMIT to 1800
Depending on which App Service plan you have, if there is an 'Always on' option, try enabling it in the configuration of your app in the Azure portal.
If you are using a Premium App Service plan, you can set the number of pre-warmed instances. Try setting that to 2-3 and see if it gets any better.
I had the same experience as you, but my container was really big since it contained an ML model, so in the end I switched to AKS because it performed better.
What actually worked for me was a combination of the answers above by Ethiene and kgalic: setting all ports to 8000 in the Dockerfile
EXPOSE 8000
CMD gunicorn -w 4 -b :8000 app:app
then in the Azure configuration, under Application settings, adding
"WEBSITES_PORT" : "8000"
and in the Azure configuration, under General settings, setting
"Always on" : "on"
App Service - Docker container deploy
In my case, this slowdown was caused by automatic port detection.
Setting the WEBSITES_PORT in the application setting solved the problem.
WEBSITES_PORT=8000
Note that if you have more slots (production/staging), you have to set this env variable in the other slots too.
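If you prefer the CLI, one way to set it for the main app and for a slot (resource group, app, and slot names are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_PORT=8000
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --slot <slot-name> --settings WEBSITES_PORT=8000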
From: Azure App Service on Linux FAQ - Custom Containers
We have automatic port detection. You can also specify an app setting called WEBSITES_PORT and give it the value of the expected port number. Previously, the platform used the PORT app setting. We are planning to deprecate this app setting and to use WEBSITES_PORT exclusively.
I had this same problem with a Node.js application. I moved the npm build step into the creation of the Docker image, so the dist folder is part of the image itself rather than being generated by the Docker CMD on the initial execution of the app. Maybe the RAM and CPU weren't enough for the npm build to happen at initial runtime.

JHipster + Angular + MongoDB + Docker: beginner question

I would like to have some guidance about what is supposed to be the best development workflow with JHipster.
What I did expect:
With one docker-compose command, I could up and run everything the project needs (in this case, MongoDB, Kafka, backend, etc.);
When modifying the front-end, saving the modified files would fire livesync (ng serve --watch?).
What I did find:
The one-command option that I found (docker-compose -f src/main/docker/app.yml up -d), which I guess depends on running ./mvnw package -Pprod verify jib:dockerBuild first, does not livesync and does not seem compatible with running the front-end individually with npm run start - an application started this way points to different backend module ports (?).
I have experience with Angular and MongoDB (and a little with Docker), but I'm super new to JHipster and am trying to understand what I am doing wrong.
Thanks in advance!
For development workflow, you should start the dependencies individually. The app.yml will start the app's Docker image with the prod profile, useful for testing locally before deploying.
Start Containers for Mongo and Kafka
docker-compose -f src/main/docker/mongodb.yml up -d
docker-compose -f src/main/docker/kafka.yml up -d
Start the backend
./mvnw
Start frontend live-reload
npm start
If Docker is not accessible on localhost, you may need to configure application-dev.yml to point to the Docker IP.
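For example (the file path and property keys are assumptions; match them to what JHipster generated for your app, and replace the IP with your Docker host's):
# src/main/resources/config/application-dev.yml
spring:
  data:
    mongodb:
      uri: mongodb://192.168.99.100:27017
      database: yourapp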

Creating (2) dockers for a react app w/ a node front end. How do I adjust the proxy settings to prevent breaking the app if it's run natively?

I have a client/server React + Node.js app. The front end communicates with the API via the proxy in package.json.
"proxy": "http://localhost:5000/"
I can get both the client and API to come up by running them in two separate docker containers via docker-compose. This allows an alias to be used in place of localhost:
"proxy": "http://server:5000/"
That fixes Docker, but breaks the app if it is run natively outside of Docker: it cannot resolve server to localhost (or an IP).
Is there a way for the app to detect if it's being run in a docker and use another proxy? Or a way for it to fail-over to a second proxy if the first one times out?
If you are running the webpack build in your docker container you can provide the proxy url by passing in an environment variable from docker to webpack using the -e flag:
docker run -e "PROXY_URL=http://server:5000/"
Then you can provide PROXY_URL to React using webpack's DefinePlugin:
plugins: [
  new webpack.DefinePlugin({
    PROXY_URL: JSON.stringify(process.env.PROXY_URL)
  })
]
Then you can just read PROXY_URL as a variable inside your app.
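For example, a sketch of reading it in client code (the api/status path is made up for illustration):
// PROXY_URL is replaced at build time by DefinePlugin with the URL string from the environment
fetch(`${PROXY_URL}api/status`)
  .then((res) => res.json())
  .then(console.log);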
