Deploying Strapi V4 Serverless via Docker ECR container - node.js

I've been following this blog post:
But with a slight twist: following it as written hits AWS Lambda's 250 MB deployment-package limit (no idea how the original author managed without hitting it).
So my alternative was to deploy via ECR/Docker instead.
My Docker image builds and deploys through Serverless fine. However, when I try to access the application on a route, e.g. /dev/api/users, my Serverless log keeps throwing this error:
INFO Cold starting Strapi
/bin/sh: hostname: command not found
[2022-10-03 17:43:49.672] debug: ⛔️ Server wasn't able to start properly.
[2022-10-03 17:43:49.674] error: The public folder (/var/task/public) doesn't exist or is not accessible. Please make sure it exists.
Error: The public folder (/var/task/public) doesn't exist or is not accessible. Please make sure it exists.
at module.exports (/var/task/node_modules/@strapi/strapi/lib/core/bootstrap.js:29:11)
at async Strapi.register (/var/task/node_modules/@strapi/strapi/lib/Strapi.js:380:5)
at async Strapi.load (/var/task/node_modules/@strapi/strapi/lib/Strapi.js:474:5)
at async startStrapi (/var/task/app.js:16:7)
at async Runtime.module.exports.strapiHandler [as handler] (/var/task/app.js:36:5)
My Dockerfile:
FROM amazon/aws-lambda-nodejs:16
WORKDIR /var/task
RUN npm i python3
COPY src /var/task
COPY build /var/task
COPY config /var/task
COPY database /var/task
COPY public /var/task
COPY package*.json /var/task
COPY .env.example /var/task
RUN npm install
RUN chmod -R 777 /var/task
CMD [ "app.strapiHandler" ]
Handler function in app.js:
// imports at the top of app.js; serverless here is the serverless-http
// wrapper, which matches the handler(event, context) usage below
const serverless = require("serverless-http");
const Strapi = require("@strapi/strapi");

module.exports.strapiHandler = async (event, context) => {
  let workingDir = process.cwd();
  if (process.env.LAMBDA_TASK_ROOT && process.env.IS_OFFLINE !== "true") {
    workingDir = process.env.LAMBDA_TASK_ROOT;
  }
  if (!global.strapi) {
    console.info("Cold starting Strapi");
    Strapi({ dir: workingDir }); // the factory assigns the instance to global.strapi
  }
  if (!global.strapi.isLoaded) {
    await startStrapi(global.strapi); // local helper, sketched below
  }
  const handler = serverless(global.strapi.server.app);
  return handler(event, context);
};
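For completeness, startStrapi is a small local helper that isn't shown above. Going only by the stack trace (it ends up in Strapi.load), a minimal sketch of what it does:
async function startStrapi(strapi) {
  // strapi.load() registers and bootstraps the app — these are the
  // Strapi.register / Strapi.load frames visible in the trace above —
  // and sets strapi.isLoaded, which the handler checks
  await strapi.load();
}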
I am copying my public directory to the /var/task folder, but the error persists. I feel like I've hit a wall, and it's business critical that I get this solved.
Any insight would be appreciated.
NOTE: the chmod -R 777 is temporary while I try to fix this; I figured it may have been a permissions issue.
EDIT:
Managed to solve the above. If anyone wants to know how, please reach out; happy to help anyone on a similar path with Serverless and a headless CMS.
New issue:
error: Forbidden access
ForbiddenError: Forbidden access
at Object.verify (/var/task/node_modules/@strapi/plugin-users-permissions/server/strategies/users-permissions.js:94:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /var/task/node_modules/@strapi/strapi/lib/services/server/compose-endpoint.js:31:5
at async serve (/var/task/node_modules/koa-static/index.js:59:5)
at async returnBodyMiddleware (/var/task/node_modules/@strapi/strapi/lib/services/server/compose-endpoint.js:52:18)
at async policiesMiddleware (/var/task/node_modules/@strapi/strapi/lib/services/server/policy.js:24:5)
at async /var/task/node_modules/@strapi/strapi/lib/middlewares/body.js:58:9
at async /var/task/node_modules/@strapi/strapi/lib/middlewares/logger.js:25:5
at async /var/task/node_modules/@strapi/strapi/lib/middlewares/powered-by.js:16:5
at async cors (/var/task/node_modules/@koa/cors/index.js:61:32)
I'm a super admin, so it shouldn't be refusing me access to my own /api/users route.
EDIT 2:
Also managed to solve the Forbidden access issue.
My latest issue is that the Content-Type Builder plugin in the admin says it needs autoReload to be enabled. Does anyone know how to configure this so that it's enabled?

The Forbidden access issue was solved with the following Dockerfile structure. I figured out in the end that each directory needs to be copied explicitly to its matching folder inside the Docker container for the admin to work properly. Here is my Dockerfile; I also had to add a couple of npm install commands to get it running.
FROM amazon/aws-lambda-nodejs:16
WORKDIR /var/task
RUN npm i python3
# Copy each directory to its matching path inside the image (not into the
# task root) so Strapi can resolve src/, build/, config/, public/, etc.
COPY src /var/task/src
COPY build /var/task/build
COPY config /var/task/config
COPY database /var/task/database
COPY public /var/task/public
COPY package*.json /var/task
COPY .env.example /var/task
RUN npm install
# Rebuild sharp for the Lambda runtime (linux/x64)
RUN npm install --platform=linux --arch=x64 sharp
RUN chmod -R 755 /var/task
CMD [ "/var/task/src/app.strapiHandler" ]
Any questions, let me know! There are also some serverless.yml tweaks and Strapi security config required to get images and media uploading and working properly.
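For the security config piece, in Strapi v4 this lives in config/middlewares.js: the strapi::security middleware's content security policy needs to allow the S3 bucket as an image/media source. A sketch of the usual shape (the bucket hostname below is a placeholder, not my actual config):
// config/middlewares.js
module.exports = [
  'strapi::errors',
  {
    name: 'strapi::security',
    config: {
      contentSecurityPolicy: {
        useDefaults: true,
        directives: {
          // allow the admin to load images/media from the S3 bucket
          'img-src': ["'self'", 'data:', 'blob:', 'my-bucket.s3.amazonaws.com'],
          'media-src': ["'self'", 'data:', 'blob:', 'my-bucket.s3.amazonaws.com'],
        },
      },
    },
  },
  'strapi::cors',
  'strapi::poweredBy',
  'strapi::logger',
  'strapi::query',
  'strapi::body',
  'strapi::session',
  'strapi::favicon',
  'strapi::public',
];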

Related

How can I execute a command inside a docker from a node app?

I have a node app running, and I need to access a command that lives in an alpine docker image.
Do I have to use exec inside of javascript?
How can I install latex on an alpine container and use it from a node app?
I pulled an Alpine Docker image, started it, and installed LaTeX.
Now I have a Docker container running on my host. I want to access this LaTeX compiler from inside my node app (dockerized or not) and be able to compile *.tex files into *.pdf.
If I sh into the Alpine container I can compile *.tex into *.pdf just fine, but how can I access this software from outside the container, e.g. from a node app?
If you just want to run the LaTeX engine over files that you have in your local container filesystem, you should install it directly in your image and run it as an ordinary subprocess.
For example, this Javascript code will run in any environment that has LaTeX installed locally, Docker or otherwise:
const { execFileSync } = require('node:child_process');
const { mkdtemp, open } = require('node:fs/promises');

// wrapped in an async function since top-level await isn't available in CommonJS
async function compileTex() {
  const tmpdir = await mkdtemp('/tmp/latex-');
  let input;
  try {
    input = await open(tmpdir + '/input.tex', 'w');
    await input.write('\\documentclass{article}\n\\begin{document}\n...\n\\end{document}\n');
  } finally {
    await input?.close();
  }
  execFileSync('pdflatex', ['input'], { cwd: tmpdir, stdio: 'inherit' });
  return tmpdir + '/input.pdf'; // the compiled PDF ends up at tmpdir + '/input.pdf'
}
In a Docker context, you'd have to make sure LaTeX is installed in the same image as your Node application. You mention using an Alpine-based LaTeX setup, so you could:
FROM node:lts-alpine
RUN apk add texlive-full # or maybe a smaller subset
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
CMD ["node", "main.js"]
You should not try to directly run commands in other Docker containers. There are several aspects of this that are tricky, including security concerns and managing the input and output files. If it's possible to directly invoke a command in a new or existing container, it's also very straightforward to use that permission to compromise the entire host.

MeteorUp volumes and how Meteor can access to their contents

First, thank you for reading my question. This is my first time on Stack Overflow and I've done a lot of research looking for answers that could help me.
CONTEXT
I'm developing a Meteor app that is used as a CMS: I create content and store the data in MongoDB collections. The goal is to use this data plus a React project to build a static website, which is sent to an AWS S3 bucket for hosting.
I'm using MeteorUp to deploy my Meteor app (on an AWS EC2 instance) and, following the MeteorUp documentation (http://meteor-up.com/docs.html#volumes), I added a Docker volume in my mup.js:
module.exports = {
  ...
  meteor: {
    ...
    volumes: {
      '/opt/front': '/front'
    },
    ...
  },
  ...
};
Once deployed, the volume is correctly set in '/opt/myproject/config/start.sh':
sudo docker run \
-d \
--restart=always \
$VOLUME \
\
--expose=3000 \
\
--hostname="$HOSTNAME-$APPNAME" \
--env-file=$ENV_FILE \
\
--log-opt max-size=100m --log-opt max-file=10 \
-v /opt/front:/front \
--memory-reservation 600M \
\
--name=$APPNAME \
$IMAGE
echo "Ran abernix/meteord:node-8.4.0-base"
# When using a private docker registry, the cleanup run in
# Prepare Bundle is only done on one server, so we also
# cleanup here so the other servers don't run out of disk space
if [[ $VOLUME == "" ]]; then
# The app starts much faster when prepare bundle is enabled,
# so we do not need to wait as long
sleep 3s
else
sleep 15s
fi
On my EC2 instance, '/opt/front' contains the React project used to generate the static website.
This folder includes a package.json file, and all modules are available in the 'node_modules' directory. 'react-scripts' is one of them, and package.json contains the following script line:
"build": "react-scripts build",
React Project
The React app is fed with a JSON file available at 'opt/front/src/assets/datas/publish.json'.
This JSON file can be hand-written (so the project can be developed independently) or generated by my Meteor app.
Meteor App
Client-side, on the user interface, we have a 'Publish' button that the administrator can click when he/she wants to generate the static website (using the CMS data) and deploy it to the S3 bucket.
It calls a Meteor method (server-side).
Its action is separated into 3 steps:
1. Collect all the useful data and save it into a Publish collection
2. JSON creation
a. Get the Publish collection's first entry into a JavaScript object.
b. Write a JSON file using that object in the React project directory ('opt/front/src/assets/datas/publish.json').
Here's the code:
import fs from 'fs';

let publishDatas = Publish.find({}, { sort: { createdAt: -1 } }).fetch();
let jsonDatasString = JSON.stringify(publishDatas[0]);
fs.writeFile('/front/src/assets/datas/publish.json', jsonDatasString, 'utf8', function (err) {
  if (err) {
    return console.log(err);
  }
});
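(A note on this snippet: fs.writeFile is asynchronous, so nothing guarantees the file is on disk before the build in step 3 starts. A promise-based sketch that waits for the write, using the same path as above:)
import { promises as fsp } from 'fs';

// Resolves only once the JSON is actually on disk, so the build step
// can't start before publish.json exists.
async function writePublishJson(publishData) {
  await fsp.writeFile(
    '/front/src/assets/datas/publish.json',
    JSON.stringify(publishData),
    'utf8'
  );
}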
3. Static website build
a. Run a cd command to reach the React project's directory, then run the 'build' script using this code:
process_exec_sync = function (command) {
  // Load Future from fibers
  var Future = Npm.require("fibers/future");
  // Load exec
  var child = Npm.require("child_process");
  // Create a new future
  var future = new Future();
  // Run the command synchronously
  child.exec(command, { maxBuffer: 1024 * 10000 }, function (error, stdout, stderr) {
    // Return an object to identify error and success
    var result = {};
    // Test for error
    if (error) {
      result.error = error;
    }
    // Return stdout
    result.stdout = stdout;
    future.return(result);
  });
  // Wait for the future
  return future.wait();
};

var build = process_exec_sync('(cd front && npm run build)');
b. If 'build' is OK, then I send the 'front/build' content to my S3 bucket.
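(For reference: on setups where Fibers isn't available, the same synchronous call as the process_exec_sync helper above can be made with Node's built-in child_process.execSync; a sketch using the cwd option in place of the cd:)
// Meteor server code, hence Npm.require as in the helper above
var child = Npm.require('child_process');

// Runs the React build synchronously; throws if it exits non-zero
var buildOutput = child.execSync('npm run build', {
  cwd: '/front',
  maxBuffer: 1024 * 10000,
  encoding: 'utf8',
});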
Behaviors:
On local environment (Meteor running in development mode):
FYI: the React project directory's name and location are slightly different.
It's located in my Meteor project directory, and instead of 'front' it's named '.#front', because I don't want Meteor to restart every time a file is modified, added or deleted.
Everything works well, but I'm fully aware that I'm in development mode and benefit from my local environment.
On production environment (Meteor running in production mode in a Docker container):
Step 2.b: It works well; I can see the newly generated file in 'opt/front/src/assets/datas/'.
Step 3.a: I get the following error:
"Error running ls: Command failed: (cd /front && npm run build)
(node:39) ExperimentalWarning: The WHATWG Encoding Standard
implementation is an experimental API. It should not yet be used in
production applications.
npm ERR! code ELIFECYCLE npm ERR! errno 1 npm
ERR! front#0.1.0 build: react-scripts build npm ERR! Exit status 1
npm ERR! npm ERR! Failed at the front#0.1.0 build script. npm ERR!
This is probably not a problem with npm. There is likely additional
logging output above.
npm ERR! A complete log of this run can be found in: npm ERR!
/root/.npm/_logs/2021-09-16T13_55_24_043Z-debug.log [exec-fail]"
So here's my question:
In production mode, is it possible to have Meteor reach another directory and run a script from its package.json?
I've been searching for an answer for months and can't find a similar case.
Am I doing something wrong?
Am I using the wrong approach?
Am I crazy? :D
Thank you so much for reading until the end.
Thank you for your answers!
!!!!! UPDATE !!!!!
I found the solution!
In fact I had to check a few things on my EC2 instance over SSH:
once connected, I went to '/opt/front/' and tried to build the React app with 'npm run build'
I got a first error because chmod 777 wasn't set on that directory (noob!)
then I got an error because of node-sass.
The reason is that my Docker image uses Node v8 while my EC2 instance uses Node v16.
I had to install NVM and switch to Node v8, then delete my React app's node_modules (and package-lock.json) and reinstall.
Once that was done, everything worked perfectly!
I now have a Meteor app acting as a CMS / preview website hosted on an EC2 instance that can publish a static website to an S3 bucket.
Thank you for reading me!

ENV variables within Cloud Run server are not accessible

So,
I am using Nuxt
I am deploying to Google Cloud Run
I am using the dotenv package with a .env file in development, and it works fine.
I use process.env.VARIABLE_NAME in my Nuxt dev server and it works great. I make sure the .env is in .gitignore so that it doesn't get uploaded.
However, I then deploy my application using Google Cloud Run. I make sure I go to the Environment tab and add exactly the same variables that are in the .env file.
However, the variables come back as undefined.
I have tried all sorts of ways to fix this, but the only one that works is to upload my .env with the project, which I do not want to do, as Nuxt exposes this file in the client-side JS.
Has anyone come across this issue and know how to sort it out?
DOCKERFILE:
# base node image
FROM node:10
WORKDIR /user/src/app
ENV PORT 8080
ENV HOST 0.0.0.0
COPY package*.json ./
RUN npm install
# Copy local nuxt code to the container
COPY . .
# Build production app
RUN npm run build
# Start the service
CMD npm start
Kind Regards,
Josh
Finally I found a solution.
I was using Nuxt v2.11.x.
From version 2.13 onwards, Nuxt comes with Runtime Config, and this is what you need.
in your nuxt.config.js:
export default {
  publicRuntimeConfig: {
    BASE_URL: 'some'
  },
  privateRuntimeConfig: {
    TOKEN: 'some'
  }
}
then you can access the values like:
this.$config.BASE_URL || context.$config.TOKEN
More details here
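In practice, the values usually come from the environment rather than hard-coded literals, so the variables you set on Cloud Run flow through at runtime. A sketch of that pattern (BASE_URL and TOKEN are placeholder names):
// nuxt.config.js — values are read from the container's environment at
// runtime, so they can be set via Cloud Run env vars instead of a .env file
export default {
  publicRuntimeConfig: {
    BASE_URL: process.env.BASE_URL || 'http://localhost:3000'
  },
  privateRuntimeConfig: {
    TOKEN: process.env.TOKEN
  }
}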
Inserting values into the environment variables doesn't have to be done in the Dockerfile. You can do it through the command line at deployment time.
For example here is the Dockerfile that I used.
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm","start"]
This is the app.js file:
const express = require('express')
const app = express()
const port = 8080

app.get('/', (req, res) => {
  const envtest = process.env.ENV_TEST;
  res.json({
    message: 'Hello world',
    envtest
  });
});

app.listen(port, () => console.log(`Example app listening on port ${port}`))
To deploy use a script like this:
gcloud run deploy [SERVICE] --image gcr.io/[PROJECT-ID]/[IMAGE] --update-env-vars ENV_TEST=TESTVARIABLE
And the output will be like the following:
{"message":"Hello world","envtest":"TESTVARIABLE"}
You can check more detail on the official documentation:
https://cloud.google.com/run/docs/configuring/environment-variables#command-line

NodeJS App path broken in Docker container

I got this strange issue when trying to run my NodeJS app inside a Docker container.
All the paths are broken,
e.g.:
const myLocalLib = require('./lib/locallib');
results in the error:
Cannot find module './lib/locallib'
All the files are there (shown by an ls command inside the lib directory).
I'm new to Docker, so I may have missed something in my setup.
Here's my Dockerfile:
FROM node:latest
COPY out/* out/
COPY src/.env.example src/
COPY package.json .
RUN yarn
ENTRYPOINT yarn start
Per request: the file structure (screenshot omitted).
Thank you.
You are using the COPY command wrong. It should be:
COPY out out
With COPY out/* out/, the wildcard makes Docker copy the contents of each matched subdirectory rather than the directory itself, so out/lib gets flattened into out/ and require('./lib/locallib') can no longer resolve.

Swagger express gives error inside docker container

I have an app that uses the swagger-express-mw library, and I start my app the following way:
SwaggerExpress.create({ appRoot: __dirname }, (err, swaggerExpress) => {
  // initialize application, connect to db etc...
});
Everything works fine on my local OSX machine. But when I use boot2docker to build an image from my app and run it, I get the following error:
/usr/src/app/node_modules/swagger-node-runner/index.js:154
config.swagger = _.defaults(readEnvConfig(),
^
TypeError: Cannot assign to read only property 'swagger' of [object Object]
My dockerfile looks like this (nothing special):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm set progress=false
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm", "run", "dev"]
Has anyone else encountered similar situations where the local environment worked but the app threw an error inside the docker container?
Thanks!
Your issue doesn't appear to be something wrong with your Docker container or machine setup. The error is not a Docker error; it is a JavaScript error.
The Docker container appears to be running your JavaScript module in strict mode, in which you cannot assign to read-only object properties (https://msdn.microsoft.com/library/br230269%28v=vs.94%29.aspx). On your OSX host, from the limited information we have, it looks like it is not running in strict mode.
There are several ways to specify the "strictness" of your scripts. I've seen that you can start Node.js with the flag --use_strict, but I'm not sure how dependable that is. It could also be that npm installed a different version of a dependent module, and the different version specifies different rules for strict mode. There are several other ways to define the "strictness" of your function, module, or app as a whole.
You can test for strict mode by using the suggestions here: Is there any way to check if strict mode is enforced?.
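(One of the checks from that thread, as a quick sketch: in sloppy mode, this inside a plain function call is the global object, while in strict mode it is undefined.)
// Returns true when the surrounding code is evaluated in strict mode.
const inStrictMode = (function () { return !this; })();
console.log('strict mode:', inStrictMode);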
So, in summary: your issue is not inherently a Docker issue; it is an issue with the fact that your JavaScript environments are running in different strict modes. How you fix that will depend on where strict mode is being defined.
Hope that helps!
