DigitalOcean Apps can't locate Chromium directory for Puppeteer - Node.js

I've been wrestling with this all day. I get the error below whenever I try to run my Puppeteer script on DigitalOcean Apps.
/workspace/node_modules/puppeteer/.local-chromium/linux-818858/chrome-linux/chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
I did some research and found this helpful link below that seems to be the accepted solution.
https://github.com/puppeteer/puppeteer/issues/5661
However, since I'm using DigitalOcean Apps instead of standard DigitalOcean Droplets, it seems I can't use sudo on the command line, so I can't actually run those install commands.
Does anyone know of any way around this?
I'm using Node.js, by the way.
Thank you!!!!

I've struggled with this issue myself. I needed a way to run Puppeteer in order to run Scully (a static site generator for Angular) during my build process.
My solution was to use a Dockerfile to build my DigitalOcean App Platform app. You can ignore the Scully parts if you don't need them (I'm putting them here for others who struggle with this use case like me): just have a regular yarn build:prod or such, and use something like dist/myapp (instead of dist/scully) as output_dir in the app spec.
In your app's root folder, you should have these files:
Dockerfile
FROM node:14-alpine

RUN apk add --no-cache \
    chromium \
    ca-certificates

# This is to prevent the build from getting stuck on "Taking snapshot of full filesystem"
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true

WORKDIR /usr/src/myapp

COPY package.json yarn.lock ./
RUN yarn

COPY . .

# Needed because we set PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV SCULLY_PUPPETEER_EXECUTABLE_PATH /usr/bin/chromium-browser

# build_prod_scully is set in package.json to: "ng build --configuration=production && yarn scully --scanRoutes"
RUN yarn build_prod_scully
digitalocean.appspec.yml
domains:
- domain: myapp.com
  type: PRIMARY
- domain: www.myapp.com
  type: ALIAS
name: myapp
region: fra
static_sites:
- catchall_document: index.html
  github:
    branch: master
    deploy_on_push: true
    repo: myname/myapp
  name: myapp
  output_dir: /usr/src/myapp/dist/scully # <--- /usr/src/myapp referring to the Dockerfile's WORKDIR, dist/scully refers to the Scully config file's `outDir`
  routes:
  - path: /
  source_dir: /
  dockerfile_path: Dockerfile # <-- may be unneeded unless you already have an app set up.
Then upload the digitalocean.appspec.yml from your computer to your
App Dashboard > Settings > App > AppSpec > Edit > Upload File.
For those who use it with Scully, I've also used the following inside scully.config.ts
export const config: ScullyConfig = {
  ...
  puppeteerLaunchOptions: {
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  },
};
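If you're instead running Puppeteer directly (as in the original question) rather than through Scully, the same two ideas apply: point Puppeteer at the system Chromium and disable the sandbox. A minimal sketch, assuming the Alpine chromium package installs its binary at /usr/bin/chromium-browser:

// launch.js - minimal sketch, not from the original answer
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    // Path used by Alpine's chromium package; verify in your image with `which chromium-browser`
    executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || '/usr/bin/chromium-browser',
    // Containers typically lack the kernel features Chromium's sandbox needs
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();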
Hope it helps.

Related

Elastalert2 rules folder config not working

I'm using ElastAlert2 to get notifications from error logs in Slack.
We need to receive alerts for all service logs through our dozens of rules.
We build ElastAlert2 with Docker and deploy it via Argo CD.
But there is a problem: the rules_folder config does not work.
There is a rules_folder entry in config.yaml:
rules_folder: /home/elastalert/rules
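(For context, rules_folder is just one key in the main config; below is a minimal config.yaml sketch with placeholder values, using field names from the ElastAlert2 docs:)

rules_folder: /home/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: elasticsearch.example.com   # placeholder host
es_port: 9200
writeback_index: elastalert_status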
and this is an example Dockerfile:
FROM python:3.9.13-slim

# installation
RUN pip3 install --upgrade pip \
    && pip3 install cryptography elastalert2

ENV LANG="en_US.UTF-8"

# add configuration and alarm rules
RUN mkdir -p /home/elastalert
WORKDIR /home/elastalert
ADD ./config.yaml /home/elastalert
COPY ./rules /home/elastalert/rules
and this is the run command:
command: [ "/bin/sh", "-c" ]
args:
  - >-
    echo "Finda Elastalert is started!!" &&
    elastalert-create-index &&
    elastalert --verbose --config config.yaml
...
but an error occurs on startup (error screenshot omitted).
I think the rule files cannot be picked up via args.
In other words, it seems that rules_folder is not being applied.
If I specify a specific rule file in the start command, it works well.
For example:
elastalert --verbose --config config.yaml --rule ./rules/example/example.yaml
However, that executes only one rule, and we have dozens of rules.
What's the problem?
Solved.
Don't store empty YAML files in your rules/ subfolders.
The problem was that I had commented out everything in all the YAML files except the test rule used for the operation test; a fully commented-out file is effectively an empty YAML file.
After renaming those commented-out YAML files to another extension such as .text, ElastAlert recognizes and runs all rules.
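A quick way to spot such effectively-empty rule files is to list every .yaml under rules/ that has no uncommented line; a sketch assuming GNU grep:

# -L lists files with no matching line; the pattern matches any line whose
# first non-whitespace character is not '#'
grep -rL --include='*.yaml' -E '^[[:space:]]*[^#[:space:]]' rules/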

Running Nuxt.js in a Docker container built by Paketo.io / Cloud Native Buildpacks

I want to containerize my Nuxt.js application. I could write my own Dockerfile (as mentioned in the Nuxt.js Google Cloud Run docs, for example), but since Cloud Native Buildpacks are here to free us from the need to write those over and over again, I wanted to simply use Paketo.io to build a container from my Nuxt.js app.
I ran
pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
and a Container was created successfully. Here's the full log:
$ pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
base: Pulling from paketobuildpacks/builder
Digest: sha256:3e2ee17348bd901e7e0748e0e1ddccdf8a602b624e418927145b5f84ca26f264
Status: Image is up to date for paketobuildpacks/builder:base
base-cnb: Pulling from paketobuildpacks/run
Digest: sha256:b6b1612ab2dfa294514fff2750e8d724287f81e89d5e91209dbdd562ed7f7daf
Status: Image is up to date for paketobuildpacks/run:base-cnb
===> DETECTING
4 of 7 buildpacks participating
paketo-buildpacks/ca-certificates 2.2.0
paketo-buildpacks/node-engine 0.4.0
paketo-buildpacks/npm-install 0.3.0
paketo-buildpacks/npm-start 0.2.0
===> ANALYZING
Previous image with name "microservice-ui-nuxt-js" not found
===> RESTORING
===> BUILDING
Paketo CA Certificates Buildpack 2.2.0
https://github.com/paketo-buildpacks/ca-certificates
Launch Helper: Contributing to layer
Creating /layers/paketo-buildpacks_ca-certificates/helper/exec.d/ca-certificates-helper
Paketo Node Engine Buildpack 0.4.0
Resolving Node Engine version
Candidate version sources (in priority order):
-> ""
<unknown> -> ""
Selected Node Engine version (using ): 14.17.0
Executing build process
Installing Node Engine 14.17.0
Completed in 5.795s
Configuring build environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Configuring launch environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Writing profile.d/0_memory_available.sh
Calculates available memory based on container limits at launch time.
Made available in the MEMORY_AVAILABLE environment variable.
Paketo NPM Install Buildpack 0.3.0
Resolving installation process
Process inputs:
node_modules -> "Not found"
npm-cache -> "Not found"
package-lock.json -> "Found"
Selected NPM build process: 'npm ci'
Executing build process
Running 'npm ci --unsafe-perm --cache /layers/paketo-buildpacks_npm-install/npm-cache'
Completed in 14.988s
Configuring launch environment
NPM_CONFIG_LOGLEVEL -> "error"
Configuring environment shared by build and launch
PATH -> "$PATH:/layers/paketo-buildpacks_npm-install/modules/node_modules/.bin"
Paketo NPM Start Buildpack 0.2.0
Assigning launch processes
web: nuxt start
===> EXPORTING
Adding layer 'paketo-buildpacks/ca-certificates:helper'
Adding layer 'paketo-buildpacks/node-engine:node'
Adding layer 'paketo-buildpacks/npm-install:modules'
Adding layer 'paketo-buildpacks/npm-install:npm-cache'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding layer 'process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Setting default process type 'web'
Saving microservice-ui-nuxt-js...
*** Images (5eb36ba20094):
microservice-ui-nuxt-js
Adding cache layer 'paketo-buildpacks/node-engine:node'
Adding cache layer 'paketo-buildpacks/npm-install:modules'
Adding cache layer 'paketo-buildpacks/npm-install:npm-cache'
Successfully built image microservice-ui-nuxt-js
Now running
docker run --rm -i --tty -p 3000:3000 microservice-ui-nuxt-js
I hoped to see my app in the browser at http://localhost:3000. But no luck! My app doesn't seem to be fully reachable, although the container's console output looks fine.
What am I missing?
I read about the HOST variable in this post, which is what the whole problem is about! And then I also found this answer, since I now knew what to look for. The Nuxt.js configuration docs state it as well:
By default, the Nuxt.js development server host is localhost which is
only accessible from within the host machine. In order to view your
app on another device you need to modify the host.
And the crucial config is mentioned as well:
Host '0.0.0.0' is designated to tell Nuxt.js to resolve a host
address, which is accessible to connections outside of the host
machine (e.g. LAN)
So all we have to do is define the Docker environment variable --env "HOST=0.0.0.0" and run the Paketo-built container like this:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 microservice-ui-nuxt-js
Now the browser should also show our app at http://localhost:3000.
You can try it yourself using the example project's image published on the GitHub Container Registry:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 ghcr.io/jonashackt/microservice-ui-nuxt-js:latest
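As an alternative to passing HOST at docker run time, the host can also be set in the Nuxt config itself via the server property from the Nuxt.js configuration docs; a minimal sketch:

// nuxt.config.js - bind to all interfaces (equivalent to HOST=0.0.0.0)
export default {
  server: {
    host: '0.0.0.0',
    port: 3000,
  },
}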

Reading from a private Nuget feed - .NET Core 3.1 Windows Docker Container

Does anyone have any experience with developing microservices in Azure with .NET Core 3.1 using Windows containers? I am running into an issue when trying to make my Dockerfile read from a private NuGet feed. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 AS build
WORKDIR /src
COPY ./Nuget.config ./
COPY ["MyService/MyService.csproj", "MyService/"]
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=0
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"my_private_feed\", \"password\":\"my_personal_access_token\"}]}"
RUN dotnet restore "MyService/MyService.csproj"
COPY . .
WORKDIR "/src/MyService"
RUN dotnet build "MyService.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyService.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyService.dll"]
This Dockerfile gives me a 401 Unauthenticated error even though I know the credentials I am providing are correct.
I've also tried setting the user and password in my nuget.config file, and that seems to work, but I don't want to have to make a code change to update the password each time a token expires.
Any advice on how to move forward from here? Am I just not formatting the setting of the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS variable properly?
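For reference, the JSON shape the Azure Artifacts credential provider documents for VSS_NUGET_EXTERNAL_FEED_ENDPOINTS looks roughly like this (feed URL and token are placeholders; username can be any non-empty string when authenticating with a PAT):

{
  "endpointCredentials": [
    {
      "endpoint": "https://pkgs.dev.azure.com/myorg/_packaging/myfeed/nuget/v3/index.json",
      "username": "optional",
      "password": "my_personal_access_token"
    }
  ]
}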
I ran into the same problem in the past. I stopped using VSS_NUGET_EXTERNAL_FEED_ENDPOINTS and created the nuget.config on the fly.
First step: create a small helper executable
using System;
using System.IO;

namespace DockerBuild
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length != 4)
            {
                Console.WriteLine("Use it: dotnet dockerbuild.dll <username> <pat> <sourceUrl> <target>");
                return;
            }
            var user = args[0];
            var pat = args[1];
            var source = args[2];
            var target = args[3];
            // Write a nuget.config containing the feed URL and its credentials
            var xml = $"<?xml version=\"1.0\" encoding=\"utf-8\"?><configuration><packageSources><add key=\"YourFeedName\" value=\"{source}\" /></packageSources><packageSourceCredentials><YourFeedName><add key=\"Username\" value=\"{user}\" /><add key=\"ClearTextPassword\" value=\"{pat}\"/></YourFeedName></packageSourceCredentials></configuration>";
            using (var file = new StreamWriter(target))
            {
                file.WriteLine(xml);
            }
        }
    }
}
Compiling this console program produces DockerBuild.exe.
Second step: run the executable during docker build
# Build args must be declared before use; the assumption here is that
# $user, $pat and $nuget_source are supplied via --build-arg
ARG user
ARG pat
ARG nuget_source
ENV devops_user=$user
ENV devops_pat=$pat
ENV devops_nuget_source=$nuget_source
RUN DockerBuild.exe %devops_user% %devops_pat% %devops_nuget_source% nuget.config
Now you have a nuget.config file in your folder structure, and you only need to use (or copy) it and run your dotnet build/publish command(s).
Attention:
This example was done on a Windows machine running Docker for Windows. On Linux, the call to DockerBuild.exe must be changed to use the environment variables without % and the Linux equivalent of the executable.
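A sketch of that Linux variant, assuming the helper is published as a framework-dependent DockerBuild.dll available in the build context:

# Linux equivalent - shell form expands $vars; DockerBuild.dll is hypothetical here
RUN dotnet DockerBuild.dll "$devops_user" "$devops_pat" "$devops_nuget_source" nuget.config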
Your Docker image probably doesn't have the Azure Artifacts NuGet credential provider installed. Its repo README has a link to a sample Dockerfile, which includes this line:
# download and install latest credential provider. Not required after https://github.com/dotnet/dotnet-docker/issues/878
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
Since your Dockerfile is using a Windows base image, not Linux, you'll need to adapt it, but conceptually it's the same issue: the credential provider isn't installed (or NuGet doesn't know how to find it), so NuGet doesn't know how to authenticate to your feed.
edit: there's an issue asking how to install on a Windows docker image: https://github.com/microsoft/artifacts-credprovider/issues/169
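For a Windows image, the credential provider repo documents a PowerShell installer; a sketch of how that might look in the build stage (this assumes PowerShell is available, which nanoserver images may not include):

# Windows sketch - installer script documented in the artifacts-credprovider README
SHELL ["powershell", "-Command"]
RUN iex "& { $(irm https://aka.ms/install-artifacts-credprovider.ps1) }"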

Kubernetes fails to deploy valid container image

I have a Docker image containing a Node.js app. The Dockerfile is:
FROM node:8
WORKDIR /app
ADD . /app
RUN npm install
EXPOSE 80
ENTRYPOINT [ "/bin/sh", "./start.sh" ]
The start.sh script is:
#!/bin/bash
...
echo "Starting application"
npm start
I'm able to launch and test the image manually:
$ gcloud docker -- run -it --rm my-container
...
Starting application
...
> node index.js
...
The same container is used by a kubernetes deployment:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: my-container
        ...
The container starts and the start.sh script executes correctly, but then it terminates and the container goes into a CrashLoopBackOff loop.
After inspecting the pod manually:
kubectl exec -ti my-pod -- bash
I have no name!#my-pod:/app# cat /etc/passwd
... empty response
-> It appears that somehow there are no system users in the container, which makes most commands (like npm) fail silently and terminate the container.
I have also tried, without success:
deleting the pod
deleting and re-creating the deployment
running the node image with the node user -> unable to find user node: no matching entries in passwd file
Last note: I actually have many deployments (using the same template with just a different name) which are running fine with an image that was built a few days ago with the same source code.
For some deployments, it actually worked after manually deleting the pod and letting kubernetes recreate it.
Any ideas?
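(As a general debugging aside for a CrashLoopBackOff like this, the usual first steps are to inspect the crashed container's logs and the pod events; my-pod below is a placeholder name:)

kubectl logs my-pod --previous   # logs from the previous, crashed container instance
kubectl describe pod my-pod      # events, exit code, restart reasons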
Edit 18/01/2018: I have tried rebuilding an image with the same source code that the old working images use, without success. I have also tried a simpler Dockerfile:
FROM node:8
USER node
But I still get an error related to the fact that no users seem to be there:
Error response from daemon: {"message":"linux spec user: unable to find user node: no matching entries in passwd file"}
I have checked with the docker-node maintainers; the image hasn't changed recently. Could it be related to Kubernetes changes? Keep in mind that my images do run when I launch them manually with the docker command.
I tried to reproduce your issue but didn't get it to fail in anything like the same fashion. I made a dummy Express app that matches your example above and stuck it on GitHub, then deployed it into a local minikube instance I had. The base image size is reasonably large, but it started up just fine.
I had to interpret what was happening within npm start for your example since you didn't specify, but you can see my package.json, which I suspect is pretty close to what you're doing based on the description.
When I fire this up:
git clone https://github.com/heckj/dummyexpress
cd dummyexpress
kubectl apply -f deploy/
Then I got a running instance right off the bat:
NAME READY STATUS RESTARTS AGE
dummynodeapp-7788b95497-tkw2s 1/1 Running 0 1d
And the logs show pretty much what you'd expect:
kubectl log dummynodeapp-7788b95497-tkw2s
W0117 19:41:00.986498 20648 cmd.go:353] log is DEPRECATED and will be removed in a future version. Use logs instead.
Starting application
> blah#1.0.0 start /app
> node index.js
Example app listening on port 3000!
My guess is that you've got something going awry within your npm start execution, so I'd recommend fiddling with that aspect of your deployment and see if you can't resolve it that way.
Well, as @heckj pointed out, it was a Docker issue on my Kubernetes cluster. I updated the cluster from 1.6.13-gke.1 to v1.7.12-gke.0 and the pods worked fine again. I'm not sure which Docker version was used, since there's another Kubernetes bug that is preventing me from seeing it.
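(For anyone else stuck at that last step: the container runtime version on each node can normally be read from the node objects themselves, e.g.:)

kubectl get nodes -o wide           # CONTAINER-RUNTIME column, e.g. docker://17.3.2
kubectl describe node <node-name>   # look for "Container Runtime Version"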

Docker compose volume mapping with NodeJS app

I am trying to achieve something incredibly basic but have been going at this for a couple of evenings now and still haven't found a solid (or any) solution. I have found some similar topics on SO and followed what was in them, but to no avail, so I have created a GitHub repo for my specific case.
What I'm trying to do:
Be able to provision the Node.js app using docker-compose up -d (I plan to add further containers in future, omitted from this example)
Ensure the code is mapped via volumes so I don't have to re-build every time I make a change to some code locally.
My guess is that the issue I'm encountering has something to do with the volume mapping causing some files to be lost/overwritten within the container; for instance, in some of the variations I've tried, the folders are mapped but individual files are not.
I've created a simple repo to illustrate my issue; just check it out and run docker-compose up -d to see it. The container dies due to:
Error: Cannot find module '/src/app/app.js'
The link to the repo is here: https://github.com/josephmcdermott/nodejs-docker-issue. PRs are welcome, and if anybody can solve this for me I'd be eternally grateful.
UPDATE: please see the solution code below; kind thanks to ldg.
Dockerfile
FROM node:4.4.7
RUN mkdir -p /src
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 3000
CMD ["node", "/src/app.js"]
docker-compose.yml
app:
  build: .
  volumes:
    - ./app:/src/app
Folder Structure:
- app
  - * (files I want to sync and regularly update)
- app.js (initial script to call files within app/)
- Dockerfile
- docker-compose.yml
- package.json
In your compose file, the last line - /src/app/node_modules is likely mapping over your previous volume. If you mount /src/app, then node_modules will get created in that linked volume. So it would look like this:
app:
  build: .
  volumes:
    - ./app:/src/app
If you do want to keep your entire /app directory as a linked volume, you'll need to either run npm install when starting the container (which would ensure it picks up any updates) OR not link the volume and update your Dockerfile to copy the entire /app directory. The latter is nice because it gives you a self-contained image. I usually Dockerize my Node.js apps this way. You can also run npm test as appropriate to verify the image.
If you need to create a linked volume for a script file you want to be able to edit (or if your app generates side-effects), you can link just that directory or file via Docker volumes.
Btw, if you want to make sure you don't copy the contents of that directory in the future, add it to .dockerignore (as well as .gitignore).
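A minimal .dockerignore sketch for that (typical Node.js entries; adjust to your project):

# keep locally installed dependencies and logs out of the build context
node_modules
npm-debug.log
.git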
Notice the '/' at the end
volumes:
  - ./app:/src/app/
This declaration is not correct:
volumes:
  - ./app:/src/app
