Waypoint deployment through Nomad getting error "http: no Host in request URL" - node.js

I'm building a React app and packaging it in a Docker container to deploy through Waypoint and Nomad. However, I'm currently getting the following error:
! Put "http:///v1/jobs/parse": http: no Host in request URL
My Dockerfile looks like this:
FROM node:16
# Set the working directory to /app
WORKDIR /app
# Copy the package.json and package-lock.json files to the container
COPY package*.json ./
# Install the dependencies
RUN npm install
# Copy the rest of the application code to the container
COPY . .
# Expose port
EXPOSE 3000
# Specify the command to run the application
CMD [ "npm", "run", "start" ]
This is my Nomad configuration file:
#example.nomad.tpl
job "web" {
  datacenters = ["dc1"]

  group "app" {
    update {
      max_parallel = 1
      canary       = 1
      auto_revert  = true
      auto_promote = false
      health_check = "task_states"
    }

    task "app" {
      driver = "docker"

      config {
        image = "${artifact.image}:${artifact.tag}"
      }

      env {
        %{ for k,v in entrypoint.env ~}
        ${k} = "${v}"
        %{ endfor ~}

        // For URL service
        PORT = "3000"
      }
    }
  }
}
This is my waypoint.hcl config file:
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

project = "nomad-jobspec-nodejs"

app "nodejs-jobspec-web" {
  build {
    use "pack" {}

    registry {
      use "docker" {
        image = "hvaandres/my-app-nomad"
        tag   = "latest"
        local = true
      }
    }
  }

  deploy {
    use "nomad-jobspec" {
      // Templated to perhaps bring in the artifact from a previous
      // build/registry, entrypoint env vars, etc.
      jobspec = templatefile("${path.app}/example.nomad.tpl")
    }
  }

  release {
    use "nomad-jobspec-canary" {
      groups = [
        "app"
      ]
      fail_deployment = false
    }
  }
}
I'm new to this tool, and I wonder if anyone can point me in the right direction on how to solve this problem.
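For what it's worth, the empty host in "http:///v1/jobs/parse" means the request to the Nomad API was built without an address. A likely first check (an assumption on my part, for a local Nomad agent on the default port) is to make sure NOMAD_ADDR is set in the environment where the deployment runs:
$ export NOMAD_ADDR=http://127.0.0.1:4646   # assumes a local dev agent; point this at your cluster
$ waypoint up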

Related

Update source code of server app inside docker container without touching the existing saved data

I have the following Docker Compose project (file contents provided below):
.
|-- .dockerignore
|-- Dockerfile
|-- docker-compose.yml
|-- messages
|   |-- 20221120-010625.txt
|   |-- 20221120-010630.txt
|   `-- 20221120-010641.txt
|-- package.json
`-- server.js
When you run the Docker Compose project with the following command:
$ docker-compose up -d
you can go to the URL http://localhost/?message=<message> and record multiple messages on the server.
Here is an example (screenshot omitted):
So far so good, but...
My use case is: sometimes I need to update the source code of the website. For example, imagine I need to prefix the page text from the screenshot above, Created file ..., with ###, like:
### Created file: "/var/www/html/messages/20221120-010641.txt" with content: "this is a test".
BUT I cannot mess with the existing messages because that's valuable data for the server app.
I tried with the following commands:
$ docker-compose down --volumes
$ docker-compose up -d --force-recreate --build
My problem is: after updating the source code accordingly, even though the page text got properly updated, all the messages got lost, which is not good.
Could you please tell me how I can achieve this?
I tried defining a named volume inside the docker-compose.yml, like:
services:
  serverapp:
    ...
    volumes:
      - messages:/var/www/html/messages

volumes:
  messages:
... expecting the messages to persist even if I destroy the server app. That didn't work, because the named volume is owned by the root user while the messages are created by the node user, which doesn't have permission to create files in that directory; this causes an error.
Here is the content of the involved files:
.dockerignore
/node_modules/
/messages/
/npm-debug.log
Dockerfile
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
USER node
RUN npm i
EXPOSE 8080
CMD [ "npm", "run", "start" ]
# ENTRYPOINT ["tail", "-f", "/dev/null"]
docker-compose.yml
version: '3'

services:
  serverapp:
    image: alpine:3.14
    build:
      dockerfile: Dockerfile
    container_name: serverapp
    restart: unless-stopped
    ports:
      - "80:80"
package.json
{
  "name": "docker-compose-tester",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "cross-env NODE_ENV=debug nodemon --exec babel-node server.js"
  },
  "dependencies": {
    "express": "^4.16.1",
    "moment": "^2.29.4"
  },
  "devDependencies": {
    "@babel/node": "^7.20.2",
    "cross-env": "^7.0.3",
    "nodemon": "^2.0.20"
  }
}
server.js
const express = require('express');
const moment = require('moment');
const path = require('path');
const fs = require('fs');

const PORT = 80;
const app = express();

app.use('/messages/', express.static(path.join(__dirname, 'messages')));

app.get('/', (req, res) => {
  const message = req.query.message;
  if (!message) {
    return res.send('<pre>Please use a query like: "/?message=Hello+World"</pre>');
  }
  const dirPathMessages = path.join(__dirname, 'messages');
  const date = moment(new Date()).format('YYYYMMDD-HHmmss');
  const fileNameMessage = `${date}.txt`;
  const filePathMessage = path.join(dirPathMessages, fileNameMessage);
  fs.mkdirSync(dirPathMessages, { recursive: true });
  fs.writeFileSync(filePathMessage, message);
  const filesList = fs.readdirSync(dirPathMessages);
  const filesListStr = filesList.reduce((output, fileNameMessage) => {
    const filePathMessage = path.join(dirPathMessages, fileNameMessage);
    const message = fs.readFileSync(filePathMessage);
    return output + `<div>/messages/${fileNameMessage} -> ${message}</div>` + "\n";
  }, '');
  res.send(`<pre>${filesListStr}\nCreated file: "${filePathMessage}" with content: "${message}".</pre>`);
});

app.listen(PORT, () => {
  console.log(`TCP Server is running on port: ${PORT}`);
});
Your approach with a named volume is correct. To fix the permission problem, change the owner of the messages folder in the Dockerfile before switching to the node user; when Docker first mounts the (empty) named volume at that path, it is seeded from the image, including the ownership you set there.
FROM node:16-alpine
RUN mkdir -p /var/www/html && chown -R node:node /var/www/html
WORKDIR /var/www/html
COPY --chown=node:node . .
RUN mkdir -p messages && chown node:node messages
USER node
...
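After rebuilding and recreating the stack, a quick sanity check (assuming the service is named serverapp as above) is to confirm the mount point is now owned by node:
$ docker-compose exec serverapp ls -ld /var/www/html/messages   # owner should be node node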

Vite: Could not resolve entry module (index.html)

I am new to OpenShift 3.11 deployment. I created a multistage Dockerfile for a React application, and the build went correctly on my local machine, but when I run it on the OpenShift cluster I get the error below:
> kncare-ui@0.1.0 build
> tsc && vite build
vite v2.9.9 building for production...
✓ 0 modules transformed.
Could not resolve entry module (index.html).
error during build:
Error: Could not resolve entry module (index.html).
at error (/app/node_modules/rollup/dist/shared/rollup.js:198:30)
at ModuleLoader.loadEntryModule (/app/node_modules/rollup/dist/shared/rollup.js:22680:20)
at async Promise.all (index 0)
error: build error: running 'npm run build' failed with exit code 1
and this is my Dockerfile:
FROM node:16.14.2-alpine as build-stage
RUN mkdir -p /app/
WORKDIR /app/
RUN chmod -R 777 /app/
COPY package*.json /app/
COPY tsconfig.json /app/
COPY tsconfig.node.json /app/
RUN npm ci
COPY ./ /app/
RUN npm run build
FROM nginxinc/nginx-unprivileged
#FROM bitnami/nginx:latest
COPY --from=build-stage /app/dist/ /usr/share/nginx/html
#CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["nginx", "-g", "daemon off;"]
EXPOSE 80
By default, Vite uses an HTML page as the entry point, so you either need to create one or, if you don't have an HTML page, use Vite in "Library Mode".
https://vitejs.dev/guide/build.html#library-mode
From the docs:
// vite.config.js
const path = require('path')
const { defineConfig } = require('vite')

module.exports = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
})
If you're using ES Modules (i.e., import syntax), look in your package.json to confirm the type field is set to module, and use:
// vite.config.js
import * as path from 'path';
import { defineConfig } from "vite";

const config = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
})

export default config;
I had the same issue because of .dockerignore. Make sure your index.html is not ignored.
If you're ignoring everything (**), you can add !index.html on the next line and try again.
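For example, a .dockerignore along those lines would be:
**
!index.html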

Azure function HTTP Request 404 when published to Azure through Docker and Visual Studio

I'm attempting to learn some more about Azure Functions 2.0 and Docker containers, publishing to my Azure instance. I followed the tutorial below, with the only difference being that I published with Docker to a container registry in Azure using Visual Studio 2019.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-your-first-function-visual-studio
This all worked correctly, and I was able to start my container and visit the site. However, in the example you can visit /api/function1 and get a response. This works on my localhost, but on the live site it returns a 404; it seems /api/function1 is not reachable after being published.
The app itself responds when I visit the IP directly, so I know it is running. Do I need to do something else in Azure to expose my APIs?
My container log only shows this:
Hosting environment: Production
Content root path: C:\
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
I grabbed my Dockerfile from here:
https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/nanoserver-1809/Dockerfile
# escape=`
# Installer image
FROM mcr.microsoft.com/windows/servercore:1809 AS installer-env
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Retrieve .NET Core SDK
ENV DOTNET_SDK_VERSION 2.2.402
RUN Invoke-WebRequest -OutFile dotnet.zip https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$Env:DOTNET_SDK_VERSION/dotnet-sdk-$Env:DOTNET_SDK_VERSION-win-x64.zip; `
$dotnet_sha512 = '0fa3bf476b560c8fc70749df37a41580f5b97334b7a1f19d66e32096d055043f4d7ad2828f994306e0a24c62a3030358bcc4579d2d8d439d90f36fecfb2666f6'; `
if ((Get-FileHash dotnet.zip -Algorithm sha512).Hash -ne $dotnet_sha512) { `
Write-Host 'CHECKSUM VERIFICATION FAILED!'; `
exit 1; `
}; `
`
Expand-Archive dotnet.zip -DestinationPath dotnet; `
Remove-Item -Force dotnet.zip
ENV ASPNETCORE_URLS=http://+:80 `
DOTNET_RUNNING_IN_CONTAINER=true `
DOTNET_USE_POLLING_FILE_WATCHER=true `
NUGET_XMLDOC_MODE=skip `
PublishWithAspNetCoreTargetManifest=false `
HOST_COMMIT=69f124faed40d20d9d8e5b8d51f305d249b21512 `
BUILD_NUMBER=12858
RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
Invoke-WebRequest -OutFile host.zip https://github.com/Azure/azure-functions-host/archive/$Env:HOST_COMMIT.zip; `
Expand-Archive host.zip .; `
cd azure-functions-host-$Env:HOST_COMMIT; `
/dotnet/dotnet publish /p:BuildNumber=$Env:BUILD_NUMBER /p:CommitHash=$Env:HOST_COMMIT src\WebJobs.Script.WebHost\WebJobs.Script.WebHost.csproj --output C:\runtime
# Runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2.7-nanoserver-1809
COPY --from=installer-env ["C:\\runtime", "C:\\runtime"]
ENV AzureWebJobsScriptRoot=C:\approot `
WEBSITE_HOSTNAME=localhost:80
CMD ["dotnet", "C:\\runtime\\Microsoft.Azure.WebJobs.Script.WebHost.dll"]
Here's my Function1 code for my Azure Function:
public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
        string productid = req.Query["productid"];
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        productid = productid ?? data?.product;
        Product newProduct = new Product()
        {
            ProductNumber = 0,
            ProductName = "Unknown",
            ProductCost = 0
        };
        if (Convert.ToInt32(productid) == 1)
        {
            newProduct = new Product()
            {
                ProductCost = 100,
                ProductName = "Lime Tree",
                ProductNumber = 1
            };
        }
        else if (Convert.ToInt32(productid) == 2)
        {
            newProduct = new Product()
            {
                ProductCost = 500,
                ProductName = "Lemon Tree",
                ProductNumber = 2
            };
        }
        return productid != null
            ? (ActionResult)new JsonResult(newProduct)
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
}
Here's a photo of my container running with my image.
I'm new to this so any advice would be helpful for sure!
Thanks!
First, I don't know if you really need (or want) to run Functions on Windows containers. If you want to run in a container, I would opt for Linux. For that, here is an example Dockerfile. It builds on top of the Microsoft-provided base image, so you don't have to build that from scratch.
I'm sure there is also a prebuilt base image for Windows. If you need it, just look around in the same repo, I guess.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet publish myfunction -c Release -o myfunction/out

FROM mcr.microsoft.com/azure-functions/dotnet:3.0 AS base
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/myfunction/out .
ENV AzureWebJobsScriptRoot=/app
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
The important part is RUN dotnet publish myfunction -c Release -o myfunction/out. Replace myfunction with the (folder) name of your actual Function.
@silent's answer was correct: Linux containers are the way to go for Azure Functions. My environment wasn't set up correctly for Linux containers, but once I got a correct environment this worked out of the box.
Here's my latest Dockerfile for another project that uses Linux containers:
# See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY ["FunctionTestAppLinux/FunctionTestAppLinux.csproj", "FunctionTestAppLinux/"]
RUN dotnet restore "FunctionTestAppLinux/FunctionTestAppLinux.csproj"
COPY . .
WORKDIR "/src/FunctionTestAppLinux"
RUN dotnet build "FunctionTestAppLinux.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FunctionTestAppLinux.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
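To sanity-check the image locally, a typical build-and-run pair would be (functiontestapplinux is just a hypothetical tag; the Functions host listens on port 80 inside the container):
$ docker build -t functiontestapplinux .
$ docker run -p 8080:80 functiontestapplinux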

NodeJs Jenkins plug-in is not working with dockerfile agent

I'm trying to use the NodeJS plugin on Jenkins. I followed the NodeJS documentation, and it works fine with its example code, which uses agent any:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                nodejs(nodeJSInstallationName: 'NodeJs test') {
                    sh 'npm config ls'
                }
            }
        }
    }
}
But if I use the dockerfile agent, as in the code below,
pipeline {
    options {
        timeout(time: 1, unit: 'HOURS')
    }
    environment {
        docker_image_name = "myapp-test"
        HTTP_PROXY = "${params.HTTP_PROXY}"
        JENKINS_USER_ID = "${params.JENKINS_USER_ID}"
        JENKINS_GROUP_ID = "${params.JENKINS_GROUP_ID}"
    }
    agent {
        dockerfile {
            additionalBuildArgs '--tag myapp-test --build-arg "JENKINS_USER_ID=${JENKINS_USER_ID}" --build-arg "JENKINS_GROUP_ID=${JENKINS_GROUP_ID}" --build-arg "http_proxy=${HTTP_PROXY}" --build-arg "https_proxy=${HTTP_PROXY}"'
            filename 'Dockerfile'
            dir '.'
            label env.docker_image_name
        }
    }
    stages {
        stage('Build') {
            steps {
                nodejs(nodeJSInstallationName: 'NodeJs test') {
                    sh 'npm config ls'
                }
            }
        }
    }
}
it returns an npm: command not found error.
My guess is that it can't find the Node.js path. I wanted to try export PATH=$PATH:?? too, but I also don't know the Node.js path.
How can I make the NodeJS plugin work with the dockerfile agent?
The NodeJS plugin won't inject itself into a Docker container. However, you could add an ARG build argument to your Dockerfile that takes the version of Node.js to install. You would then need to get rid of the nodejs step.
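A minimal sketch of that idea (an assumption on my part, using the official node base images; adapt it if your Dockerfile installs Node.js some other way):
# NODE_VERSION comes from the pipeline, e.g. additionalBuildArgs '--build-arg NODE_VERSION=16'
ARG NODE_VERSION=16
FROM node:${NODE_VERSION}-alpine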
Thank you fredericrous for the answer. Unfortunately, in my system the Dockerfile can't be modified. But based on your information that
the NodeJS plugin won't inject itself into a Docker container,
I decided to run the NodeJS plugin in another agent instead of the dockerfile one (running multiple agents).
With the code below I managed to run it successfully.
pipeline {
    options {
        timeout(time: 1, unit: 'HOURS')
    }
    environment {
        docker_image_name = "myapp-test"
        HTTP_PROXY = "${params.HTTP_PROXY}"
        JENKINS_USER_ID = "${params.JENKINS_USER_ID}"
        JENKINS_GROUP_ID = "${params.JENKINS_GROUP_ID}"
    }
    agent {
        dockerfile {
            additionalBuildArgs '--tag myapp-test --build-arg "JENKINS_USER_ID=${JENKINS_USER_ID}" --build-arg "JENKINS_GROUP_ID=${JENKINS_GROUP_ID}" --build-arg "http_proxy=${HTTP_PROXY}" --build-arg "https_proxy=${HTTP_PROXY}"'
            filename 'Dockerfile'
            dir '.'
            label env.docker_image_name
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'ls'
            }
        }
    }
}

stage('Test') {
    node('master') {
        checkout scm
        try {
            nodejs(nodeJSInstallationName: 'NodeJs test') {
                sh 'npm config ls'
            }
        }
        finally {
            sh 'echo done'
        }
    }
}

How to define port dynamically in proxy.conf.json or proxy.conf.js

I have an Angular 4 application, and for development purposes I start it with npm run start, with start defined as "start": "ng serve --proxy=proxy.conf.json". I also have
{
  "/api/**": {
    "target": "http://localhost:8080/api/"
  }
}
defined in proxy.conf.json.
Is there a way to define the port, or the whole URL, dynamically, like npm run start --port=8099?
PS: http://localhost:8080/api/ is the URL of my backend API.
I also did not find a way to interpolate the .json file, but you can use a .js file, e.g.:
proxy.conf.js
var process = require("process");
var BACKEND_PORT = process.env.BACKEND_PORT || 8080;

const PROXY_CONFIG = [
  {
    context: [
      "/api/**"
    ],
    target: "http://localhost:" + BACKEND_PORT + "/api/"
  },
];

module.exports = PROXY_CONFIG;
Add a new command to package.json (notice .js, not .json):
"startOn8090": "BACKEND_PORT=8090 ng serve --proxy=proxy.conf.js"
or simply set the env variable in your shell and call npm run-script start.
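If the script also needs to run on Windows shells, cross-env (assuming it is installed as a dev dependency) sets the variable portably:
"startOn8090": "cross-env BACKEND_PORT=8090 ng serve --proxy=proxy.conf.js"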
In .angular-cli.json:
....
"defaults": {
  "serve": {
    "port": 9000
  }
}
....
makes the dev server, and the proxy along with it, available on port 9000 rather than the default 4200.
