Login to Flutter web app returns flutter_service_worker.js error

When I deploy a Flutter web app to a cloud Ubuntu server, I can see the login screen fine, but on login the Flutter logs show this error:
preparing port 5000 ...
Server starting on port 5000 ...
172.19.0.14 - - [18/Jun/2022 11:02:06] "GET / HTTP/1.1" 304 -
172.19.0.14 - - [18/Jun/2022 11:02:07] "GET /flutter_service_worker.js?v=1208050259 HTTP/1.1" 304 -
The Go API returns this error:
"OPTIONS http://api.mydomain.com/login HTTP/1.1" from 172.19.0.14:59572 - 405 0B in 36.92µs
If I run my app locally as a Linux client against the same API, it logs in fine, so I guess the issue is with the web browser. I get the same result in both Firefox and Chrome.
Here is my Flutter Dockerfile:
FROM ubuntu:22.04
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Australia/Adelaide
RUN apt-get update
RUN apt-get install -y apt-utils psmisc
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3
RUN apt-get clean
# download Flutter SDK from Flutter Github repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter environment path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Run flutter doctor
RUN flutter doctor
# Enable flutter web
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN flutter build web
# Record the exposed port
EXPOSE 5000
# make server startup script executable and start the web server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
Here is my server.sh script:
#!/bin/bash
PORT=5000
echo 'preparing port' $PORT '...'
fuser -k ${PORT}/tcp
cd build/web/
echo 'Server starting on port' $PORT '...'
python3 -m http.server $PORT
Edit: I use the chi router, so I implemented the following fix in my API back end, and it now works.
func router() http.Handler {
    r := chi.NewRouter()
    r.Use(middleware.RequestID)
    r.Use(middleware.Logger)
    r.Use(middleware.Recoverer)
    r.Use(middleware.URLFormat)
    r.Use(render.SetContentType(render.ContentTypeJSON))
    r.Use(cors.Handler(cors.Options{
        AllowedOrigins: []string{"https://*", "http://*"},
        // AllowOriginFunc: func(r *http.Request, origin string) bool { return true },
        AllowedMethods:   []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
        AllowedHeaders:   []string{"Accept", "Authorization", "Content-Type", "X-CSRF-Token"},
        ExposedHeaders:   []string{"Link"},
        AllowCredentials: false,
        MaxAge:           300, // Maximum value not ignored by any of major browsers
    }))
    // ... route registrations elided ...
    return r
}
The error still appears in my Flutter logs, and now I'm having issues invoking saveFile.js:
Server starting on port 5000 ...
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET / HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET /flutter_service_worker.js?v=2601178962 HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET /main.dart.js HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET /index.html HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET /assets/AssetManifest.json HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:48] "GET /assets/FontManifest.json HTTP/1.1" 200 -
172.19.0.6 - - [12/Sep/2022 13:37:49] "GET /flutter_service_worker.js?v=2601178962 HTTP/1.1" 304 -

When you run with your "linux client", that client probably doesn't care about CORS, but the browser does.
And from the error message 405 - Method Not Allowed it seems that your backend behind http://api.mydomain.com is not able to handle the OPTIONS request the browser sends as a CORS preflight. You need to add the appropriate CORS handling to that backend, i.e.
add a handler for the OPTIONS request
add the needed CORS headers to the response
Depending on your server framework, this may also be just a setting or some readily available middleware.
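For illustration only, here is a minimal sketch of that handling in plain Go net/http with no framework. The handler path and header values are assumptions for the example; the chi cors middleware shown in the question's edit does the equivalent for you.
package main

import "net/http"

// corsMiddleware adds CORS headers to every response and answers the
// browser's OPTIONS preflight before it reaches the real handlers.
func corsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
        w.Header().Set("Access-Control-Allow-Headers", "Accept, Authorization, Content-Type")
        if r.Method == http.MethodOptions {
            // Preflight handled: reply 204 without invoking the route.
            w.WriteHeader(http.StatusNoContent)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()
    // Hypothetical login route standing in for the real one.
    mux.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"ok": true}`))
    })
    http.ListenAndServe(":8080", corsMiddleware(mux))
}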

Related

CORS error between frontend and backend in a Docker container

I have a problem when trying to connect my Vue.js frontend and my Express.js backend.
I get the following CORS error when making the API call api:5000/subscriptions/product; if I add http:// in front of it, it doesn't find the host:
Access to XMLHttpRequest at 'api:5000/subscriptions/product' from origin
'http://localhost:3000' has been blocked by CORS policy: Cross origin requests
are only supported for protocol schemes:
http, data, chrome, chrome-extension, brave, chrome-untrusted, https
I run my containers with the following docker-compose file:
version: "3.7"
services:
db:
image: rafaelbackx.azurecr.io/resto-database
restart: always
environment:
MYSQL_ROOT_PASSWORD: ...
ports:
- "3306:3306"
volumes:
- "databasevolume:/var/lib/mysql"
api:
image: rafaelbackx.azurecr.io/resto-api
depends_on:
- db
restart: always
environment:
STRIPE_SECRET_KEY: ...
PORT: 5000
HOST: db
USER: root
DATABASE: resto
PASSWORD: ...
FRONT_END_URL: localhost:3000
ADMIN_EMAIL: ...
ports:
- "5000:5000"
frontend:
image: rafaelbackx.azurecr.io/resto-frontend
restart: always
depends_on:
- api
environment:
VITE_API_URL: api:5000
port: 3000
ports:
- "3000:3000"
volumes:
databasevolume:
You can ignore the FRONT_END_URL field in the api service (it is used for a link in an email).
Because you apparently cannot pass environment variables into a built Vue.js application through Docker, I manually replace VITE_API_URL in all the built JS files (I found this solution by googling "environment variables in vue.js", and apparently it is done frequently). However, since I use the same method to let the backend communicate with the database, and it doesn't work for the frontend, I assume the error comes from the manual replacement.
This is the Dockerfile for my frontend:
FROM node:16
ENV port=8080
COPY package*.json ./
RUN npm install
RUN npm install -g http-server
COPY . .
RUN npm run build
# Copy entrypoint script as /entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# Grant Linux permissions and run entrypoint script
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# cmd http-server -p $port dist/
And this is the entrypoint.sh script. Note that I do put quotes around the URL; otherwise the resulting JavaScript would be syntactically invalid, since the bare value would be parsed as a variable.
#!/bin/sh
ROOT_DIR=./dist
ls $ROOT_DIR
URL="'$VITE_API_URL'"
echo $URL
echo "Replacing env constants in JS"
for file in $(find $ROOT_DIR -type f -name \*.js);
do
echo "Processing $file ...";
sed -i "s|VITE_API_URL|$URL|g" $file
done
http-server -p $port dist/
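To illustrate what the replacement loop above does, here is a hypothetical run of the same substitution. The value http://localhost:5000 is only an example, not from the setup above:
$ export VITE_API_URL=http://localhost:5000
$ echo 'fetch(VITE_API_URL + "/subscriptions/product")' > dist/demo.js
$ URL="'$VITE_API_URL'"
$ sed -i "s|VITE_API_URL|$URL|g" dist/demo.js
$ cat dist/demo.js
fetch('http://localhost:5000' + "/subscriptions/product")
Note that whatever value ends up in the bundle is fetched by the browser on the host machine, not from inside the Docker network.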
The CORS problem only appeared when I moved to Docker; before that, everything worked.
import express, { Request, Response, Application } from "express"
import { config } from 'dotenv'
import cors from 'cors'
...
const app : Application = express()
...
app.use(cors())
Backend Dockerfile:
FROM node:16
COPY package*.json ./
RUN npm install
COPY . ./
CMD npm start
UPDATE:
I launched an interactive shell into the frontend container and used curl to replicate the HTTP request, and it worked flawlessly.
I used the following command to get the shell: docker exec -ti bd080341212d /bin/bash (after docker-compose up; the id is the container id of the frontend container), and with curl I got the expected result.
Thanks in advance for any help

Dockerize React app with MongoDB: docker-compose connection problem, JSON error

I'm trying to dockerize a MongoDB / Node.js / React app and I can't make the production build work, although developer mode runs fine and the individual production builds of the API and UI work fine as well. The problem is probably in nginx, because the UI can't POST to the server. When I try the login mechanism I've built, for example, I get:
SyntaxError: Unexpected token < in JSON at position 0, and at my terminal: 172.20.0.1 - - [18/Dec/2021:13:25:49 +0000] "POST /login HTTP/1.1" 405 559 "http://localhost/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
Ui Dockerfile:
# build environment
FROM node:14.18.1 as build
WORKDIR /app
COPY package*.json ./
COPY . .
RUN npm ci
RUN npm run build
# production environment
FROM nginx:1.21.4-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD [ "nginx","-g", "daemon off;"]
Api Dockerfile:
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# Copy app source code
COPY . .
# Expose port and start application
EXPOSE 8080
CMD [ "npm", "start" ]
nginx.conf
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
docker-compose.yml
version: "3"
services:
api:
build: ./api
ports:
- "8080:8080"
depends_on:
- mongo
ui:
build: ./my-app
ports:
- "80:80"
depends_on:
- api
mongo:
image: mongo
ports:
- "27017:27017"
If you want nginx to serve the API as well, look up how to use proxy_pass in nginx or search for "nginx reverse proxy". Currently, nginx only serves the HTML pages, so any request to it will not return JSON, as the error says. Otherwise, your error indicates you've made a request to localhost/login or just /login, which is not the correct address for your api container.
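As a sketch of such a reverse proxy, something like the following could be added to the nginx.conf above. The /api/ prefix and the api:8080 upstream are assumptions based on the compose file in the question, not the asker's actual setup:
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
    # Forward API calls to the api container via its compose service name.
    location /api/ {
        proxy_pass http://api:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
With this in place, the React app can call same-origin paths like /api/login, which sidesteps CORS entirely.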

CORS "Access-Control-Allow-Credentials", "Access-Control-Allow-Origin" response headers appear for local but not docker container

Can anyone help me solve this issue?
I'm running a PHP Laravel backend with Fortify and Sanctum for auth. I also have a React frontend that makes API calls to this backend; both are dockerized. If I run the PHP server locally, there are no issues with CORS. But once the backend runs in a Docker container, the following issue pops up:
CORS Issue
Access to XMLHttpRequest at 'http://localhost:8000/api/tickets' from origin 'http://localhost' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
On the network tab, we see that the Request URL doesn't have the following fields in the Response Headers:
"Access-Control-Allow-Credentials", "Access-Control-Allow-Origin".
Is there a way to have two Docker containers allow CORS?
What I have tried:
Setting up an NGINX container to act as a reverse proxy, including the addition of these header fields:
location / {
    resolver 127.0.0.11;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Accept,Content-Type';
    set $nextjs nextjs_upstream;
    proxy_pass http://$nextjs;
}
Added all the required CORS fields in my config file for the server, i.e. the cors.php file.
<?php
return [
    'paths' => ['*', 'api/*', 'login', 'logout', 'sanctum/csrf-cookie', 'api/tickets'],
    'allowed_methods' => ['*'],
    'allowed_origins' => ['http://localhost', '*'],
    'allowed_origins_patterns' => [],
    'allowed_headers' => ['X-Custom-Header', 'Upgrade-Insecure-Requests', '*'],
    'exposed_headers' => [],
    'max_age' => 0,
    'supports_credentials' => true,
];
Note: this attempt specifically might make sense, since localhost in allowed_origins refers to localhost inside the container. But how would we change it to refer to localhost outside the container? Would it be that container's IP address or something?
Added additional installations in php Dockerfile: "&& a2enmod headers \ && sed -ri -e 's/^([ \t]*)(<\/VirtualHost>)/\1\tHeader set Access-Control-Allow-Origin "*"\n\1\2/g' /etc/apache2/sites-available/*.conf"
Interestingly, my login/logout routes don't face the same CORS error that plagues my api/tickets route, but this is beside the point, since it seems like more of an issue with Docker containers attempting to communicate.
Note: localhost has no port since NGINX serves the yarn/npm build.
Laravel Dockerfile:
# The base image
FROM php:8.0.2 as base
# Install system dependencies i.e. Zip , curl:
RUN apt-get update && apt-get install -y \
    zip \
    unzip
# Get extensions for PHP & install Postgres
RUN apt-get install -y libpq-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql
# Get the latest composer:
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Copy everything to the working directory
COPY . /app
# Set the current working directory
WORKDIR /app
# Install dependencies for laravel
COPY composer.json ./
RUN composer install
# Move the exec file to our workdir and give permissions
COPY docker-init.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-init.sh
# Expose internal port which will be mapped
EXPOSE 8000
# Run our server
ENTRYPOINT [ "docker-init.sh" ]
Frontend Dockerfile:
FROM node:alpine as build
# Set the current working directory
WORKDIR /usr/app
# Install PM2 globally
RUN npm install --global pm2
# Get all dependencies
COPY package*.json ./
# Copy everything to the working directory
RUN npm install --force --production
RUN yarn add --dev typescript @types/react
COPY . ./
RUN npm run build
# Expose internal port which will be mapped
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "npm", "--", "start" ]
Nginx Dockerfile:
# Prod environment
FROM nginx:alpine
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy config files
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-init.sh
#!/bin/bash
# Exit immediately if a pipeline returns a non-zero status
set -e
# Change to our working directory
cd /app
# php artisan migrate
# Serve for all IPv4 addresses:
php artisan serve --host=0.0.0.0 --port=8000
Thanks for the help lads
Solved it; I had to make a change to the app/Http/Kernel.php file. Basically, I had to make 'web' and 'api' the same:
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        // \Illuminate\Session\Middleware\AuthenticateSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
    'api' => [
        EnsureFrontendRequestsAreStateful::class,
        'throttle:api',
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];
Becomes:
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        // \Illuminate\Session\Middleware\AuthenticateSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
    'api' => [
        EnsureFrontendRequestsAreStateful::class,
        'throttle:api',
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];

Config Dockerfile for running cronjob

I'm new to Docker and I'm facing a problem with my custom Dockerfile, which needs some help from you guys. It was working fine until I added some code to run a cronjob in the Docker container.
This is my Dockerfile file:
FROM php:7.2-fpm-alpine
COPY cronjobs /etc/crontabs/root
# old commands
ENTRYPOINT ["crond", "-f", "-d", "8"]
This is cronjobs file:
* * * * * cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1
This is docker-compose.yml file:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx_ctrade
    ports:
      - "8081:80"
    volumes:
      - ./app:/var/www/html
      - ./config/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./config/certs:/etc/nginx/certs
      - ./log/nginx:/var/log/nginx
    depends_on:
      - php
      - mysql
    networks:
      - laravel
    working_dir: /var/www/html
  php:
    build:
      context: ./build
      dockerfile: php.dockerfile
    container_name: php_ctrade
    volumes:
      - ./app:/var/www/html
      - ./config/php/php.ini:/usr/local/etc/php/php.ini
    networks:
      - laravel
  mysql:
    image: mysql:latest
    container_name: mysql_ctrade
    tty: true
    volumes:
      - ./data:/var/lib/mysql
      - ./config/mysql/my.cnf:/etc/mysql/my.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=admin
      - MYSQL_DATABASE=laravel
      - MYSQL_PASSWORD=secret
    networks:
      - laravel
I rebuilt the Docker images and ran them. The cronjob works fine, but when I access the site at localhost:8081, it isn't working anymore. The page shows 502 Bad Gateway, so I checked the nginx error log. This is the error nginx shows me:
2020/04/10 13:33:36 [error] 8#8: *28 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.224.1, server: localhost, request: "GET /trades HTTP/1.1", upstream: "fastcgi://192.168.224.3:9000", host: "localhost:8081", referrer: "http://localhost:8081/home"
All the containers are still running after the update.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2403ece8509 nginx:stable-alpine "nginx -g 'daemon of…" 18 seconds ago Up 17 seconds 0.0.0.0:8081->80/tcp nginx_ctrade
69032097b7e4 ctrade_php "docker-php-entrypoi…" 19 seconds ago Up 18 seconds 9000/tcp php_ctrade
592b483305d5 mysql:latest "docker-entrypoint.s…" 3 hours ago Up 18 seconds 3306/tcp, 33060/tcp mysql_ctrade
Has anyone hit this issue before? Any help would be appreciated! Thanks so much!
According to the documentation, running two (or more) services inside of a Docker container breaks its philosophy of single responsibility.
It is generally recommended that you separate areas of concern by
using one service per container. That service may fork into multiple
processes (for example, Apache web server starts multiple worker
processes). It’s ok to have multiple processes, but to get the most
benefit out of Docker, avoid one container being responsible for
multiple aspects of your overall application. [...]
If you choose to follow this recommendation, you will end up with two options:
Option 1. Create a separate container that will handle the scheduling tasks.
Example:
# File: Dockerfile
FROM php:7.4.8-fpm-alpine
COPY ./cron.d/tasks /cron-tasks
RUN touch /var/log/cron.log
RUN chown www-data:www-data /var/log/cron.log
RUN /usr/bin/crontab -u www-data /cron-tasks
CMD ["crond", "-f", "-l", "8"]
# File: cron.d/tasks
* * * * * echo "Cron is working :D" >> /var/log/cron.log 2>&1
# File: docker-compose.yml
services:
  [...]
  scheduling:
    build:
      context: ./build
      dockerfile: cron.dockerfile
    [...]
Option 2. Use the host's own crontab to execute the scheduled tasks on the containers (as advocated in this post).
Example:
# File on host: /etc/cron.d/my-laravel-apps
* * * * * root docker exec -t laravel-container-A php artisan schedule:run >> /dev/null 2>&1
* * * * * root docker exec -t laravel-container-B php artisan schedule:run >> /dev/null 2>&1
* * * * * root docker exec -t laravel-container-C php artisan schedule:run >> /dev/null 2>&1
PS: In your case, replace the laravel-container-* names with php_ctrade.
Option 3: Use supervisord
On the other hand, if you really want just one container, you may still use supervisord as your main process and configure it to initialize (and supervise) both the php-fpm and crond processes.
Note that this is a moderately heavy-weight approach and requires you
to package supervisord and its configuration in your image (or base
your image on one that includes supervisord), along with the different
applications it manages.
You will find an example of how to do it here.
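As a rough sketch only (the file layout and program commands below are assumptions for an alpine php-fpm image like the one above, not taken from that example):
# File: supervisord.conf
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F
autorestart=true

[program:crond]
command=crond -f -l 8
autorestart=true
and in the Dockerfile:
RUN apk add --no-cache supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]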
References:
https://docs.docker.com/config/containers/multi-service_container/
Recommended reading:
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container

AWS CodeDeploy times out on ApplicationStart when deploying a Node.js/Express server

I have run into a problem when trying to set up an AWS CodePipeline. My ApplicationStart script makes a call to start the Express server listening on port 60900, but because express.listen() holds the command line while it listens, the ApplicationStart script times out and my deployment fails.
I've tried moving it to a background process with an & at the end of the command that starts the server, but I'm still getting the error at the ApplicationStart hook.
When I run my start_server.sh script manually, it starts the server almost instantly and gives me back control of the command line.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/mbmbam.app/
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root
    - location: scripts/remove_previous.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/change_permissions.sh
      timeout: 300
      runas: root
    - location: scripts/install_app.sh
      timeout: 300
      runas: root
    - location: scripts/install_db.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: ubuntu
scripts/start_server.sh
#!/usr/bin/env bash
NODE_ENV=production npm start --prefix /var/www/mbmbam.app/app
The script assigned to the npm start command:
app/start_app.sh
#!/bin/sh
if [ "$NODE_ENV" = "production" ]; then
  node server.js &
else
  nodemon --ignore './sessions' server.js;
fi
AWS CodeDeploy error
LifecycleEvent - ApplicationStart
Script - scripts/start_server.sh
[stdout]
[stdout]> mbmbam-search-app@1.0.0 start /var/www/mbmbam.app/app
[stdout]> ./start_app.sh
[stdout]
Any help would be appreciated. I've been stuck on this for a day or so.
I solved it by changing start_app.sh to:
#!/bin/sh
if [ "$NODE_ENV" = "production" ]; then
  node server.js > app.out.log 2> app.err.log < /dev/null &
else
  nodemon --ignore './sessions' server.js;
fi
It looks like AWS even lists this in their troubleshooting steps:
https://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-deployments.html#troubleshooting-long-running-processes
The issue seems to be Node not detaching cleanly into the background.
Can you try the following way to start the Node server in app/start_app.sh:
$ nohup node server.js > /dev/null 2>&1 &
Also, I would suggest looking into making your Node process a service so that it is started if the server is rebooted:
https://stackoverflow.com/a/29042953/12072431
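For instance, a minimal systemd unit along these lines could work; the unit name and paths here are assumptions based on the paths in the question, and the linked answer covers alternatives such as PM2 and forever:
# File: /etc/systemd/system/mbmbam.service
[Unit]
Description=mbmbam search app
After=network.target

[Service]
Environment=NODE_ENV=production
WorkingDirectory=/var/www/mbmbam.app/app
ExecStart=/usr/bin/node server.js
Restart=always
User=ubuntu

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now mbmbam.service, and the ApplicationStart hook can then become a simple systemctl restart.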
