How to enable HTTP/2 support in CloudFront using serverless?
I am using serverless (https://github.com/serverless/serverless) to create a simple serverless application that serves HTML + JS from an S3 bucket using CloudFront.
To this end, I am writing a serverless.yml template that will build this setup for me.
What do I need to include/configure in the template in order to enable HTTP/2 support in CloudFront?

This can be achieved by adding HttpVersion: 'http2' under the DistributionConfig property. See the full example below.
## Specifying the CloudFront Distribution to serve your Web Application
WebAppCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      HttpVersion: 'http2'
      Origins:
        - DomainName: ${self:custom.s3Bucket}.s3.amazonaws.com
          ## An identifier for the origin which must be unique within the distribution
          Id: WebApp
          CustomOriginConfig:
            HTTPPort: 80
            HTTPSPort: 443
            OriginProtocolPolicy: https-only
          ## In case you want to restrict bucket access, use S3OriginConfig and remove CustomOriginConfig
          # S3OriginConfig:
          #   OriginAccessIdentity: origin-access-identity/cloudfront/E127EXAMPLE51Z
      Enabled: 'true'
      ## Uncomment the following section in case you are using a custom domain
      # Aliases:
      #   - mysite.example.com
      DefaultRootObject: index.html
      ## Since the Single Page App is taking care of the routing, we need to make sure every path is served with index.html.
      ## The only exception is files that actually exist, e.g. app.js, reset.css.
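As a side note on the last two comments: one common way to implement that SPA fallback (a minimal sketch, not part of the original template, so adjust the values to your setup) is a CustomErrorResponses block under DistributionConfig that maps S3's 403/404 responses back to index.html:

      ## Sketch: return index.html with a 200 for paths that don't exist in the bucket
      CustomErrorResponses:
        - ErrorCode: 403
          ResponseCode: 200
          ResponsePagePath: /index.html
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /index.html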

Related

Why does an anonymous HttpTrigger Azure Function throw a 500 internal server error when 'code' is a param in the query string?

I have a Function App running in a container in Kubernetes. One of my endpoints is an HttpTrigger with anonymous access. However, the query string contains a parameter named code (supplied by a 3rd-party vendor, so I have no control over its name) that causes the app to throw a 500 error, with no log indicating what happened. The odd part is that if I deploy the same function to an Azure Function App, everything works as expected. So my question is: what configuration or environment variables need to be set for this to behave correctly?
Related to this, as a follow-up question: Azure Function running in AKS throws 500 on query string parameter for http trigger function
The issue turned out to be that the runtime tries to write files to the azure-functions-host/Secrets directory for anonymous functions where code is a parameter in the query string. Due to the way Kubernetes mounts volumes for secrets, when it creates the directory it sets the permissions in a read-only fashion, even if readOnly is false.
As a workaround, I ended up creating the directory in the Dockerfile:
# To enable ssh & remote debugging on app service change the base image to the one below
# FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
FROM mcr.microsoft.com/azure-functions/dotnet:3.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
    FUNCTIONS_WORKER_RUNTIME=dotnet
EXPOSE 80 443
RUN mkdir azure-functions-host/Secrets
COPY . /home/site/wwwroot
In the Kubernetes deployment file I mounted the specific file into that directory, so that the mount action did not mess with the directory permissions:
volumeMounts:
  - name: functionhostkeys-store
    mountPath: "/azure-functions-host/Secrets/host.json"
    subPath: "host.json"
    readOnly: false
This approach allowed the runtime to still write to that directory as needed, while letting me manage my function keys in Azure KeyVault and mount them at runtime in a known configuration.
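For completeness, a volumeMounts entry like the one above needs a matching volumes entry in the pod spec. A minimal sketch (the secret name functionhostkeys is my assumption here, e.g. a secret synced from Azure KeyVault, so substitute your own):

volumes:
  - name: functionhostkeys-store
    secret:
      secretName: functionhostkeys  # hypothetical name; e.g. synced from Azure KeyVault
      items:
        - key: host.json
          path: host.json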

Cannot access RabbitMQ UI on docker container

I'm currently working on a project where I have a virtual machine on Microsoft Azure, and I'm trying to have multiple Docker containers accessed through different routes with the help of a Traefik reverse proxy. Besides the reverse proxy, the first service I need is RabbitMQ, and I should be able to access its user interface on a /rmq route. Right now, I have the following docker-compose file to build both services:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
And this is the content of my traefik.toml file:
logLevel = "DEBUG"
debug = true
[api]
dashboard = true
insecure = false
debug = true
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[log]
level = "DEBUG"
format = "json"
The enabled_plugins file specifies which RabbitMQ plugins should be activated. Here, I have the rabbitmq_management plugin (among others), which I think is needed to access the RabbitMQ UI. I even checked the logs of the RabbitMQ container, and apparently rabbitmq_management was properly started:
rabbitmq_1 | 2021-01-30 15:50:30.538 [info] <0.730.0> Server startup complete; 7 plugins started.
rabbitmq_1 | * rabbitmq_stomp
rabbitmq_1 | * rabbitmq_federation_management
rabbitmq_1 | * rabbitmq_mqtt
rabbitmq_1 | * rabbitmq_federation
rabbitmq_1 | * rabbitmq_management
rabbitmq_1 | * rabbitmq_web_dispatch
rabbitmq_1 | * rabbitmq_management_agent
rabbitmq_1 | completed with 7 plugins.
rabbitmq_1 | 2021-01-30 15:50:30.539 [info] <0.730.0> Resetting node maintenance status
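For reference, the enabled_plugins file is a single Erlang list term ending with a dot; based on the plugins listed in the log, mine looks roughly like this:

[rabbitmq_management,rabbitmq_management_agent,rabbitmq_web_dispatch,
 rabbitmq_federation,rabbitmq_federation_management,rabbitmq_mqtt,rabbitmq_stomp].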
With these configurations running via docker-compose up, if I try to access HOST.com/rmq, I get a 502 (Bad Gateway) error on the console of my browser. Initially, this was where I was stuck. However, after searching for some help online, I found a different way to specify the Traefik port through the RabbitMQ container labels (traefik.http.services.rabbitmq.loadbalancer.server.port=15672). With this modification, I don't get the Bad Gateway error anymore, but I get a lot of ERR_ABORTED 404 (Not Found) errors on the console of my browser (the list below does not contain all the errors):
rmq:7 GET http://HOST.com/js/ejs-1.0.min.js net::ERR_ABORTED 404 (Not Found)
rmq:18 GET http://HOST.com/js/charts.js net::ERR_ABORTED 404 (Not Found)
rmq:19 GET http://HOST.com/js/singular/singular.js net::ERR_ABORTED 404 (Not Found)
Refused to apply style from 'http://HOST.com/css/main.css' because its MIME type ('text/plain') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
rmq:27 Uncaught ReferenceError: sync_get is not defined at rmq:27
I don't have much experience with this kind of project, and I don't know if I'm doing something wrong or if something is missing in these configurations or in the configuration of the virtual machine itself. Do you know what I should do to be able to access the RabbitMQ UI with the URL HOST.com/rmq?
If I get this running properly, I think I would also be able to configure Docker to only allow access to the Traefik UI via a route such as HOST.com/dashboard, instead of accessing it only with the bare URL.
Thanks in advance!
Solved it. I don't know why, but when I added the configuration traefik.http.services.rabbitmq.loadbalancer.server.port=15672, I had also changed the order of the lines traefik.http.routers.rabbitmq.middlewares=strip-docs and traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq, making the prefix definition appear before the middleware assignment. Changed that, and now I can access the RabbitMQ UI on HOST.com/rmq. So my final docker-compose was this:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.http.services.rabbitmq.loadbalancer.server.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
I'll mark this question as solved, but if you know why the order of these 2 lines matters, please explain for future reference.
Thanks!
Trace of how I determined an answer to suggest for this question, given that I haven't used the specific tools:
By searching for "rabbitmq admin url", I found the RabbitMQ management docs page, which, near the top, mentions support for a path prefix setting. I searched the page for that and, under the relevant heading, found that you will likely need to set this in your RabbitMQ config:
management.path_prefix = /rmq
So, to apply it to your Docker config, I looked up the rabbitmq Docker image, which explains that configuration files need to be injected via a bind mount, or can be provided via an esoteric Erlang config mechanism which I'd personally not mess with. Therefore, the steps I'd follow from here would be:
- Look in the existing rabbitmq image to find out what the default config file in /etc/rabbitmq/rabbitmq.conf is, e.g. by running docker-compose run rabbitmq cat /etc/rabbitmq/rabbitmq.conf, or an appropriate docker cp command if it turns out rabbitmq sets a Docker ENTRYPOINT which prevents use of shell commands on the image command line.
- Add a volume just like you have with enabled_plugins, but move it one directory upward, mapping rabbit/ to /etc/rabbitmq/, and then put the default config from the container in rabbit/ (see the sketch below).
- Add the management.path_prefix line to that config file.
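As a sketch of that second step (assuming the config file is really named rabbitmq.conf, which you should verify against the image), the rabbitmq service's volumes section would become something like:

volumes:
  # map the whole rabbit/ directory so both enabled_plugins and rabbitmq.conf land in /etc/rabbitmq
  - ./rabbit:/etc/rabbitmq

with rabbit/rabbitmq.conf containing the management.path_prefix = /rmq line from above.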
With any luck that should at least get you closer. I'm curious to hear how it goes!
By the way, while looking at the rabbitmq Docker image docs, I discovered that there are special tags for when you need management interface support. You may find that you need to switch to one of those instead of plain 3-alpine in order for this to work, e.g. rabbitmq:3-management-alpine.

500 HTTP Error When Deploying to DigitalOcean App Platform via Doctl

I've created a YAML spec for the DO App Platform based on the sample in this repository.
The reason I couldn't simply use the UI on the DigitalOcean website is that my project is a monorepo.
The spec looks like this:
name: unique-expressions
services:
  - name: api
    environment_slug: node-js
    github:
      repo: Valencian-Digital/unique-expressions
      branch: main
      deploy_on_push: true
    source_dir: api
    routes:
      - path: /api
But when I try to execute
doctl apps create --spec .do/app.yaml
it returns a 500 error and nothing else. I've tried executing the command both in a GitHub Action and locally, with different API tokens.
I'm able to access other resources on my DO account, but I can't successfully create the spec.
This is the specific error I get back from doctl:
Error: POST https://api.digitalocean.com/v2/apps: 500 Server Error
Do y'all know what could be going wrong?
So I discovered what was going wrong. Essentially, the branch name was wrong (I had put main instead of master).
You can find more info here: https://github.com/digitalocean/doctl/issues/883.
TL;DR - this is the right config:
name: unique-expressions
services:
  - name: api
    environment_slug: node-js
    github:
      repo: Valencian-Digital/unique-expressions
      branch: master
      deploy_on_push: true
    source_dir: api
    routes:
      - path: /api

How can I configure IIS subfeatures in Ansible?

New to Ansible, I'm experimenting with setting up a website under IIS.
I can create and configure an application pool, but I'm struggling with the website. The basic site works; HTTPS/SSL is still troublesome, but I read there are some bugs in the win_iis_website/win_iis_webbinding modules that are being worked on. The part I'm stuck on is IIS's per-site features.
In IIS (in the GUI) there are sub-features that can be configured for a site (shown in the IIS Manager's feature view).
I was unable to find out how to configure these using Ansible (more specifically, Ansible's win_iis_website module).
I'm looking to configure ASP, handler mappings, URL rewrites and default documents.
Is there any way to do so?
My current yml for creating the site looks like this:
- name: create new website {{ websitename }}
  win_iis_website:
    name: "{{ websitename }}"
    state: started
    port: 443
    ip: "*"
    ssl: true
    hostname: "{{ websitename }}"
    application_pool: "{{ websitename }}"
    physical_path: c:\inetpub\wwwroot\{{ websitename }}
    parameters: logfile.directory:c:\inetpub\logs\
  register: website
I am currently writing playbooks for IIS, and indeed there is no particular module that lets you modify the features of these sections. I looked in several places and the information was very scarce; there are modules for the application pool, but for this you have to use win_shell, as follows:
- name: Name of playbook
  win_shell: |
    <PowerShell command>
You can base this on the CIS Benchmark guide for IIS.
Check the win_feature module:
- name: Install IIS Web-Server with sub features and management tools
  win_feature:
    name: Web-Server
    state: present
    restart: True
    include_sub_features: True
    include_management_tools: True
If you want to do this in a more controlled manner, check the available features with the command:
Get-WindowsFeature
And add them like:
- name: Install IIS
  win_feature:
    name: "Web-Filtering,Web-Dir-Browsing,Web-Default-Doc"
    state: present
    restart: no
    include_sub_features: no
    include_management_tools: yes

How to send node.js logs to Cloudwatch Logs from Elastic Beanstalk Docker application?

Amazon offers these ready-made files for sending Tomcat/Apache/nginx logs to CloudWatch Logs, which work great:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
However, for my purposes they only send nginx logs, which isn't really sufficient, and unfortunately they also provide zero documentation on the file format. What I'm trying to achieve is to send node.js logs from my Docker application to CloudWatch (since autoscaling makes instances come and go).
So I want files like /var/log/eb-docker/containers/eb-current-app/add839a3b599-stdouterr.log to appear in CloudWatch.
What I have tried so far is to adapt the webrequests config from the link above:
##############################################################################
## Sends docker logs to CloudWatch Logs
##############################################################################
Mappings:
  CWLogs:
    ApplicationLogGroup:
      LogFile: "/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log"
      TimestampFormat: "%Y-%m-%d %H:%M:%S"

Outputs:
  ApplicationLogGroup:
    Description: "The name of the CloudWatch Logs log group created for this environment's web server access logs. You can specify this by setting the value for the environment variable: WebRequestCWLogGroup. Please note: if you update this value, then you will need to go and clear out the old CloudWatch Logs group and delete it through CloudWatch Logs."
    Value: { "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }

Resources:
  AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup: ## Must have prefix: AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0
    Type: "AWS::Logs::LogGroup"
    DependsOn: AWSEBBeanstalkMetadata
    DeletionPolicy: Retain ## this is required
    Properties:
      LogGroupName:
        "Fn::GetOptionSetting":
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: ApplicationLogGroup
          DefaultValue: { "Fn::Join": ["-", [{ "Ref": "AWSEBEnvironmentName" }, "stdouterr"]] }
      RetentionInDays: 14

  ## Register the files/log groups for monitoring
  AWSEBAutoScalingGroup:
    Metadata:
      "AWS::CloudFormation::Init":
        CWLogsAgentConfigSetup:
          files:
            ## any .conf file put into /tmp/cwlogs/conf.d will be added to the cwlogs config (see cwl-agent.config)
            "/tmp/cwlogs/conf.d/stdouterr.conf":
              content: |
                [stdouterr]
                file = `{"Fn::FindInMap":["CWLogs", "ApplicationLogGroup", "LogFile"]}`
                log_group_name = `{ "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }`
                log_stream_name = {instance_id}
                datetime_format = `{"Fn::FindInMap":["CWLogs", "ApplicationLogGroup", "TimestampFormat"]}`
              mode: "000400"
              owner: root
              group: root
Unfortunately this doesn't seem to work. :/
Also, does anyone have any idea whether logs appear at all if, e.g., the timestamp format is wrong? This is especially important since, by default, exceptions don't have timestamps, so the actual errors would just disappear.
My application log lines currently look like this:
2016-07-05 09:11:31 ::1 - GET / 200 (5.107 ms)
You can use this link to set up CloudWatch agents on your Beanstalk instances (if you haven't already): http://serebrov.github.io/html/2015-05-20-cloudwatch-setup.html
Next, try to send the files in /var/lib/docker/containers//.json to collect your Docker logs. That is where the containers' stdout and stderr are written.
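As a sketch of what such an entry could look like, mirroring the stdouterr.conf content block from the question (the wildcard path assumes the standard json-file log driver layout, so verify it on an actual instance before relying on it):

[docker-json]
file = /var/lib/docker/containers/*/*-json.log
log_group_name = `{ "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }`
log_stream_name = {instance_id}
# verify datetime_format against the actual timestamps inside those JSON log files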
