Mongoose Web Server Config

I'm using the binary version of the Mongoose embedded web server (mongoose-free-6.4.exe) to test a small project locally. I noticed that all responses are served with the following header:
Content-Type: text/plain
Is it possible to configure it so that everything is served as UTF-8 instead, i.e.:
Content-Type: text/html; charset=utf-8
I've been poking around, but I couldn't find an easy way to do this. Does a change like this require recompiling? Did I miss something?

I actually found a way to do this in mongoose.conf:
# Add your special mime types here
m *.html=text/html; charset=utf-8
On the other hand, I couldn't find a way to change the default Content-Type. The only related reference I found is this GitHub issue, which has limited information:
https://github.com/cesanta/fossa/issues/238
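As a quick sanity check after editing mongoose.conf, you can inspect the header the server actually sends. A minimal sketch in Python, assuming Mongoose is listening on localhost:8080 (adjust the URL/port to your setup):

import urllib.request

# Request a page from the local Mongoose instance and print the
# Content-Type header it responds with (hypothetical URL/port).
with urllib.request.urlopen("http://localhost:8080/index.html") as resp:
    print(resp.headers.get("Content-Type"))
    # Expected after the mongoose.conf change:
    # text/html; charset=utf-8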

Related

Setting custom header with API gateway non-proxy lambda and binary output

Is it possible to set a custom header when using Lambda non-proxy integrations?
At the moment I have binary support enabled and I am returning data straight from my handler. I have a requirement to set the file name of the download and was planning to use Content-Disposition: attachment; filename="filename.xlsx", but I am not sure how I can do this with Lambda proxy integration turned off.
Reading this: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html I am not sure whether it only works for JSON responses.
The example shows the body content as taking a JSON object, but then says there is a base64 encoding option for binary support. I am just returning my binary data straight from my function, and I had not planned to use Lambda proxy at all if possible.
I currently have files downloading, but I am using temporary files and I want to name the downloads.
# In my service
import tempfile
from io import BytesIO
import pandas as pd

with tempfile.NamedTemporaryFile(suffix=".xlsx") as tmp:
    pd.DataFrame(report_json).to_excel(tmp.name)
    bytes_io = BytesIO(tmp.read())
    return bytes_io

# In my handler
import base64
return base64.b64encode(bytes_io.getvalue())
Using later versions of the Serverless Framework, a custom header such as Content-Disposition can be set like the following:
integration: lambda
response:
  headers:
    Content-Type: "'text/csv'"
    Content-Disposition: "'attachment; filename=abc.csv'"
I am not sure yet if it is possible to interpolate values from the context into these values.
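For completeness, with a non-proxy (lambda) integration the function itself only returns the base64 payload; the headers above are attached by the integration response, not by the code. A minimal handler sketch, assuming binary support is enabled; build_xlsx stands in for the service code from the question and the event shape depends on your mapping template:

import base64

def handler(event, context):
    # With a non-proxy integration the event shape is defined by the
    # request mapping template (assumption: the report data is passed in "report").
    report_json = event.get("report", [])
    xlsx_bytes = build_xlsx(report_json)  # hypothetical helper wrapping the pandas/tempfile code above
    # Return only the base64 string; API Gateway decodes it for binary media types,
    # and Content-Type / Content-Disposition come from the integration response headers.
    return base64.b64encode(xlsx_bytes).decode("utf-8")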

How do you remove standard headers from the HTTP response of Azure Functions?

Is it possible to remove these headers?
I have a .NET solution with several Azure Functions that show all the header information in the HTTP response when they are called. There is no web.base.config-type file where I can add 'removeServerHeader = true', which is how I have solved this problem previously in ASP.NET projects.
However, in my Azure Functions solution there is only a host.json file, and I don't think it can be used to do something similar.
I have seen a fix for this in the Git repo here, but I'm not exactly sure how to implement it so that the headers are removed.
Can anyone help with this please? Or point me in the right direction. Thanks!
Firstly, the fix you link to is about the ASP.NET version header. This is fixed in Functions v1 (release version v1.0.11510), so you won't get that header.
As for the Server header: yes, in my test I get this header too with a v1 function. Here is the fix detail: it removes the X-Powered-By and Server headers, and the runtime version should be 2.0.12493. So you don't actually have to use the v3 runtime; the latest v2 runtime already removes these headers.
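If you want to confirm which headers your deployed runtime still sends, you can simply inspect a response from one of your functions. A small sketch, with a placeholder function URL (add your function key if required):

import urllib.request

# Placeholder URL for one of the deployed functions.
with urllib.request.urlopen("https://myfuncapp.azurewebsites.net/api/myfunction") as resp:
    for name in ("Server", "X-Powered-By", "X-AspNet-Version"):
        print(name, "->", resp.headers.get(name, "<not present>"))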

Deploying CloudFront in front of NodeJS Express

I have an app, written in NodeJS/Express, which serves a dynamic HTML page and some static files.
I deployed an AWS CloudFront distribution in front of it; however, only the HTML goes through and all the static files result in 404. The response headers look like:
Age:116
Connection:keep-alive
Content-Length:170
Content-Security-Policy:default-src 'self'
Content-Type:text/html; charset=utf-8
Date:Mon, 09 Oct 2017 14:37:23 GMT
Server:nginx/1.6.2
Via:1.1 523db8f46d98334ac6b5debbf315e15b.cloudfront.net (CloudFront), 1.1 proxy1.xxx.yy (squid/4.0.17)
X-Amz-Cf-Id:5UYpluGn8TxUxsxmmDYYiZnjbOWbZ7iFFit55mmgcN6IbAJHCEAX6Q==
X-Cache:MISS from proxy1.xxx.yy
X-Cache:Error from cloudfront
X-Cache-Lookup:MISS from proxy1.xxx.yy:3128
X-Content-Type-Options:nosniff
X-Powered-By:Express
For info, my Node.js app runs on some port, and nginx reverse-proxies it to the domain I specified using proxy_pass.
As you can see I'm behind another proxy, but this cannot be the problem.
What I believe is happening is that my origin looks like mydomain.com/path/app_id, while Express serves static files from mydomain.com/.
Has anyone successfully deployed CloudFront in front of NodeJS/Express for static files? I really don't understand what the problem is.
Thanks!
To serve files from another path, the flow is as follows:
Add all necessary origins (in this case mydomain.com/path/app_id AND mydomain.com (without trailing /))
Add behaviours on your distribution for every file type. In this case static files are stored in different folders (like /img), so we can add behaviours for img/*, js/* and css/*. Each behaviour can then be pointed at a single origin; in this case we choose mydomain.com, which we previously named the 'Static files' origin.
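A quick way to see which hop produces the 404 is to request the same asset both through CloudFront and directly at the origin and compare status codes. A small sketch with placeholder URLs:

from urllib import request, error

def status(url):
    # Return the HTTP status for a URL, treating 4xx/5xx as data rather than exceptions.
    try:
        with request.urlopen(url) as resp:
            return resp.status
    except error.HTTPError as exc:
        return exc.code

# Placeholder URLs: the same static asset via the distribution and via the origin.
print(status("https://dxxxxxxxx.cloudfront.net/img/logo.png"))
print(status("https://mydomain.com/path/app_id/img/logo.png"))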

Change the name and extension of a static file served with node

So I have some static files stored in an uploads folder in my Node.js application, but their names and extensions (all of them are PDFs) were replaced by a MySQL CHAR(32) string reference in the DB.
I need to serve them with a comprehensible name (autogenerated on the server) and their original extension restored. Any hints?
You can accomplish this with HTTP headers.
Content-disposition: attachment; filename=Example.pdf
Content-type: application/pdf
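The idea is framework-agnostic: look up the stored hash name, then stream the file back with a friendly filename and the real MIME type in those two headers. A minimal sketch (in Python/Flask here rather than Express, purely to show where the headers go; the route and lookup are hypothetical):

from flask import Flask, Response

app = Flask(__name__)

@app.route("/files/<file_ref>")
def download(file_ref):
    # file_ref is the CHAR(32) reference stored in the DB (hypothetical lookup).
    with open(f"uploads/{file_ref}", "rb") as f:
        data = f.read()
    return Response(
        data,
        headers={
            "Content-Type": "application/pdf",
            "Content-Disposition": 'attachment; filename="Example.pdf"',
        },
    )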

AWS S3 returns 200 OK but the parser fails if ContentEncoding: 'gzip'

My first deploy to AWS.
The files are all in place, and index.html loads.
There are two files in a subdir, one .js and one .css.
They both return 200 but fail to load. Chrome says it's the 'parser'.
After trying a few things, I noted that this property is causing it: ContentEncoding: "gzip".
If I remove this property the files are found correctly.
Am I using this property incorrectly?
I am using the Node AWS SDK via this great project: https://github.com/MathieuLoutre/grunt-aws-s3
You can witness this behavior for yourself at http://tidepool.co.s3-website-us-west-1.amazonaws.com/
If you specify Content-Encoding: gzip then you need to make sure that the content is actually gzipped on S3.
From what I see in this CSS file:
http://tidepool.co.s3-website-us-west-1.amazonaws.com/08-26_6483218-dirty/all-min.css
the actual content is not gzipped, but the Content-Encoding: gzip header is present.
Also keep in mind that S3 is unable to compress your content on the fly based on the Accept-Encoding header in the request. You can either store it uncompressed and it will work for all browsers/clients or store it in a compressed format (gzip/deflate) and it will only work on some clients that can work with compressed content.
You could also take a look at the official AWS SDK for Node.js.
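In other words, compress first and keep the headers consistent with what is actually stored. A minimal sketch with the Python SDK (boto3), rather than the Node SDK mentioned above, just to show the pairing of a gzipped body and Content-Encoding (bucket and key are placeholders):

import gzip
import boto3

s3 = boto3.client("s3")

with open("all-min.css", "rb") as f:
    compressed = gzip.compress(f.read())  # actually gzip the bytes before upload

s3.put_object(
    Bucket="my-bucket",                      # placeholder bucket name
    Key="08-26_6483218-dirty/all-min.css",
    Body=compressed,
    ContentType="text/css",
    ContentEncoding="gzip",                  # now the header matches the stored content
)

The same pairing applies with the Node SDK or grunt-aws-s3: whenever ContentEncoding is set to gzip, the body you upload must already be gzipped.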
