What ColdFusion server settings can affect superagent 'get' content types?

Our shop is running two Adobe ColdFusion servers, a development and a production. Production is on ColdFusion 2018 and development is on ColdFusion 2021. When upgrading the development server to 2021, we chose to migrate the server administrator panel settings manually and most likely missed several. We now have the following problem:
In one of our web apps, a GET request (made with superagent) is sent to a REST API URL to pull in some JSON. On production, the response comes back with a 'content-type' of 'application/json', but on the development server it comes back with a 'content-type' of 'text/plain'. This breaks code downstream because superagent no longer parses the body: the payload ends up in 'response.text' rather than 'response.body'. Relevant snippet:
// `request` is superagent, imported elsewhere in the module
export const makeGetRequest = (url, callback) => {
  // track in-flight requests on a global counter
  window.pendingRequests ? ++window.pendingRequests : window.pendingRequests = 1;
  request
    .get(url)
    .set('Content-Type', 'application/json')
    .end((err, response) => {
      --window.pendingRequests;
      const data = response.body;
      if (data === null) {
        console.log(url);
      }
      if (err) callback(err);
      else if (data && data.Message) callback(new Error(data.Message));
      else callback(null, data);
    });
};
Note that manually setting the content type to application/json with .set does not resolve the issue. Neither does reading response.text instead of response.body; the text contents are formatted differently.
The scripts are identical on development and production, so we are left to conclude that the problem must be a server setting related to JSON serialization, response content-type headers, or something in that domain. Does anyone know which setting it is? The ColdFusion admin panel is large, and we've spent a good deal of time crawling through it changing settings to match production, with no success so far.
Tried manually setting the content type of the response header to application/JSON. Contents do not parse correctly in downstream code.
Tried accessing the response.text field instead of the null response.body field. Contents do not parse correctly in downstream code.
Have been searching for a ColdFusion admin panel setting that can account for the difference between our installations, but have not yet identified a setting that can resolve the issue.
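For reference, a client-side stopgap (not a fix for the underlying server setting) is to fall back to JSON.parse when superagent leaves the body unparsed. This is a minimal sketch assuming the dev server really does return valid JSON under text/plain; the extractJson name is made up:
// Sketch only: parse the payload ourselves when the server mislabels it as text/plain.
const extractJson = (response) => {
  if (response.body !== null && response.body !== undefined) {
    return response.body;               // superagent already parsed it (application/json)
  }
  try {
    return JSON.parse(response.text);   // fall back when content-type was text/plain
  } catch (e) {
    return null;                        // not JSON after all
  }
};
Inside the .end callback, const data = extractJson(response); would then replace const data = response.body;.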

Can you test the GET request to the CF API endpoint with an independent program like Postman? If so, does the request show different content types?
Can you export the CF admin settings from the production server as a CAR file, then import that into the development server? Likely there's a setting difference between the two servers. Alternatively, you could compare the neo*.xml files between the two to see if any specific settings jump out as the culprit. You can't copy the files between versions of ACF, but you should be able to compare the respective setting values.
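As an illustration of the comparison approach, here is a rough Node sketch that flags which neo*.xml files differ between two copied-down config directories. The directory paths are placeholders, and this only tells you which files to inspect by hand:
// Sketch: report which neo*.xml files differ between two ColdFusion config dumps.
// The paths are hypothetical; copy each server's config files somewhere readable first.
const fs = require('fs');
const path = require('path');

const prodDir = './config-prod';   // placeholder
const devDir = './config-dev';     // placeholder

fs.readdirSync(prodDir)
  .filter((name) => /^neo.*\.xml$/.test(name))
  .forEach((name) => {
    const prodFile = fs.readFileSync(path.join(prodDir, name), 'utf8');
    const devPath = path.join(devDir, name);
    if (!fs.existsSync(devPath)) {
      console.log(`${name}: missing on dev`);
    } else if (prodFile !== fs.readFileSync(devPath, 'utf8')) {
      console.log(`${name}: differs, inspect by hand`);
    }
  });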

Related

Cookie not being set from node typescript request

I'm trying to set a cookie in a node request. I have tried using packages like js-cookie, cookie-js, cookie and cookie-manager but none work.
The way I have tried it is very straightforward: whenever my endpoint gets called, i.e. https://develop.api/sess/init, I set the cookie at the very beginning of the endpoint with the following code
import * as Cookies from 'js-cookie';

export const init = async (event: APIGatewayEvent, context: Context) => {
  ...
  Cookies.set('hello', 'hello');
  ...
}
As my endpoint requires an auth header, I cannot call it directly from the browser's address bar due to missing permissions, so I generated the fetch call with Postman and pasted it into my browser's console. The function is the following
var myHeaders = new Headers();
myHeaders.append("Referer", "accepted.referer.com");
myHeaders.append("key", "somekey");

var requestOptions = {
  method: 'GET',
  headers: myHeaders,
  redirect: 'follow'
};

fetch("https://develop.api/sess/init", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
Once called, the request successfully returns the expected response, but it never shows a Set-Cookie header in the Network tab, nor does my cookie appear in the Application tab.
I should mention that I also looked for the cookie when making the call from Postman, but it never gets set there either.
I have also tried running the application on localhost: I get a successful response, but my cookie is still not being set.
As for the package shown in the code, as I said, I have tried several different ones and their respective implementations, so I don't think a broken package is the problem.
I'm starting to think that I have the wrong idea about how cookies work, or that somewhere I am completely blocking the sending of cookies in my code.
Environment
If it helps in any way, my endpoint is hosted in an AWS Lambda application.
I know this should be trivial, but I've been battling with it for a day now.
I finally answered my own issue. The key here is that I'm using AWS Lambda as a proxy, so the way I was sending the cookies was wrong: I was sending them with the request instead of returning them from the Lambda. Let me explain myself.
I was adding 'Set-Cookie':'cookieKey:cookieVal' to the headers of the Postman call that I was using to test both my local and develop environments.
Instead, I needed to send the Set-Cookie header within the response of the Lambda for the cookie to be registered.
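In other words, with a Lambda proxy integration the cookie goes in the headers of the object the handler returns. A minimal sketch (the handler shape and cookie values here are illustrative, not the poster's actual code):
// Sketch of an API Gateway proxy response that sets a cookie from the Lambda itself.
exports.init = async (event, context) => {
  return {
    statusCode: 200,
    headers: {
      'Set-Cookie': 'hello=hello; Path=/; Secure; HttpOnly',
    },
    body: JSON.stringify({ ok: true }),
  };
};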
Please check at the following links for similar cases ->
https://aws.amazon.com/blogs/compute/simply-serverless-using-aws-lambda-to-expose-custom-cookies-with-api-gateway/
https://forum.serverless.com/t/how-to-send-a-cookie-as-a-response/1312/7

Access header values calculated by node http server

Hello, please take a look at this simple file server.js:
require('http').createServer((req, res) => {
  res.end('hiii');
  console.log('Response headers:', res.getHeaders());
}).listen(80);
Navigating to localhost:80 in my browser hits this endpoint. This causes the response of hiii to appear in the browser, and also the headers to be logged to stdout.
The strange thing is, the headers logged to stdout disagree with the headers the browser received.
Stdout shows me an object representing 0 headers:
Response headers: [Object: null prototype] {}
Developer tools show me that, in fact, 3 response headers were received (screenshot not reproduced here).
What accounts for this difference? I understand that the 3 headers shown in chrome are very fundamental to http. Is chrome receiving 0 headers, but filling them in by default? Is node's http library filling these headers in by default? If that's the case, why aren't they exposed via res.getHeaders()? Are these headers being calculated at some lower level, as in C libraries? If so is there any means of exposing these values?
I tried the following in case there is some kind of async delay where the headers are calculated:
require('http').createServer((req, res) => {
  res.end('hiii');
  setTimeout(() => console.log('Response headers:', res.getHeaders()), 3000);
}).listen(80);
But nonetheless, 0 headers are sent to stdout.
Somewhere, these 3 headers are being calculated! How can I access these calculated header values??
I found them under res.socket._httpMessage._header
require('http').createServer((req, res) => {
  res.setHeader('HELLO', 'WORLD');
  res.end('hiii');
  console.log(res.getHeaders());
  console.log(res.socket._httpMessage._header);
}).listen(8000);
You can't get them using getHeaders because they are not set through the regular API; they are sent by Node.js internally.
If you define a header yourself using setHeader, then you'll be able to retrieve it using getHeaders.
Response headers are not generated by the client, since the client needs them to be able to process the response. How could the client read the response if it didn't know the size of the body (Content-Length)?
These communication rules are defined in the HTTP RFC, and the server must implement them so the client can understand the message it receives.
Node.js calculates this information internally (source code example here) and sends it over the socket without storing it in the high-level API that you are using (another source code example).
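If you just want to see what Node actually put on the wire, one low-tech check (a sketch, not part of the original answer) is to hit the endpoint with Node's own http client and print the headers it receives:
// Sketch: observe the response headers exactly as a client sees them.
require('http').get('http://localhost:8000', (res) => {
  console.log(res.headers); // includes the defaults Node added, typically Date, Connection, Content-Length
  res.resume();             // drain the body so the socket can close
});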

Downsides of an API which neglects http method and path

I'm wondering what the downsides would be for a production server whose api is totally ignorant of the HTTP request path. For example, an api which is fully determined by query parameters, or even fully determined by the http body.
let server = require('http').createServer(async (req, res) => {
  let { headers, method, path, query, body } = await parseRequest(req);
  // `headers` is an Object representing headers
  // `method` is 'get', 'post', etc.
  // `path` could look like /api/v2/people
  // `query` could look like { filter: 'age>17&age<35', page: 7 }
  // `body` could be some (potentially large) http body

  // MOST apis would use all these values to determine a response...
  // let response = determineResponse(headers, method, path, query, body);

  // But THIS api completely ignores everything except for `query` and `body`
  let response = determineResponse(query, body);
  doSendResponse(res, response); // Sets response headers, etc, sends response
});
The above server's API is quite strange. It will completely ignore the path, method, and headers. Most APIs, by contrast, primarily consider method and path, and look like this...
method  path                description
GET     /api              - Metadata about api
GET     /api/v1           - Old version of api
GET     /api/v2           - Current api
GET     /api/v2/people    - Make "people" db queries
POST    /api/v2/people    - Insert a new person into db
GET     /api/v2/vehicles  - Make "vehicle" db queries
POST    /api/v2/vehicles  - Insert a new vehicle into db
...
This API only considers url query, and looks very different:
url query                                   description
<empty>                                   - Metadata about api
apiVersion=1                              - Old version of api
apiVersion=2                              - Current api
apiVersion=2&table=people&action=query    - Make "people" db queries
apiVersion=2&table=people&action=insert   - Add new people to db
...
Implementing this kind of api, and ensuring clients use the correct api schema is not necessarily an issue. I am instead wondering about what other issues could arise for my app, due to writing an api with this kind of schema.
Would this be detrimental for SEO?
Would this be detrimental to performance? (caching?)
Are there additional issues that occur when an api is ignorant of method and url path?
That's indeed very unusual, but it's basically how an RPC web API would work.
There would not be any SEO issue as far as I know.
Performance/caching should be the same, as the full "path" is composed of the same parameters in the end.
It however would be complicated to use with anything that doesn't expect it (express router, fancy http clients, etc.).
The only fundamental difference I see is that browsers treat POST requests as special (e.g. a POST will never be triggered just by following a link), whereas your API would expose deletion/creation of data through a plain link. That's more or less dangerous depending on your scenario.
My advice would be: don't do that, stick to standards unless you have a very good reason not to.
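To make the contrast concrete, a dispatcher for the query-only style might look like the sketch below (the handler functions are hypothetical stand-ins); the point is that you end up rebuilding by hand what a path-based router gives you for free:
// Sketch: routing purely on query parameters instead of method + path.
// apiMetadata, queryPeople and insertPerson are made-up placeholders for real db logic.
const apiMetadata = () => ({ name: 'example api', versions: [1, 2] });
const queryPeople = (filter, page) => ({ table: 'people', filter, page, rows: [] });
const insertPerson = (body) => ({ inserted: true, person: body });

function determineResponse(query, body) {
  if (!query.apiVersion) return apiMetadata();
  if (query.apiVersion !== '2') return { error: 'unsupported apiVersion' };

  switch (`${query.table}:${query.action}`) {
    case 'people:query':  return queryPeople(query.filter, query.page);
    case 'people:insert': return insertPerson(body);
    default:              return { error: 'unknown table/action' };
  }
}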

cors+s3+browser cache+chrome extension

Yes, this is a complex question. I will try to make it brief.
My website fetches resources from s3.
I also have an extension that needs to prefetch that S3 file when someone does a Google query, so that later, when they go to my site, the resource is already cached.
At this point I should probably stress that I'm not doing anything malicious. just a matter of user experience.
My problem is that making an AJAX request to S3 from the extension (either from the content script or the background page) doesn't send an Origin header.
This means that the resource is downloaded and cached without an Access-Control-Allow-Origin header. S3 doesn't add that Allow-Origin: * header if there's no Origin in the request, so later, on my site, the request fails due to the missing Allow-Origin header on the cached file :-(
Any ideas on a better way to prefetch to browser cache?
Is there a way to force the ajax request to send an origin? Any origin?
Since I have Allow-Origin: * on my S3 bucket, I think any origin will do except null.
Thanks
Edit: Ended up using one of Rob W's solutions. You are awesome.
Let me comment on each of the options he suggested:
Not adding the host permissions to my manifest - a clever idea, but it wouldn't work for me since I have a content script which runs on any website, so I must use a catch-all wildcard, and I don't think there is an "exclude" permission option.
The non-extension frame - I tried it; it issues a null origin, which as expected ends up with S3 sending the Allow-Origin: * header as required. This means I don't get the "allow-origin header missing" error, however the file is then not served from cache. I guess for it to actually be served from cache in Chrome, the origin has to be exactly the same. So that was very close, but not enough.
The third option is a charm, and it is the simplest. I didn't know I was able to manipulate the Origin header. So I do that and set the exact origin of my website - and it works. The file is cached and later served from cache. I must stress that I had to add a URL filter to only apply this to requests going out to my S3 bucket; otherwise I expect this would wreak havoc on the user's browser.
Thanks. Well done
You've got three options:
Do not add the host permission for S3 to your manifest file. Then the extension will not have the blanket permission to access the resource, and an Origin request header will be sent with the request.
Use a non-extension frame to perform the AJAX request. For example, the following method will result in a cross-origin GET request with Origin: null.
function prefetchWithOrigin(url) {
  var html = '<script>(' + function(url) {
    var x = new XMLHttpRequest();
    x.open('GET', url);
    x.onloadend = function() {
      parent.postMessage('done', '*');
    };
    x.send();
  } + ')(' + JSON.stringify(url) + ');</script>';

  var f = document.createElement('iframe');
  f.src = 'data:text/html,' + encodeURIComponent(html);
  (document.body || document.documentElement).appendChild(f);

  window.addEventListener('message', function listener(event) {
    // Remove frame upon completion
    if (event.source === f.contentWindow) {
      window.removeEventListener('message', listener);
      f.remove();
    }
  });
}
Use the chrome.webRequest.onBeforeSendHeaders event to manually append the Origin header.
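A background-script sketch of that third option (this assumes the "webRequest" and "webRequestBlocking" permissions plus the bucket URL are declared in the manifest; the bucket URL and origin value below are placeholders):
// Sketch: force an Origin header on extension-initiated requests to the S3 bucket.
// 'https://my-bucket.s3.amazonaws.com/*' and 'https://www.example.com' are placeholders.
chrome.webRequest.onBeforeSendHeaders.addListener(
  function(details) {
    details.requestHeaders.push({ name: 'Origin', value: 'https://www.example.com' });
    return { requestHeaders: details.requestHeaders };
  },
  { urls: ['https://my-bucket.s3.amazonaws.com/*'] },
  ['blocking', 'requestHeaders']
);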

Express request is called twice

To learn Node.js I'm creating a small app that gets some RSS feeds stored in MongoDB, processes them and creates a single feed (ordered by date) from them.
It parses a list of ~50 RSS feeds with ~1000 blog items, so parsing the whole thing takes quite a while; I therefore added req.connection.setTimeout(60*1000); to get a long enough timeout to fetch and parse all the feeds.
Everything runs quite fine, but the request is called twice. (I checked with Wireshark; I don't think it's about the favicon here.)
I really don't get it.
You can test yourself here : http://mighty-springs-9162.herokuapp.com/feed/mde/20 (it should create a rss feed with the last 20 articles about "mde").
The code is here: https://github.com/xseignard/rss-unify
And if we focus on the interesting bits :
I have a route defined like this : app.get('/feed/:name/:size?', topics.getFeed);
And the topics.getFeed is like this :
function getFeed(req, res) {
  // 1 minute timeout to get enough time for the request to be processed
  req.connection.setTimeout(60 * 1000);

  var name = req.params.name;

  var callback = function(err, topic) {
    // if the topic has been found
    if (topic) {
      // aggregate the corresponding feeds
      rssAggregator.aggregate(topic, function(err, rssFeed) {
        if (err) {
          res.status(500).send({error: 'Error while creating feed'});
        }
        else {
          res.send(rssFeed);
        }
      },
      req);
    }
    else {
      res.status(404).send({error: 'Topic not found'});
    }
  };

  // look for the topic in the db
  findTopicByName(name, callback);
}
So nothing fancy, but still, this getFeed function is called twice.
What's wrong there? Any idea?
This annoyed me for a long time. It's most likely the Firebug extension which is sending a duplicate of each GET request in the background. Try turning off Firebug to make sure that's not the issue.
I faced the same issue while using the Google Cloud Functions Framework (which uses Express to handle requests) on my local machine. Each fetch request (in the browser console and within the web page) resulted in two requests to the server. The issue was related to CORS (because I was using different ports): Chrome made an OPTIONS method call before the actual call. Since the OPTIONS method was not necessary in my code, I used an if-statement to return an empty response.
if (req.method == "OPTIONS") {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  res.status(204).send('');
}
Spent nearly 3hrs banging my head. Thanks to user105279's answer for hinting this.
If you have a favicon on your site, remove it and try again. If that resolves the problem, fix the way your favicon URL is registered.
I'm doing more or less the same thing now and noticed the same behaviour.
I'm testing my server by entering the API address in Chrome like this:
http://127.0.0.1:1337/links/1
My Node.js server then responds with a JSON object depending on the id.
I set up a console log in the GET handler and noticed that when I change the id in Chrome's address bar, it sends a request (before hitting enter to actually send the request), and the server accepts another request after I actually hit enter. This happens with and without the Chrome dev console open.
IE 11 doesn't seem to behave the same way, but I don't have Firefox installed right now.
Hope that helps someone, even if this is a kind of old thread :)
/J
I managed to fix it with listen.setTimeout and axios.defaults.timeout = 36000000
Node.js
var timeout = require('connect-timeout'); // express v4

// in cors, return 200 for OPTIONS and don't continue the preflight
app.use(cors({ preflightContinue: false, optionsSuccessStatus: 200 }));

// put this timeout middleware at the end of the middleware chain
app.use(timeout(36000000)); // 36,000,000 ms = 10 hours
app.use((req, res, next) => {
  if (!req.timedout) next();
});

var listen = app.listen(3333, () => console.log('running'));
listen.setTimeout(36000000); // 10 hours
React
import axios from 'axios';
axios.defaults.timeout = 36000000; // 10 hours
After 2 days of trying.
You might have to increase the timeout even more. I haven't seen the Express source, but it sounds like it retries on timeout.
Ensure you call res.send(); axios expects a value back from the server and otherwise sends the request again after 120 seconds.
I had the same issue doing this with Express 4. I believe it has to do with how it resolves request params. The solution is to ensure your params are resolved by for example checking them in an if block:
app.get('/:conversation', (req, res) => {
  let url = req.params.conversation;
  // Only handle request when params have resolved
  if (url) {
    res.redirect(301, 'http://' + url + '.com');
  }
});
In my case, my axios POST requests were received twice by Express: the first one without a body, the second one with the correct payload. The same request sent from Postman was only received once, correctly. It turned out that Express was running on a different port, so my requests were cross-origin. This caused Chrome to send a preflight OPTIONS request to the same URL (the POST URL), and my app.all routing in Express processed that one too.
app.all('/api/:cmd', require('./api.js'));
Separating POST from OPTIONS solved the issue:
app.post('/api/:cmd', require('./api.js'));
app.options('/', (req, res) => res.send());
I met the same problem. I tried adding a bare return, and it didn't work, but it does work when I use return res.redirect('/path');
I had the same problem. Then I opened the Chrome dev tools and found out that favicon.ico was being requested from my Express.js application. I needed to fix the way I registered the middleware.
Screenshot of Chrome dev tools
I also had double requests. In my case it was the forwarding from the http to the https protocol. You can check whether that's the case by comparing
req.headers['x-forwarded-proto']
It will either be 'http' or 'https'.
I could fix my issue simply by adjusting the order in which my middlewares trigger.
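As an illustration of that check (a sketch only; the x-forwarded-proto header is set by the proxy doing the forwarding, and `app` is assumed to be your Express app), a redirect middleware that only fires when the original request came in over http:
// Sketch: redirect to https exactly once, based on the proxy's x-forwarded-proto header.
app.use((req, res, next) => {
  if (req.headers['x-forwarded-proto'] === 'http') {
    // the follow-up request arrives with x-forwarded-proto === 'https' and passes through
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  next();
});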
