In my Chrome extension I block users' downloads and instead download the files on another machine (it's a security app, but that's not important for this post), based on the download URL.
The idea works fine in many cases: for example, if the user tries to download http://www.orimi.com/pdf-test.pdf in Chrome, the extension blocks the download, sends the URL to another server, and that server downloads the file.
I have a problem with websites that require certain headers when downloading the file. Is there a way to mimic the exact Chrome request from another app?
I tried using chrome.webRequest.onBeforeSendHeaders.addListener to capture all request headers and then used those headers to download the file from elsewhere (I mean not via Chrome but via Postman), and I get Unauthorized.
Here's a small code example from the background script:
chrome.downloads.onCreated.addListener(function (e) {
  console.log(`============= begin onCreated =============`);
  console.log(e);
  console.log(`============= end onCreated =============`);
});

chrome.webRequest.onBeforeSendHeaders.addListener(
  function (details) {
    console.log(`============= begin onBeforeSendHeaders =============`);
    console.log(details);
    console.log(`============= end onBeforeSendHeaders =============`);
    return {
      requestHeaders: details.requestHeaders
    };
  },
  { urls: ["<all_urls>"] },
  ["blocking", "requestHeaders"]);
And when I try to download a file from LinkedIn I get this output:
============= begin onBeforeSendHeaders =============
{
  frameId: 0,
  initiator: "https://www.linkedin.com",
  method: "GET",
  parentFrameId: -1,
  requestHeaders: [
    { name: "Upgrade-Insecure-Requests", value: "1" },
    { name: "User-Agent", value: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWeb…ML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" },
    { name: "Accept", value: "text/html,application/xhtml+xml,application/xml;q=…,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" },
    { name: "Sec-Fetch-Site", value: "same-origin" },
    { name: "Sec-Fetch-Mode", value: "navigate" },
    { name: "Sec-Fetch-User", value: "?1" },
    { name: "Sec-Fetch-Dest", value: "document" }
  ],
  requestId: "305371",
  tabId: 844,
  timeStamp: 1596530356040.051,
  type: "main_frame",
  url: "https://www.linkedin.com/dms/C4D06AQGz1fU0o0r3ZQ/messaging-attachmentFile/0?m=AQLmTdbXe5dgKgAAAXO4axnTdJJLiaU6EnZbq_fhQzg_1697ToPaTbJ3jw&ne=1&v=beta&t=QCDqVeorWfXEAgQBCQdo9hbEQrxwM97zqzCvLuBE2Cw#S6555100544749322240_500"
}
============= end onBeforeSendHeaders =============
============= begin onCreated =============
{
  bytesReceived: 0,
  canResume: false,
  danger: "safe",
  exists: true,
  fileSize: 0,
  filename: "",
  finalUrl: "https://www.linkedin.com/dms/D5D06AQGz1fU0o0r3ZK/messaging-attachmentFile/0?m=AQLpTdbXe5dgKgAAAXO4axnTdJJLiaU6EnZbe_fhQzg_1697ToPaTbJ3jw&ne=1&v=beta&t=QCDqVeorWfXEAgQBCQdo9hbEQrxwM97zqzCvLuBE2Cw#S6555100544749322246_500",
  id: 7100,
  incognito: false,
  mime: "application/octet-stream",
  paused: false,
  referrer: "https://www.linkedin.com/in/natali-melman-a785a349/",
  startTime: "2020-08-04T08:39:16.060Z",
  state: "in_progress",
  totalBytes: 0,
  url: "https://www.linkedin.com/dms/D5D06AQGz1fU0o0r3ZK/messaging-attachmentFile/0?m=AQLpTdbXe5dgKgAAAXO4axnTdJJLiaU6EnZbe_fhQzg_1697ToPaTbJ3jw&ne=1&v=beta&t=QCDqVeorWfXEAgQBCQdo9hbEQrxwM97zqzCvLuBE2Cw#S6555100544749322256_500"
}
============= end onCreated =============
I tried to mimic this exact request in Postman (same headers, parameters, etc.) but I get a 401 error code.
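(Worth noting: the captured request above contains no Cookie header, and LinkedIn downloads are cookie-authenticated. Since Chrome 72, chrome.webRequest only reports restricted headers such as Cookie when the listener opts into "extraHeaders". A variant of the listener that requests them — a sketch, assuming Manifest V2 with the webRequest and webRequestBlocking permissions — could look like this:)

chrome.webRequest.onBeforeSendHeaders.addListener(
  function (details) {
    // With "extraHeaders", restricted headers such as Cookie are included,
    // which may be exactly what the 401-ing server is checking for.
    console.log(details.requestHeaders);
    return { requestHeaders: details.requestHeaders };
  },
  { urls: ["<all_urls>"] },
  ["blocking", "requestHeaders", "extraHeaders"]
);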
Related
Using NodeJS and http.get, I am trying to see if a website uses a redirect. I tried a few URLs which all worked great. However, when I ran the code with washingtonpost.com it took over 5 seconds. In my browser the website works just fine. What could be the issue?
console.time("Done. Script executed in");
const http = require("http");

function checkRedirectHttp(input) {
  return new Promise((resolve, reject) => {
    http.get(input, { method: 'HEAD' }, (res) => {
      res.resume(); // drain the response so the socket is freed
      resolve([res.headers.location, res.statusCode]);
    }).on('error', (e) => {
      // throwing here would not reject the promise; reject explicitly
      reject(new Error(`Cannot reach website ${input}`));
    });
  });
}

checkRedirectHttp("http://www.washingtonpost.com/").then(result => {
  console.log(result);
  console.timeEnd("Done. Script executed in");
});
Output:
[
'http://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2f',
302
]
Done. Script executed in: 8.101s
I ran your code, enhanced it a bit, and gradually added back the actual headers my browser sends when I go to the same link. When I changed the request to a "GET" (no longer a "HEAD") and added the following headers from my browser:
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"accept-encoding": "gzip, deflate, br",
"accept-language": "en-US,en;q=0.9",
"cookie": "a very long cookie here"
then the response went from 9 seconds to 71ms.
So, apparently the server doesn't like the HEAD request and doesn't like that a bunch of headers it expects to be there are missing. Probably, it is detecting that this isn't a real browser and it's either analyzing something for 8 seconds or it's just purposely delaying a response to a "fake client".
Also, if you use the http://www.washingtonpost.com URL instead of https://www.washingtonpost.com, it redirects to https every time for me. So, you may as well just start with the https:// form of the URL.
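For reference, here's a minimal sketch of the faster variant: a GET over https with the browser-like headers (the cookie header is left out here, so results may vary):

const https = require("https");

// Sketch of the faster request: GET over https with browser-like headers.
// The cookie header from the browser is omitted; add your own if needed.
function checkRedirectWithHeaders(input) {
  return new Promise((resolve, reject) => {
    https.get(input, {
      headers: {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
        "accept-encoding": "gzip, deflate, br",
        "accept-language": "en-US,en;q=0.9"
      }
    }, (res) => {
      res.resume(); // only status and headers are needed, so drain the body
      resolve([res.headers.location, res.statusCode]);
    }).on('error', reject);
  });
}

checkRedirectWithHeaders("https://www.washingtonpost.com/").then(console.log);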
I didn't realize how common and tricky this problem is. I have spent many hours reviewing all the previous situations and answers. Needless to say, none apply.
I am making an HttpClient POST call from Angular 5 to a Node.js/Express URL. The application makes many of these calls and all work except this one:
Angular component
this.ezzy.post(this.config.api.createConnectAccount, this.AuthCode, true, true, true)
  .subscribe((data) => {
    if (data.code === '0') {
Angular HTTP call
ngOnInit() {
........

createConnectAccount(url, body, loadingIcon, withCredentials, showErrorToast) {
  console.log(`CREATE CONNECT ACCOUNT....${url}...${JSON.stringify(body)}`);
  const headers = this.ezzy.preAjax(loadingIcon, withCredentials);
  return this.http.post(url, body, { withCredentials, headers })
    .map((res) => this.ezzy.postAjax(res, showErrorToast))
    .catch((err) => this.ezzy.handleError(err));
}
I can confirm that both the url and the authCode/body are correct and present up to this point.
router.post (Nodejs)
router.post('/users/createConnectAccount', async (req, res, next) => {
  // console.log(`REQ BODY FROM PAYOUT DASH: ${JSON.stringify(req)}`);
  console.log(`ENTER CREATE CONNECT ACCOUNT...code......${req.body.code}`);
  console.log(`ENTER CREATE CONNECT ACCOUNT..body......${JSON.stringify(req.body)}`);
  console.log(`REQ HEADERS: ${JSON.stringify(req.headers)}`);
Here are the differences with other similar calls:
1. The Angular component was activated by an external call to its endpoint (localhost:3000/dealer?code='1234'). The code was retrieved successfully in the component's constructor and assigned to authCode.
2. The Angular HTTP call originated inside ngOnInit. I am trying to get some info and update the db before rendering the component page.
I am using
app.use(json());
app.use(urlencoded({
  extended: true
}));
and a console.log of req.headers before the call shows this:
ENTER CREATE CONNECT ACCOUNT...code......undefined
ENTER CREATE CONNECT ACCOUNT..body......{}
REQ HEADERS: {"host":"localhost:3000","connection":"keep-alive","content-length":"35","accept":"application/json,
text/plain, */*","sec-fetch-dest":"empty","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36","content-type":"text/plain","origin":"http://localhost:3000","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","referer":"http://localhost:3000/payout-dashboard?code=ac_H5nP4MUbEbp94K13jkA5h1DRG6f6pgOn&state=2lt8v9le8a5","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9","cookie":"connect.sid=s%3AsWLHYTY02P2EvYZy1FIVQzZLC6M0vR5p.GnU%2BU20RcjPYeG3lAUEDV9q1vmLceBPAfEE488ej5M4; _ga=GA1.1.695338957.1586021131; _gid=GA1.1.1793736642.1586291730; PDToken=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJlbWFpbCI6InNlYWthaG1haWxAZ21haWwuY29tIiwibmlja25hbWUiOiIiLCJjYXRlZ29yeSI6IiIsImlhdCI6MTU4NjgyMDYyMSwiZXhwIjoxNjE4MzU2NjIxfQ.09gx1F_YJPxAs7BiiYToetdJhjd5DsUUkdoo3leFscU; io=yysQe40_plBblVuSAAAA"}
Notice that the content-type is:
"content-type":"text/plain"
and the accept is:
"accept":"application/json, text/plain, */*"
and the code is present:
code=ac_H5nP4MUbEbp94K13jkA5h1DRG6f6pgOn&state=2lt8v9le8a5"
YET...I get empty req.body.
BTW... it works from Postman:
ENTER CREATE CONNECT ACCOUNT...code......ac_H5ikfuYleQromTeP5LnHGEmfEWaYD3he
ENTER CREATE CONNECT ACCOUNT..body......{"code":"ac_H5ikfuYleQromTeP5LnHGEmfEWaYD3he"}
REQ HEADERS: {"user-agent":"PostmanRuntime/7.24.1",
"accept":"*/*","postman-token":"0d5faea6-4684-408e-9235-c5e14b306918",
"host":"localhost:3000",
"accept-encoding":"gzip,
deflate, br","connection":"keep-alive",
"content-type":"application/x-www-form-urlencoded",
"content-length":"40","cookie":"connect.sid=s%3ASahJY3VqXVjTjXF1X-SlU_9Shexa59Tm.Q0SRM1h%2FxJnoEnjS3u3I3x%2F%2FnLs%2FLzyiHGoJNuo0U7M"}
Sorry to be so long... but I am baffled.
The urlencoded Express middleware only parses the body when the Content-Type of the request matches its type option. By default, the type option is application/x-www-form-urlencoded. Either change the Content-Type of your request from text/plain to application/x-www-form-urlencoded, or pass {"type": "text/plain"} to urlencoded(...) to override the default behavior.
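For illustration, a minimal sketch of the server-side variant (assuming Express 4.16+, where json() and urlencoded() are built in):

const express = require('express');
const { json, urlencoded } = express;

const app = express();
app.use(json());
// Also parse bodies sent as text/plain; the default type only matches
// application/x-www-form-urlencoded, which is why req.body was empty.
app.use(urlencoded({
  extended: true,
  type: ['application/x-www-form-urlencoded', 'text/plain']
}));

app.post('/users/createConnectAccount', (req, res) => {
  console.log(req.body.code); // now populated even for text/plain requests
  res.sendStatus(200);
});

app.listen(3000);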
I've written a small JavaScript program using Node (v12.16.2) and Puppeteer (v2.1.1) that I'm trying to run on an AWS EC2 instance. I'm doing a goto of the URL appended below. It works fine on a local (non-AWS) Linux machine with similar versions, but on the EC2 it fails, not showing the page at all. I've tried running with headless=false and devtools=true. In the browser console, I see this:
Uncaught TypeError: Cannot read property 'length' of undefined
at il_Ev (rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:1862)
at il_Hv (rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:1849)
at il_Yv.initialize (rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:1867)
at il__i (rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:270)
at il_Gl.il_Wj.H (rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:322)
at rs=ACT90oFtPziyty36T_zhgMUEStuCtJgAkQ:1869
As I mentioned, this same code works fine on a different linux machine and just loaded inside a browser; no errors. I'm stumped. Does anyone know what might be going on? Other pages, like google.com, load fine in the EC2, FYI. TIA.
Reid
https://www.google.com/imgres?imgurl=https%3A%2F%2Fimg-s-msn-com.akamaized.net%2Ftenant%2Famp%2Fentityid%2FAACPW4S.img%3Fh%3D552%26w%3D750%26m%3D6%26q%3D60%26u%3Dt%26o%3Df%26l%3Df%26x%3D992%26y%3D672&imgrefurl=https%3A%2F%2Fwww.msn.com%2Fen-us%2Flifestyle%2Fpets-animals%2F49-adorable-puppy-pictures-that-will-make-you-melt%2Fss-AACSrEY&tbnid=Ad7wBCCmAXPRDM&vet=12ahUKEwig1NfB0Y7oAhXGHc0KHSzuCMUQMygeegQIARBw..i&docid=jawDJ74qdYREJM&w=750&h=500&q=puppies&ved=2ahUKEwig1NfB0Y7oAhXGHc0KHSzuCMUQMygeegQIARBw
Here's an excerpt of the relevant code, which is pretty simple:
const browser = await puppeteer.launch({
  headless: false,
  devtools: true,
  slowMo: 150
});

/* Get the first page rather than creating a new one unnecessarily. */
let page = (await browser.pages())[0];

/* Note: browser.userAgent() only reads the browser's default UA string;
   page.setUserAgent() is what actually overrides it. */
await page.setUserAgent(
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36"
);

await page.setViewport({
  width: 1524,
  height: 768
});

try {
  await page.goto("https://www.google.com/imgres?imgurl=https%3A%2F%2Fimg-s-msn-com.akamaized.net%2Ftenant%2Famp%2Fentityid%2FAACPW4S.img%3Fh%3D552%26w%3D750%26m%3D6%26q%3D60%26u%3Dt%26o%3Df%26l%3Df%26x%3D992%26y%3D672&imgrefurl=https%3A%2F%2Fwww.msn.com%2Fen-us%2Flifestyle%2Fpets-animals%2F49-adorable-puppy-pictures-that-will-make-you-melt%2Fss-AACSrEY&tbnid=Ad7wBCCmAXPRDM&vet=12ahUKEwig1NfB0Y7oAhXGHc0KHSzuCMUQMygeegQIARBw..i&docid=jawDJ74qdYREJM&w=750&h=500&q=puppies&ved=2ahUKEwig1NfB0Y7oAhXGHc0KHSzuCMUQMygeegQIARBw", {
    timeout: 0,
    // waitUntil: ["load"]
    // waitUntil: ["networkidle2"]
  });
  await page.waitForSelector('#irc_shc', {
    visible: true,
    timeout: 0
  });
} catch (e) {
  console.log("error: e = ", e);
}
It turns out this was just a temporary Google page error.
I have some ad calls that are only made on mobile devices. In Chrome, I can use Device Mode and simulate a mobile device, and the resulting ad call from the server is correctly tailored to mobile. I'm not sure how Chrome does this, except possibly by sending a different user agent.
In the Cypress.io documentation, it says the user agent can be changed in the configuration file (cypress.json). But I need to run a test for a desktop viewport and then a mobile viewport with a mobile user agent. Is there a way to change the user agent programmatically?
Update: According to https://github.com/cypress-io/cypress/issues/3873, since Cypress 3.3.0 it is possible to use a user-agent property in cy.request() and cy.visit().
If you need, for example, to set the userAgent to Googlebot:
cy.visit(url, {
  headers: {
    'user-agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
  }
});
Original answer before Cypress 3.3.0
before(() => {
  cy.visit(url, {
    onBeforeLoad: win => {
      Object.defineProperty(win.navigator, 'userAgent', {
        value: 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
      });
    },
  });
});
Cypress now supports passing a user agent in the headers for cy.visit() as well as cy.request():
it('Verify Social Sharing Meta Tags', () => {
  cy.visit(portalURL + '/whats_new/140', {
    headers: {
      'user-agent': 'LinkedInBot/1.0 (compatible; Mozilla/5.0; Apache-HttpClient +http://www.linkedin.com)',
    }
  });
  cy.document().get('head meta[name="og:type"]')
    .should('have.attr', 'content', 'website');
});
https://on.cypress.io/changelog#3-3-0
Update as of Aug 12, 2021:
It seems you can't change the user agent anymore; see https://docs.cypress.io/api/cypress-api/config#Notes
The other answers do not set the User-Agent header of the underlying HTTP request, just the userAgent property of win.navigator. To set the User-Agent header to a custom value for all HTTP requests, you can set the userAgent configuration option:
{
  // rest of your cypress.json...
  "userAgent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}
I'm using KrakenJS to build a web app. Since it's MVC, I'm implementing a REST service in a controller; here's some sample code:
// users can get data
app.get('/myRoute', function (req, res) {
  readData();
});

// users can send data
app.post('/myRoute', function (req, res) {
  writeData();
});
I can read data with no problems. But when I try to insert dummy data with POST requests, it ends up with this error:
Error:Forbidden
127.0.0.1 - - [Thu, 06 Feb 2014 00:11:30 GMT] "POST /myRoute HTTP/1.1" 500 374 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/32.0.1700.102 Chrome/32.0.1700.102 Safari/537.36"
How can I overcome this?
One thing is to make sure you're sending the correct CSRF Headers (http://krakenjs.com/#Security). If I remember correctly, by default Kraken expects those headers to be specified.
You can disable CSRF too and see if that fixes your problem. Since Kraken uses the Lusca module for CSRF, you can get information on how to disable/configure from here: https://github.com/paypal/lusca
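If you'd rather keep CSRF on, here's a sketch of the first option — sending the token along with the request. It assumes the token (csrfToken below, with myObject as your payload) has been exposed to the page by the server template; x-csrf-token is lusca's default header name:

// Hypothetical client-side call that forwards the CSRF token in a header.
// Assumes `csrfToken` was rendered into the page by the server template
// and `myObject` is the data you want to post.
fetch('/myRoute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-csrf-token': csrfToken
  },
  credentials: 'same-origin', // send the session cookie the token is tied to
  body: JSON.stringify({ object: myObject })
}).then(function (res) {
  console.log(res.status); // 200 instead of the 500 Forbidden if the token matches
});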
I used a trick earlier in which you don't have to turn off csrf...
In your "index.dust" ->
<input id="csrfid" type="hidden" name="_csrf" value="{_csrf}">
In your "script.js" ->
var csrf = document.getElementById('csrfid').value;
$http({
  method: 'POST',
  url: 'http://localhost:8000/myRoute/',
  data: { '_csrf': csrf, 'object': myObject }
}).success(function (result) {
  // success handler
}).error(function (result) {
  // error handler
});
I was using AngularJS, by the way.
As Dan said you can turn csrf off, but you may also want to consider using it, for the added security it brings.
Check out the shopping cart example for more info: https://github.com/lmarkus/Kraken_Example_Shopping_Cart
If you do not need csrf:
By placing this in the middleware section of your config.json and setting the values to false, you disable the csrf middleware, and your app will function as expected.
"middleware": {
"appsec": {
"priority": 110,
"module": {
"name": "lusca",
"arguments": [
{
"csrf": false,
"xframe": "SAMEORIGIN",
"p3p": false,
"csp": false
}
]
}
},