Chrome doesn't send cookies after redirect - node.js

In node.js (using the Hapi framework) I'm creating a link for the user to allow my app to read their Google account. Google handles that request and asks about granting permissions. Then Google redirects back to my server with a GET parameter containing the response code, and here I have an issue.
Google Chrome isn't sending the cookie with the session ID.
If I mark that cookie as a session cookie in a cookie-editing extension, it is sent. The behavior is the same in PHP, but PHP marks the cookie as a session cookie when creating the session, so it isn't a problem there. I'm using the hapi-auth-cookie plugin; it creates the session and handles everything about it. I also marked that cookie as non-HttpOnly in the hapi-auth-cookie settings, because that was the first difference I noticed when comparing the PHP session cookie with mine in node.js. I get a 401 "missing authentication" response on each redirect. If I place the cursor in the address bar and hit Enter, everything works fine, so it is an issue with the redirect.
My question is basically: what may be causing that behavior? I should also mention that Firefox sends the cookie with every request without any issues.
Headers after the redirect (no session cookie):
{
    "host": "localhost:3000",
    "connection": "keep-alive",
    "cache-control": "max-age=0",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36",
    "x-client-data": "CJS2eQHIprbJAQjEtskECKmdygE=",
    "x-chrome-connected": "id=110052060380026604986,mode=0,enable_account_consistency=false",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "accept-encoding": "gzip, deflate, sdch, br",
    "accept-language": "pl-PL,pl;q=0.8,en-US;q=0.6,en;q=0.4"
}
Headers after hitting Enter in the address bar (this works fine):
{
    "host": "localhost:3000",
    "connection": "keep-alive",
    "cache-control": "max-age=0",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "accept-encoding": "gzip, deflate, sdch, br",
    "accept-language": "pl-PL,pl;q=0.8,en-US;q=0.6,en;q=0.4",
    "cookie": "SESSID=very_long_string"
}

Strict cookies are not sent by the browser when the referrer is a different site, which is exactly what happens when the request is a redirect from another site. Using Lax will get around this issue, or you can make your site cope with not having access to Strict cookies on that first request.
I came across this issue recently and wrote up more detail on Strict cookies, referrers and redirects.
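To make the difference concrete, here is roughly how the two variants behave on the wire (a sketch, not taken from the original answer; SESSID is the cookie from the question above):
Set-Cookie: SESSID=very_long_string; HttpOnly; SameSite=Strict   (not sent on a top-level navigation arriving from another site, e.g. the Google redirect)
Set-Cookie: SESSID=very_long_string; HttpOnly; SameSite=Lax      (sent on top-level GET navigations, including that redirect)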

This issue is caused by hapi-auth-cookie not yet handling isSameSite (a newer Hapi feature). We can set it manually at the server level, e.g.:
const server = new Hapi.Server({
    connections: {
        state: {
            isSameSite: 'Lax'
        }
    }
});
But please consider that the default is the 'Strict' option, and in many cases you may not want to change that value.
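If you'd rather relax it only for the session cookie instead of server-wide, hapi's server.state() options accept the same flag (a minimal sketch, assuming hapi v16's state options; 'sid' is a placeholder cookie name, not from the original answer):
// Register the cookie with Lax same-site behavior instead of the Strict default.
server.state('sid', {
    isSameSite: 'Lax',
    isHttpOnly: true
});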

A recent version of Chrome was displaying this warning in the console:
A cookie associated with a cross-site resource at was set
without the SameSite attribute. A future release of Chrome will only
deliver cookies with cross-site requests if they are set with
SameSite=None and Secure.
My server redirects a user to an authentication server if they don't have a valid cookie. Upon authentication, the user is redirected back to my server with a validation code. If the code is verified, the user is redirected again into the website with a valid cookie.
I added the SameSite=Secure option to the cookie but Chrome ignored the cookie after a redirect from the authentication server. Removing that option fixed the problem, but the warning still appears.
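For reference, if you do want the cookie to survive the cross-site redirect, the combination Chrome is asking for looks roughly like this (a sketch, not the configuration from this answer; it only works over HTTPS, and the cookie name and value are placeholders):
// Plain Node http/https response API.
res.setHeader('Set-Cookie', 'session=opaque_value; Path=/; Secure; HttpOnly; SameSite=None');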

A standalone demo of this issue: https://gist.github.com/isaacs/8d957edab609b4d122811ee945fd92fd
It's a bug in Chrome.

Related

Request blocked if it is sent by node.js axios

I am using axios and an API (the CoWIN API, https://apisetu.gov.in/public/marketplace/api/cowin/cowin-public-v2) which has a strong kind of protection against web requests.
When I was getting a 403 error on my dev machine (Windows), I solved it by just adding a 'User-Agent' header.
But when I deployed it to Heroku, I still get the same error.
const { data } = await axios.get(url, {
    headers: {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
    },
})
Using a fake user-agent in your headers can help with this problem, but there are other variables you may want to consider.
For example, if you are making multiple HTTP requests you may want to keep several fake user-agents and randomize the user-agent for every request made. This can help limit the chances of your scraper being detected.
If that still doesn't work, you may want to optimize your headers further. Beyond sending HTTP requests with a randomized user-agent, you can further imitate a browser's request headers by adding more headers than just the user-agent, and making sure the user-agent you select is consistent with the information sent in the rest of the headers; a sketch of this is shown below.
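For illustration, a minimal sketch of that approach with axios (the user-agent pool and extra headers here are examples, not values from the original question):
const axios = require('axios');

// Hypothetical pool of desktop browser user-agent strings.
const userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
];

async function get(url) {
    // Pick a random user-agent and send a few browser-like headers that match it.
    const userAgent = userAgents[Math.floor(Math.random() * userAgents.length)];
    const { data } = await axios.get(url, {
        headers: {
            'User-Agent': userAgent,
            'Accept': 'application/json, text/plain, */*',
            'Accept-Language': 'en-US,en;q=0.9',
        },
    });
    return data;
}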
You can check out here for more information.
The site not only provides information on how to keep your headers consistent with the chosen user-agent, but also offers more solutions in case the above was still unsuccessful.
In my situation, I had to bypass Cloudflare. You can determine whether this applies to you as well by logging the error to the terminal and checking whether the "server" key says "cloudflare"; in that case you can use that documentation for further assistance.

Getting 403 forbidden status through python requests

I am trying to scrape a website's content and am getting a 403 Forbidden status. I have tried solutions like using sessions for cookies and mocking a browser through a 'User-Agent' header. Here is the code I have been using:
session = requests.Session()
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36',
}
page = session.get('https://www.sizeofficial.nl/product/zwart-new-balance-992/343646_sizenl/', headers=headers)
Note that this approach works on other websites; it is just this one that does not seem to work. I have even tried using the other headers my browser sends, and it still does not work. Another approach I tried was to first create a session cookie and then pass that cookie to session.get, which still doesn't work for me. Is scraping this website not allowed, or am I still missing something?
I am using Python 3.8 with requests for this.

Google reCAPTCHA cannot be solved in Electron BrowserWindow

In my Electron app I try to open an external website (e.g. BrowserWindow.loadURL('www.abc.xyz')) which is protected by Google's reCAPTCHA. The browser window with the page is open, so the user can solve the captcha and it does not act like a bot.
But somehow, the only response for the reCAPTCHA validation request is
)]}'
["rresp",null,null,null,null,null,1]
Also, no reCAPTCHA popup for "street sign" or "crossing" selection appears.
Additionally, I get a warning in the console:
A cookie associated with a cross-site resource at http://google.com/ was set without the `SameSite` attribute.
A future release of Chrome will only deliver cookies with cross-site requests if they are set with `SameSite=None` and `Secure`.
You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
I could solve the problem by adding the user agent to every request separately.
const userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36';
newSession.webRequest.onBeforeSendHeaders((details, callback) => {
    // Overwrite the User-Agent header on every outgoing request of this session.
    details.requestHeaders['User-Agent'] = userAgent;
    callback({ cancel: false, requestHeaders: details.requestHeaders });
});
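If you prefer not to intercept every request, Electron's session API can also set the user agent once for the whole session (a minimal sketch, assuming the same newSession object as above; this is an alternative suggestion, not part of the original answer):
// Applies the user agent to all requests made through this session.
newSession.setUserAgent(userAgent);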

Socket.io on CloudFront (S3) client won't send certificate to Node.js server

We're using the simplest possible implementation of a socket.io client and server to eliminate any variables regarding the cause of this problem. The socket.io client is in JavaScript on AWS CloudFront using a custom domain and the server is on node (nginx). We are getting a secure connection and everything is working as expected except that CloudFront is refusing to pass the certificate. Here is what we get from socket.io regarding the connection:
accept: '*/*',
origin: 'https://cdn.ourdomain.com',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36',
'sec-fetch-site': 'same-site',
'sec-fetch-mode': 'cors',
referer: 'https://cdn.ourdomain.com/ourapp.iframe.html',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
cookie: '_ga=GA1.2.245674994.1569802743; __zlcmid=uXiZmTTN1V8j16; _gcl_au=1.1.312077107.1570950743; _gid=GA1.2.851127118.1572308315; __gads=ID=a55ec67b74740d6a:T=1572647855:S=ALNI_MYuzmlVp2hvDIbUS5LuYBD4kYKHlA; io=gkqtOKgx38ddpH6dAAAA; _gat=1' },
time: 'Sat Nov 02 2019 07:41:25 GMT+0000 (UTC)',
address: '::ffff:127.0.0.1',
xdomain: true,
secure: true,
issued: 1572680485150,
url: '/ourapp-secure-connection/socket.io/?EIO=3&transport=polling&t=Muh3xpR',
query: { EIO: '3', transport: 'polling', t: 'Muh3xpR' } }
Client connected [id=LKTQbOl3_DdAJeH5AAAB, cert={}]
Nothing we've tried has returned anything other than cert={}. I've seen some references in the AWS documentation about CloudFront dropping custom certificate requests. Has anyone found a way around this?
"CloudFront is refusing to pass the certificate" isn't an entirely accurate description of what is happening. Client certificates can't be "passed" through an HTTP reverse-proxy like CloudFront -- it's impossible, by design, because that would be the equivalent of a man-in-the-middle attack. (This is also true of other reverse proxies, like HAProxy in HTTP mode and Amazon Application Load Balancer.)
You can't split open a TLS connection in the middle, by design. It may appear that this is what CloudFront does, but it isn't. Instead, CloudFront is decrypting the payload from the server (or client) and re-encrypting it for transmission to the client (or server)... and it can do this only because there are two separate TLS sessions, one from browser to CloudFront and the other from CloudFront to the server -- CloudFront ties the decrypted payload pipes together, internally.

Granting READER role access to a Subscription in Azure works fine in Postman, but not via Angular. Why?

OK, I might be missing something simple here in Angular, but I could really use some help. I am trying to grant a Service Principal the READER role on a Subscription programmatically. If I use Postman, it works fine. However, when I send the same PUT request via Angular 6 I get a 400 error from Azure that says:
The content of your request was not valid, and the original object could not be deserialized. Exception message: 'Required property 'permissions' not found in JSON. Path 'properties', line 1, position 231.'
The JSON being sent in both cases is:
{
    "properties": {
        "roleDefinitionId": "/subscriptions/{some_subscription_guid}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
        "principalId": "{some_service_provider_guid}"
    }
}
I've captured traffic from both requests, and both show as application/json payloads on the PUT. So I am at a loss as to what is deserializing incorrectly on the Azure side to cause this error. I am trying to follow the REST instructions documented here: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-rest
Any ideas what I am missing?
UPDATE
Adding the raw request, as requested. I have replaced any sensitive data (access token, GUIDs, etc.) without changing anything else from the Fiddler output.
PUT https://management.azure.com/subscriptions/<VALID_SUBSCRIPTION_WAS_HERE>/providers/Microsoft.Authorization/roleDefinitions/7ec2aca1-e4f2-4152-aee2-68991e8b48ad?api-version=2015-07-01 HTTP/1.1
Host: management.azure.com
Connection: keep-alive
Content-Length: 233
Accept: application/json, text/plain, */*
Origin: http://localhost:4200
Authorization: Bearer <VALID_TOKEN_WAS_HERE>
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-Type: application/json
Referer: http://localhost:4200/token/<VALID_DOMAIN_WAS_HERE>.onmicrosoft.com/graph
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
{"properties": { "roleDefinitionId":"/subscriptions/<VALID_SUBSCRIPTION_GUID_HERE>/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7", "principalId":"<VALID_OBJECTID_HERE>" }}
Alright, I finally figured out what was going on here. It appears that I was posting to the wrong endpoint. I need to be posting to roleAssignments and not roleDefinitions.
So why did it work in Postman? It seems there is a fallback from a previous version of the API that supported both for legacy clients, which for some reason Postman fell under. However, when posting via Angular the request was actively rejected.
End result: send to "/Microsoft.Authorization/roleAssignments/" with an API version later than "api-version=2015-07-01", and all will work.
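For illustration, a minimal sketch of the corrected call (the subscription ID, assignment GUID, token, and api-version shown are placeholders, not values from the original post):
// PUT to roleAssignments (not roleDefinitions); the GUID in the URL names the new assignment.
// Uses Node 18+ global fetch; any HTTP client works the same way.
const url = 'https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>'
    + '/providers/Microsoft.Authorization/roleAssignments/<NEW_ASSIGNMENT_GUID>'
    + '?api-version=2018-01-01-preview';

const body = {
    properties: {
        // Reader role definition, as in the original payload.
        roleDefinitionId: '/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7',
        principalId: '<SERVICE_PRINCIPAL_OBJECT_ID>'
    }
};

const response = await fetch(url, {
    method: 'PUT',
    headers: {
        'Authorization': 'Bearer <ACCESS_TOKEN>',
        'Content-Type': 'application/json'
    },
    body: JSON.stringify(body)
});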

Resources