Cypress issue: connection to the site is not secure?

I'm testing a website that makes requests to the Optimizely API to do some checking.
It requests a URL like https://cdn.optimizely.com/datafiles/XXX.json, so I suppose this site requires a secure network.
I tried to open the URL in Cypress's Chrome and I get this error:
This page isn't working. cdn.optimizely.com didn't send any data.
ERR_EMPTY_RESPONSE
But when I try the same URL on the same network in plain Chrome, I get a fine response.
I need to be able to load the URL to test my site.
Is there any solution to this? Please advise.

The resource returns a 403 status code, which most likely indicates you don't have sufficient rights to see it.
Your Chrome outside of Cypress might be set up differently and might already have session cookies.
You most likely need to figure out how to log into some account on the site through Cypress.

Since Cypress will not load Optimizely, Google Analytics, or similar third-party scripts, my workaround is to stub the request with the cy.intercept() function in before/beforeEach.
The code looks something like:
cy.intercept('https://cdn.optimizely.com/datafiles/XXX8.json', {
  "version": "4"
})
Reference: cypress-example-recipes
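For reference, a slightly fuller sketch of the same workaround, registered in a beforeEach hook so the stub is in place before every test. The datafile URL keeps the placeholder ID from above, and the stubbed body is an assumption; a real test would return a minimal datafile matching your Optimizely project:

beforeEach(() => {
  // Stub the Optimizely datafile request so the page never hits the CDN.
  cy.intercept('GET', 'https://cdn.optimizely.com/datafiles/XXX.json', {
    statusCode: 200,
    body: { version: '4' }, // placeholder; use a minimal valid datafile
  }).as('optimizelyDatafile')
})

it('loads the page without calling Optimizely', () => {
  cy.visit('/')
  cy.wait('@optimizelyDatafile')
})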

Related

Scraping AWS with Puppeteer runs locally but fails on Heroku

I know it sounds a lot like other issues here on Stack Overflow, but bear with me, it's not (as far as I can tell).
I have a scraping app (using Puppeteer) that I use to scrape an Amazon public page.
It works great, I've debugged it by setting the headless: false and I see it works, and it gives me back the expected result.
The same app fails on Heroku; the problem is not with launching or using Puppeteer (I have several indications of that), but probably that I'm being identified as a robot.
The error returned is:
waiting for selector `#link_continue input` failed: timeout 30000ms exceeded
Important to say that the error is a generic Puppeteer error that indicates that the selector I'm waiting for just doesn't appear on-page.
I know it should as it's a selector on the first page I navigate to, and it works locally (as mentioned before) - the selector always exists if the page loads.
I had exactly the same error when I tried to run the scraping on my local machine before setting a User-Agent header. But at that time I could use headless: false, so I saw with my own eyes that I was being rejected due to robot-like operations on their page, and I was redirected to an error page that didn't contain this selector.
For this reason, I suspect it recognizes me as a robot, but I don't know how to debug it, it drives me crazy.
Now, if you'd like to reproduce the problem:
You need to wait for the mentioned selector on this site:
https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index
and then deploy it to Heroku and try to run it maybe 2-3 times
Two questions:
How can I proceed from here? I'm 99.9% sure it's the same issue I had previously, but I can't verify... any suggestions?
Given that this is actually the problem, can anyone suggest an easy-to-use/deploy host that also allows easy VPN configuration? I think Heroku doesn't let you do that unless you have an enterprise account.
Thanks
I would like to point out that Amazon is very good at blocking IPs. It is very likely that they have already blacklisted the IPs of cloud services like Heroku, Azure, etc. I have previously observed services like Cloudflare, Akamai, etc. blacklisting these well-known IP ranges.
In this scenario, rotating proxies could help you avoid getting blocked.
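If you go the proxy route, here is a minimal Puppeteer sketch wiring up a proxy plus an explicit User-Agent; the proxy host, credentials, and UA string are placeholders, not values from this thread:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    // Route traffic through a (hypothetical) rotating proxy endpoint.
    args: ['--proxy-server=http://proxy.example.com:8000'],
  });
  const page = await browser.newPage();
  // If the proxy requires credentials:
  await page.authenticate({ username: 'user', password: 'pass' });
  // A desktop UA string; headless Chrome's default UA advertises "HeadlessChrome".
  await page.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36'
  );
  await page.goto('https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index');
  await page.waitForSelector('#link_continue input', { timeout: 30000 });
  await browser.close();
})();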

source URI is not allowed in this document

I am working on a website on my localhost and suddenly I'm getting these errors.
I get this error on Firefox
<script> source URI is not allowed in this document
And nothing on Chrome, but if I try using the file's code directly, I get:
Application Error: There was a problem getting data for the application you requested. The application may not be valid, or there may be a temporary glitch. Please try again later.
It is basically for: https://connect.facebook.net/en_US/sdk.js.
The browser doesn't even send a GET request for the file.
Everything used to work perfectly before. Not sure why I'm getting this.
I had an extension installed on both of my browsers, and it was preventing the script from loading.
If you have any VPN or tracker-blocking extensions, they need to be disabled.
In my case it was the Disconnect Firefox/Chrome extension.

Cypress e2e testing - How to get around Cross Origin Errors?

I'm testing a web app that integrates Gmail, Slack, Dropbox, etc. I'm trying to write end-to-end tests with Cypress.io to verify that the auth flows are working. Cypress restricts me from navigating outside my app's domain and gives me a Cross Origin Error. The Cypress docs say that testing shouldn't involve navigating outside your app, but the entire purpose of testing my app is to make sure these outside auth flows are functioning.
The docs also say you can add
"chromeWebSecurity": false
to the cypress.json file to get around this restriction. I have done this, but am still getting cross origin errors (this is at the heart of my question. I would ideally get around this restriction).
I have attempted Cypress's single-sign-on example: https://github.com/cypress-io/cypress-example-recipes#logging-in---single-sign-on
I was not able to make it work, and it's a lot more code than I think is necessary.
I've commented on this thread in github, but no responses yet.
Full error message:
Error: CypressError: Cypress detected a cross origin error happened
on page load:
> Blocked a frame with origin "https://www.example.com" from
accessing
a cross-origin frame.
Before the page load, you were bound to the origin policy:
> https://example.com
A cross origin error happens when your application navigates to a new
superdomain which does not match the origin policy above.
This typically happens in one of three ways:
1. You clicked an <a> that routed you outside of your application
2. You submitted a form and your server redirected you outside of your
application
3. You used a javascript redirect to a page outside of your application
Cypress does not allow you to change superdomains within a single test.
You may need to restructure some of your test code to avoid this
problem.
Alternatively you can also disable Chrome Web Security which will turn
off this restriction by setting { chromeWebSecurity: false } in your
'cypress.json' file.
https://on.cypress.io/cross-origin-violation
setting { "chromeWebSecurity": false } in my 'cypress.json' file worked for me
If you are trying to assert the proper navigation to gmail...
You should stub the function that handles that and assert that the request contains the necessary key value pairs. Without more information on the intent of this test it is hard to give specific advice. It sounds like you would want to have a "spy"(type of test double).
Here is the documentation for spies: https://docs.cypress.io/guides/guides/stubs-spies-and-clocks.html#Stubs
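For example, if the app opens the Gmail flow in a new tab via window.open, you can stub that call and assert on the URL it was invoked with instead of actually navigating. The selector and URL fragment below are assumptions about the app under test:

cy.visit('/', {
  onBeforeLoad(win) {
    // Replace window.open with a stub so no real navigation happens.
    cy.stub(win, 'open').as('windowOpen')
  },
})
// Hypothetical button that kicks off the Gmail integration.
cy.get('[data-test=connect-gmail]').click()
cy.get('@windowOpen').should('have.been.calledWithMatch', /accounts\.google\.com/)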
If you are trying to verify the contents of the email
You will want to use a library to handle reading gmail. cy.task can be used to invoke JavaScript from an external library. This Medium article has a good write up on how to do this.
Medium article: https://medium.com/@levz0r/how-to-poll-a-gmail-inbox-in-cypress-io-a4286cfdb888
TL;DR of the article:
Set up and define the custom task (method) that will check Gmail (uses "gmail-tester" in the example)
Use Cypress to trigger the email (obviously)
Capture/define the data (like email subject, dynamic link, email content)
Assert that the data returned from gmail-tester is as expected
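A condensed sketch of that plumbing, assuming the gmail-tester check_inbox API roughly as the article describes it; the credential file paths, option names, and subject are assumptions, so check the article and the library README for the exact signature:

// cypress/plugins/index.js
const gmail = require('gmail-tester')

module.exports = (on) => {
  on('task', {
    // Poll the inbox for a message with the given subject.
    getLastEmail({ subject }) {
      return gmail.check_inbox(
        'credentials.json', // OAuth client credentials (path is an assumption)
        'token.json',       // cached Gmail token (path is an assumption)
        { subject, include_body: true, wait_time_sec: 10, max_wait_time_sec: 60 }
      )
    },
  })
}

// in the spec file:
cy.task('getLastEmail', { subject: 'Please confirm your account' }).then((email) => {
  expect(email).to.exist
})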
DON'T
use the Gmail UI in your test. This avoids test flake (all UI testing has some flakiness) and potential UI changes to the Gmail app that would require updates to your test. The backend methods that gmail-tester uses are less likely to change over time compared to the UI. You also avoid the CORS error.
Disabling cross-origin security, if you must...(eek bugs!)
If you must, add chromeWebSecurity: false to the cypress.json config file. Be sure to add it inside of the curly braces. There should only be one set of braces in that file.
NOTE: One cannot simply use cy.visit(<diffSuperDomain>); there is an open issue. Apparently this is a very difficult change to make in Cypress.
One potential workaround is to use only one superdomain per test. It should work if you set chromeWebSecurity to false and only have one domain per test (it block). Be careful, as this opens you up to cascading failures, since one test will rely on the next. Hopefully they fix this soon.
https://docs.cypress.io/guides/guides/web-security.html#Disabling-Web-Security
Since Cypress 9.6.0 you can set "experimentalSessionAndOrigin": true in cypress.json. This allows your tests to operate in multiple domains using the origin command. Example from the official blog:
it('navigates', () => {
  cy.visit('/')
  cy.get('h1').contains('My Homepage')
  cy.origin('www.acme.com', () => {
    cy.visit('/history/founder')
    cy.get('h1').contains('About our Founder, Marvin Acme') // 👍
  })
})
That blog entry also has examples of how to use this to authenticate at another domain. It worked fine for me with Keycloak, using both Chrome and Firefox.
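As a rough sketch of that authentication pattern (the auth domain and selectors are placeholders for whatever your identity provider renders, not values from the blog):

cy.origin('https://auth.example.com', () => {
  // Commands inside cy.origin run against the other superdomain.
  cy.get('#username').type('testuser')
  cy.get('#password').type('s3cret', { log: false })
  cy.get('button[type=submit]').click()
})
// Back on the application's own origin after the redirect.
cy.visit('/dashboard')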
There are a few simple workarounds to these common situations:
Don’t click <a> links in your tests that navigate outside of your application. Likely this isn’t worth testing anyway. You should ask yourself: What’s the point of clicking and going to another app? Likely all you care about is that the href attribute matches what you expect. So make an assertion about that. You can see more strategies on testing anchor links in our “Tab Handling and Links” example recipe.
You are testing a page that uses Single sign-on (SSO). In this case, your web server is likely redirecting you between superdomains, so you receive this error message. You can likely get around this redirect problem by using cy.request() to manually handle the session yourself.
If you find yourself stuck and can’t work around these issues you can just set this in your cypress.json file. But before doing so you should really understand and read about the reasoning here.
// cypress.json
{
"chromeWebSecurity": false
}
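For the SSO case in point 2, a minimal cy.request() sketch; the endpoint, request body, and token handling are assumptions about a typical session API, not details from this thread:

// Log in over HTTP instead of driving the third-party login UI.
cy.request({
  method: 'POST',
  url: 'https://sso.example.com/api/login', // hypothetical auth endpoint
  body: { username: 'user', password: 'pass' },
}).then((resp) => {
  // Persist whatever the app expects; here we assume a bearer token.
  window.localStorage.setItem('authToken', resp.body.token)
})
cy.visit('/') // loads as an authenticated user, with no cross-origin hop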

Retrieve BLOGS_UPLOADED_IMAGES in Java

I have some Java code that retrieves blogs through the REST APIs. I am not using the Social Business Toolkit; we have our own framework for that.
The application works perfectly on an on-premise connections environment and has worked on multiple versions.
However, when switching to Connections Cloud, some parts stopped working.
We get a 403 - Forbidden exception on two occasions:
Getting the details of a blog post: /blogs/[blog-id]/feed/entry/atom?entryid=[entry-id]
Getting images inside the blog post: /blogs/[blog-id]/resource/BLOGS_UPLOADED_IMAGES/[image file name]
I have fixed issue 1) by switching to the publishing API: /blogs/[blog-id]/api/entries/[entry-id].
I cannot find a way to fix issue 2). I have also found two other image URLs:
https://apps.ce.collabserv.com/blogs/[blog-id]/api/media/[file-name]
https://apps.ce.collabserv.com/blogs/[blog-id]/api/media/BLOGS_UPLOADED_IMAGES/[file-name].media
Both return:
<sp_0:error xmlns="http://incubator.apache.org/abdera" xmlns:sp_0="http://incubator.apache.org/abdera">
<code>404</code>
<message>Not Found</message>
</sp_0:error>
I want to authenticate using Basic Authentication when possible. This does not appear to work with the URLs that return 403.
My guess is that the Basic Authentication header is not picked up; I have seen this before.
I used to fix this by first calling another URL that does support Basic Authentication and then using the Ltpa cookies to authenticate the image URL.
That also does not work here: I do get LtpaTokens, but when I pass all the cookies to the URL, the image still does not load.
I prefer not to use OAuth or OAuth 2 at this moment. Is there any other way to fix this?
Anybody else managed to retrieve BLOGS_UPLOADED_IMAGES?
The issue can also be reproduced in a browser:
1. Make sure you are not yet authenticated and the blog has posts with images.
2. Go to /blogs/[blog-id]/api/media.
3. Authenticate using the popup in the browser. The Atom feed now appears; it contains the images of your blog.
403 when opening:
/blogs/[blog-id]/resource/BLOGS_UPLOADED_IMAGES/[image]
404 XML when opening the /blogs/[blog-id]/api/media/* links
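For illustration, the cookie-handoff flow described in the question looks roughly like this (sketched in Node.js rather than the asker's Java framework; the endpoint paths come from the question, everything else is an assumption):

(async () => {
  const base = 'https://apps.ce.collabserv.com'
  const auth = 'Basic ' + Buffer.from('user:password').toString('base64')

  // Step 1: call a feed that accepts Basic Authentication.
  const feed = await fetch(`${base}/blogs/[blog-id]/api/media`, {
    headers: { Authorization: auth },
  })
  // getSetCookie() needs Node 18.14+; the LtpaToken cookies live here.
  const cookies = feed.headers.getSetCookie().join('; ')

  // Step 2: replay the session cookies against the image URL.
  const img = await fetch(
    `${base}/blogs/[blog-id]/resource/BLOGS_UPLOADED_IMAGES/[image file name]`,
    { headers: { Cookie: cookies } }
  )
  console.log(img.status) // the asker reports this still comes back 403
})()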

Heroku Node.js sample Facebook app does not work in Google Chrome

The Heroku app I'm trying to get to work (code here):
https://github.com/heroku/facebook-template-nodejs
"Unsafe Javascript attempt to access frame with URL" errors occur when the page is loaded in chrome.
The login button takes you to facebook but does not actually log you into the app and gives the same errors.
Has anyone got this app to work on Chrome or can anyone advise as to how to patch it up?
P.S. it seems to work fine on Mozilla.
Almost certain this is a cross-domain policy issue, as stated above. Generally speaking, you just need to add the correct header to the response:
Access-Control-Allow-Origin: *
In Node, I think it is just a matter of adding it as another header in the response, using response.writeHead.
See http://nodejs.org/api/http.html#http_response_writehead_statuscode_reasonphrase_headers
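A minimal plain-Node sketch of that (the handler body is a placeholder):

const http = require('http')

http.createServer((req, res) => {
  // Add the CORS header alongside the usual ones via writeHead.
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Access-Control-Allow-Origin': '*',
  })
  res.end('ok')
}).listen(process.env.PORT || 3000)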
Oh, and there are explicit instructions on how to do it if you're using Express. I see no reason why it can't work using plain old Node, then.
http://enable-cors.org/server_expressjs.html
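That Express recipe boils down to a middleware registered before your routes, roughly:

// Must be app.use()'d before any route handlers.
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*')
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept')
  next()
})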
So I looked at your link; in your case I think you just have to set the header info prior to using any other Express app methods.
As to why it works in Firefox and not Chrome, I'm not sure. Both have supported CORS for many versions. Maybe you have some Chrome extension that's interfering.
