I have a custom HTML file set up for Azure AD B2C's sign-in / sign-up user flow that looks like this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="css/sign_up.css">
<title>My Sign up</title>
</head>
<body>
<div id="api">
</div>
</body>
</html>
I've hosted this in my web app service and placed the URL into the Custom Page URI field in the flow. Screenshot here.
However, when I hit "Run User Flow" the default Microsoft selfAsserted page is still loaded. Is there anything that would cause this to happen?
To clarify: I have hit save after entering the URI and the Custom Page column says "Yes" for Local account sign up page.
Check again and make sure the custom page status is Yes. In your screenshot, the status is No for the custom page.
It turned out to be a CORS issue. Adding https://<resourcegroup>.b2clogin.com to my app service's CORS whitelist resolved the problem.
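For context, the relevant setting is the App Service's CORS blade in the Azure portal: the b2clogin.com origin has to be allowed there because the B2C page pulls the custom HTML in via an XHR request from that domain. If you were instead serving the custom page from your own Node/Express app (an assumption for illustration only, not what I used), the equivalent in code would look roughly like this, using the cors package:

// Hypothetical sketch: serving the B2C custom page from an Express app and
// allowing the b2clogin.com origin that fetches it via XHR.
const express = require('express');
const cors = require('cors');

const app = express();

// Allow the Azure AD B2C origin to request the page content cross-origin.
app.use(cors({ origin: 'https://<resourcegroup>.b2clogin.com' }));

// Serve the static sign-up page and its CSS.
app.use(express.static('public'));

app.listen(3000);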
Here is a sample URL that returns JSON of an Instagram user's data: https://www.instagram.com/therock/?__a=1
And it returns JSON like this:
{
  "logging_page_id": "profilePage_232192182",
  "show_suggested_profiles": true,
  "show_follow_dialog": false,
  "graphql": {
    "user": {
      "biography": "founder",
      "blocked_by_viewer": false,
      "business_email": null,
      "restricted_by_viewer": false,
      "country_block": false,
      "external_url": "https://projectrock.online/7ad",
      "external_url_linkshimmed": "https://l.instagram.com/?u=https%3A%2F%2Fprojectrock.online%2F7ad&e=ATMKh6M0eOgq-_jVoR3-xJ0Q2wwVSenYemMoYM0A0nWrW9Y5P7mDXX1dkk2dDLidhEuV1Wees7Z3teLJqp7vB2k&s=1",
      "edge_followed_by": {
        "count": 199139001
      },
      "followed_by_viewer": false,
      "edge_follow": {
        "count": 406
      },
      "follows_viewer": false,
      "full_name": "therock",
      "has_ar_effects": false
I am working on an ASP.NET Core API and have an endpoint that takes in an Instagram handle and parses the JSON. It works fine locally, but when I hit the same endpoint on the Azure-deployed API, I get the login page instead:
<!DOCTYPE html>
<html lang="en" class="no-js not-logged-in client-root">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>
Login • Instagram
</title>
<meta name="robots" content="noimageindex, noarchive">
<meta name="apple-mobile-web-app-status-bar-style" content="default">
<meta name="mobile-web-app-capable" content="yes">
<meta name="theme-color" content="#ffffff">
<meta id="viewport" name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, viewport-fit=cover">
<link rel="manifest" href="/data/manifest.json">
I tried using a third-party browser-as-a-service (PhantomJsCloud), but it returns the same login page. I thought it was the CORS policy, but fixing that didn't work; I also tried setting the cookie that was returned, to no avail. I am really lost here and would be thankful if anyone can point out why this is happening. Thank you!
Instagram probably doesn't want you to fetch it like that and has some mechanism to detect that your request is made programmatically. I assume it works when you call it in the browser. You can try Cypress or Puppeteer to still make it work, or use the official API with tokens, etc.
EDIT:
Okay, I played around a bit and could make it somehow work, though I'm not sure how reliable this is:
First I started with the following: https://codelike.pro/fetch-instagram-posts-from-profile-without-__a-parameter/
After getting the parsed JSON object, I looked up entry_data.ProfilePage[0].graphql.user.edge_owner_to_timeline_media.page_info.end_cursor and used that end_cursor for the following request:
https://www.instagram.com/graphql/query/?query_id=17888483320059182&id=928659671&first=100&after= where you need to use the end_cursor value for the &after query parameter. query_id is for media in the Instagram account, and id is the ID of the Instagram account (you can get the account ID from the parsed object).
query_id is some kind of hardcoded value from Instagram; other IDs can be found here: https://gist.github.com/Carlos-Henreis/2df27431fa5d7a84b7a5e57ee1bf6ae2#file-query_id-csv
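Putting the pieces together, a rough sketch of that second request in Node 18+ (using the built-in fetch) could look like this; query_id and id are the example values from above, and end_cursor is a placeholder you fill in yourself:

// Minimal sketch, Node 18+ (built-in fetch). query_id and id are the example
// values from this answer; end_cursor is a placeholder taken from
// ...page_info.end_cursor of the previous response.
const queryId = '17888483320059182';
const accountId = '928659671';
const endCursor = '<end_cursor>';

const url = 'https://www.instagram.com/graphql/query/' +
  `?query_id=${queryId}&id=${accountId}&first=100&after=${encodeURIComponent(endCursor)}`;

(async () => {
  const res = await fetch(url, {
    // A cookie from a logged-in session may be required (see Edit 2 below).
    // headers: { cookie: 'sessionid=...' },
  });
  const data = await res.json();
  console.log(JSON.stringify(data, null, 2));
})();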
Edit 2:
I realized this will only work when your IP is not flagged by Instagram or you send a cookie from a logged-in session; otherwise you won't get the ProfilePage but a LoginAndSignupPage instead, unfortunately.
For more info, see here: https://stackoverflow.com/a/57722553/5195852
I developed an Outlook Web Add-in using Visual Studio 2017, and so far all my testing was based on hosting the add-in from localhost; I had no issues with that, and everything worked fine. Now I have moved my add-in to a shared folder on my SharePoint server so that others can test it.
Within my manifest file, I changed the line which defines the URL of my function file to point to where it is hosted:
<FunctionFile resid="FunctionFile.Url" />
I also added a line under <AppDomains>:
<AppDomain>https://<My URL Domain></AppDomain>
The image of my add-in icon loads fine; however, when I click the add-in icon on my OWA page, I get the following error:
SEC7120: [CORS] The origin 'https://' failed to allow a cross-origin document resource at 'ms-appx-web:///assets/errorpages/forbidframingedge.htm#https:///Functions/FunctionFile.html?et='.
Is there any way to allow my add-in to run? I'm currently doing my testing on the Edge Browser.
Thanks!
Update:
Here's my function file HTML code:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<title></title>
<script src="../Scripts/jquery-3.3.1.min.js" type="text/javascript">
</script>
<script src="../Scripts/Office/MicrosoftAjax.js" type="text/javascript">
</script>
<script src="../Scripts/Office/1/office.js" type="text/javascript">
</script>
<script src="FunctionFile.js" type="text/javascript"></script>
</head>
<body>
<!-- NOTE: The body is empty on purpose. Since this is invoked via a button, there is no UI to render. -->
</body>
</html>
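For reference, FunctionFile.js in this kind of add-in just registers the functions that the ribbon button invokes; a minimal sketch of that pattern (the name doSomething is a placeholder, not my actual function) looks like this:

// Typical minimal FunctionFile.js. "doSomething" is a placeholder name; it must
// match the FunctionName element declared in the add-in manifest.
Office.initialize = function () {
  // Office.js is ready; a UI-less function file needs no further setup here.
};

function doSomething(event) {
  // ... do the actual work for the button here ...

  // Required: signal to Office that the invocation has completed.
  event.completed();
}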
I am creating a web browser in Python 3 with PyGObject (GTK3 and WebKit2), and I want to create a home page that includes Google. I created an HTML file with an iframe, but I see this error:
Refused to display 'https://www.google.com/' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.
How can I set X-Frame-Options? Every solution I find on the web involves configuring a local server, but I don't have a local server.
Here is my home page:
<!DOCTYPE html>
<html>
<head>
<title>(Nouvelle page)</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
</head>
<body>
<iframe src="https://www.google.com/"></iframe>
</body>
</html>
This is not something you can fix locally, unfortunately.
There is a similar question here: https://stackoverflow.com/a/8700754/2773979
The problem isn't that your page is missing that header; it is that Google sets this header precisely to prevent people from embedding the site in an iframe. Browsers comply with this by refusing to load/display the content of the iframe.
Note that there are workarounds, like proxying the Google page, but those are probably against the terms of service.
Puppeteer's click API does not trigger on an image map element.
I am using Puppeteer to scrape different e-commerce sites. Some of them show a popup on page ready. I am trying to close that popup using the click API by targeting the element, but I keep getting the error "Node is either not visible or not an HTMLElement".
I have tried clicking on these selectors:
coords='715,5,798,74'
#monetate_lightbox_mask
body>div>div:nth-child(1)
body>div:nth-child(1):div:nth-child(1)
URLs for scraping:
https://www.hayneedle.com/product/humantouchijoymassageanywherecordlessportablemassager.cfm
https://www.hayneedle.com/product/napoleonfiberglowventedgaslogset.cfm
https://www.hayneedle.com/product/napoleonsquarepropanefirepittable1.cfm
Please suggest.
Regards,
Manjusha
I would personally use the following to wait for and click the close button:
const close_button = await page.waitForSelector( '[id$="ltBoxMap"] > [href="#close"]' );
await close_button.click();
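For context, a minimal standalone script around those two lines might look like this (selector and URL taken from the question and answer above; timeouts and error handling are omitted for brevity):

// Minimal sketch around the wait-and-click above; the URL is the first product
// page from the question. Error handling and timeouts are omitted for brevity.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto(
    'https://www.hayneedle.com/product/humantouchijoymassageanywherecordlessportablemassager.cfm',
    { waitUntil: 'networkidle2' }
  );

  // Wait for the popup's close link and click it.
  const close_button = await page.waitForSelector('[id$="ltBoxMap"] > [href="#close"]');
  await close_button.click();

  // ... continue scraping here ...

  await browser.close();
})();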
But unfortunately, it appears that the website has implemented bot detection and is displaying the following page:
The source of the resulting web page looks like this:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en-US"><head profile="http://gmpg.org/xfn/11">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=1000">
<meta name="ROBOTS" content="NOINDEX, NOFOLLOW">
<meta http-equiv="cache-control" content="max-age=0">
<meta http-equiv="cache-control" content="no-cache">
<meta http-equiv="expires" content="0">
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT">
<meta http-equiv="pragma" content="no-cache">
<title></title>
</head>
<body>
<h1>Access To Website Blocked</h1>
</body></html>
The bot detection service cannot be fooled simply by changing the user agent, so you will need to experiment with some other methods to bypass the service if you would like to scrape the website.
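One thing commonly experimented with (no guarantee it gets past this particular service) is puppeteer-extra with its stealth plugin, which patches many of the browser fingerprints that bot-detection services check:

// Sketch of puppeteer-extra with the stealth plugin. This is one option to
// experiment with, not a guaranteed way past the site's bot detection.
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(StealthPlugin());

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://www.hayneedle.com/', { waitUntil: 'networkidle2' });
  console.log(await page.title());
  await browser.close();
})();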
I have a website (just for my own reference, nothing interesting for the public).
When I load my page (Test Page) in IE9 and view the source of the page, I can see the HTML as expected.
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<title>Test Page</title>
</head>
<body>
<div id="body">
Simple test page, with an image. <br />
<img src="http://www.w3.org/2008/site/images/logo-w3c-mobile-lg" alt="WC3 logo" />
</div>
</body>
</html>
But when I look at the developer toolbar (by pressing F12), the HTML appears inside a <frameset> tag.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Marrowbrook.com </title>
</head>
<frameset rows="100%,*" border="0">
<frame src="http://217.118.128.188/wotney//TestFiles/testpage.htm" frameborder="0" />
<frame frameborder="0" noresize />
</frameset>
<!-- pageok -->
<!-- 02 -->
<!-- ->
</html>
Using Chrome, if I right-click and select View Source, I see the above <frameset> code, but I can also right-click and select View Frame Source, where I can see the HTML as expected.
Can anyone tell me why I'm seeing this ?
Thanks.
This can happen when your host name was bought from one provider but you are hosting the site with another, and a frame-based redirect has been set up.
What platform is your site hosted on? It looks like the server is doing something, because the src of the frame in the frameset points to your page. It could be some kind of 'preview mode' of the server/CMS. So it looks like the server is serving a default page with a frameset on it that pulls your actual page into it after you deploy.
It also happens when the domain you are using to reach the site is set up with "masked" forwarding.
Check the domain manager at your hosting provider and remove the masked forwarding.