How to implement logic based on external redirects? - node.js

I'm building a website for a client (real estate), and on the website are links to a different website (adverts for properties). My client routinely activates and deactivates these adverts when he rents out a certain property.
The hrefs on my links look something like this:
<a href="https://domain.xx/estate/idxx/des-crip-tion-xx-xx-x-xx/">. If the advert is indeed active, it just takes them to the advert. If it is not active, however, the website in question redirects the user to https://domain.xx/estate-for-rent/city/, effectively sending the users to my client's competition.
I wish to implement some logic where, before handing the users over to the other website, the server checks whether the link gets redirected to https://domain.xx/estate-for-rent/city/ (or some similar check), and if so uses preventDefault, or something, and notifies the user that the advert is not available instead of sending them to the other website.
I wonder if I can use the fact that only when the advert is active does the resulting URL in the user's browser window (after they've been directed to the other website) match the URL in my href. Can I somehow get the server to try to access the URL in my href, see where it gets redirected, and then do something based on that? On the back-end I'm running Node.js with Express, and if it matters, I'm relying heavily on EJS for templating. Thanks in advance for any help!
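For what it's worth, here is a minimal sketch of exactly that idea, assuming Node 18+ (for the built-in fetch) and Express; the /check-advert route name is hypothetical:

```js
// Minimal sketch: an Express route that checks whether an advert URL
// still resolves, or gets redirected to the generic listings page.
// Assumes Node 18+ (built-in fetch); /check-advert is a hypothetical name.
const express = require('express');
const app = express();

app.get('/check-advert', async (req, res) => {
  const target = req.query.url;
  // In production, validate `target` against your own list of advert
  // URLs to avoid turning this route into an open proxy (SSRF).
  try {
    // redirect: 'manual' stops fetch from following the redirect,
    // so a 3xx status tells us the advert has been deactivated.
    const response = await fetch(target, { method: 'HEAD', redirect: 'manual' });
    const redirected = response.status >= 300 && response.status < 400;
    res.json({ active: !redirected, location: response.headers.get('location') });
  } catch (err) {
    res.status(502).json({ active: false, error: 'Could not reach advert site' });
  }
});

app.listen(3000);
```

The front end could call this route before following a link and show a notice instead when active is false.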

This sounds more like a problem you could solve on the client as opposed to the server. For example, at a high level, here's how I would do it (see the sketch below):
- Handle the click event for each link (really simple to do a catch-all with jQuery).
- Fire off a HEAD request via AJAX to the destination URL (this would be much more efficient than a GET, but depends on the external service supporting this verb).
- Use the status code to determine what to do next (e.g. on 2xx allow the redirect, on 3xx pop a message and block).
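A minimal client-side sketch of those steps, assuming the external site allows cross-origin HEAD requests (CORS); the .listing-link selector is a hypothetical placeholder:

```js
// Sketch: intercept clicks, probe the destination with a HEAD request,
// and block navigation if the advert URL redirects away.
// Assumes the external site permits cross-origin requests (CORS).
document.querySelectorAll('a.listing-link').forEach((link) => {
  link.addEventListener('click', async (event) => {
    event.preventDefault();
    try {
      const response = await fetch(link.href, { method: 'HEAD' });
      // fetch follows redirects; response.url holds the final URL.
      if (response.redirected && response.url !== link.href) {
        alert('Sorry, this advert is no longer available.');
        return;
      }
      window.location.href = link.href;
    } catch (err) {
      // Network/CORS failure: fall back to normal navigation.
      window.location.href = link.href;
    }
  });
});
```

Note that browser fetch follows redirects transparently, which is why this sketch compares response.url against the original href rather than inspecting a 3xx status directly.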

Related

How does Blazor hijack the Browser-Back button?

Blazor Server apps use a SignalR circuit.
I can somewhat understand that there is JS intercepting events from the DOM, so instead of sending a new HTTP GET request the framework manipulates the DOM and displays the new Blazor page.
But how is it even possible that the circuit is still active and working when the back button is pressed? This is a BROWSER FEATURE, not some HTML element that can be changed, right? Wouldn't it be a security issue if the browser back button behavior could be manipulated in different ways?
Not firing a new HTTP GET request on page back seems pretty hacky. Wouldn't that allow malicious websites to do the same? Can websites access the last page visited that way?
How does the browser "know" that the last page should also use the same websocket circuit?
Is it then possible to tell the browser that it should establish a websocket on a past page, that didn't even have any before (would seem like a security risk)?
How does the back button differ from hitting "enter" in the address bar (which will always cut and establish a new circuit)?
Is the back button exactly the same as calling JS history.back()?
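For context, the mechanism behind this is the standard History API rather than anything Blazor-specific: a script can push history entries and react to the back button via popstate, all without ever leaving the document, so the underlying WebSocket circuit stays open. A minimal sketch of that pattern (not Blazor's actual source; render and data-spa are hypothetical):

```js
// Sketch of the History API pattern SPAs (including Blazor Server's
// JS layer) rely on. Because the "navigation" never leaves the
// document, the WebSocket circuit underneath is never torn down.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-spa]');
  if (!link) return;
  event.preventDefault();
  history.pushState({}, '', link.href); // change the URL, no HTTP request
  render(link.href);                    // hypothetical client-side render
});

window.addEventListener('popstate', () => {
  // Fires for the back/forward buttons and for history.back() alike;
  // again no new HTTP GET, the page just re-renders client-side.
  render(location.href);
});

function render(url) {
  document.body.dataset.route = new URL(url).pathname; // placeholder
}
```

This also bears on the last two questions: the back button and history.back() both fire popstate on a pushState entry, whereas hitting Enter in the address bar performs a full navigation, which is why that always cuts the circuit and establishes a new one.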

How do I capture the URL after an external website login in ReactJS?

I want to retrieve the URL after opening an external website pop-up in my ReactJS/NodeJS application. Basically, in my application I have a button that redirects the page to the Microsoft Online login page. What I want is the URL of the page after the user logs into Microsoft Online.
Is there any way that's possible? If so, what are my options?
If you navigate to another webpage, your React application is no longer being served to your browser and can't do anything. You would need to have a script running on the Microsoft website, either by writing it into the source code (which I doubt you can do) or by some other method such as a browser extension.
There is no way to track a different system beyond the methods @izb mentioned, unless that system already provides one.
Many systems provide information from their servers via push/ping callbacks.
For example, with one payment system, I redirect the request, the customer pays, and the payment system redirects back to the success or failure page I previously configured in their panel.
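In practice, the usual workaround is to register a redirect URI on your own origin with the identity provider, so the external login eventually lands back on a page you control and whose URL you can read. A rough sketch, with illustrative (not exact MSAL) parameter values:

```js
// Sketch: open the external login in a popup whose redirect_uri points
// back at OUR origin, then poll until the popup returns and read its URL.
// client_id / redirect_uri values here are illustrative placeholders.
const popup = window.open(
  'https://login.microsoftonline.com/common/oauth2/v2.0/authorize' +
    '?client_id=YOUR_CLIENT_ID' +
    '&response_type=code' +
    '&redirect_uri=' + encodeURIComponent('https://yourapp.example/auth/callback'),
  'login',
  'width=500,height=600'
);

const timer = setInterval(() => {
  try {
    // Throws while the popup is on the Microsoft domain (cross-origin);
    // succeeds once it is back on our same-origin redirect URI.
    if (popup.location.origin === window.location.origin) {
      clearInterval(timer);
      console.log('Callback URL:', popup.location.href); // contains ?code=...
      popup.close();
    }
  } catch (e) {
    // Still on the external origin; keep polling.
  }
}, 500);
```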

When should I have addresses with #?

When should I have addresses with # and when should I have a separate address for each page or part of a page?
For example
https://ca.news.yahoo.com/nick-hornby-boys-read-telling-101350029.html
I know sometimes we need to have #, for instance when we call a JavaScript method to show a lightbox (modal), but some websites are using it in the unique addresses of their pages.
For example, iCloud is using it to show its modal when you click on the "create one now" link.
https://www.icloud.com/#
However, as I said, some websites are using that as a method to have unique addresses for their pages.
For example, the following address shows a single page of the iCloud website.
https://www.icloud.com/#find
Is it correct to follow this practice of having # in the unique addresses of our website pages, similar to what the iCloud website does?
I am not asking about icloud.com, that's just an example. What I mean is that if you go to www.icloud.com/#find you will see it is not a single-page website, because there is just a header, a login page and a footer. So why are they using #find and not something like find.html? Is there a specific reason I am missing?
URL fragments (#whatever) are a way to address sub-parts of a document. You should keep in mind that these are never sent to or seen by the server, so you can't really use them server-side to differentiate between URLs. You can use them to make parts of a static page addressable or, with the right amount of JS contortions, use them as a foundation for addressable navigation within a single-page app. Some JS frameworks rely on this fairly explicitly, although this is starting to go out of style as most browsers now support the History API.
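A tiny illustration of that hash-based navigation (the app element is a hypothetical placeholder):

```js
// Sketch of hash-based routing: the fragment changes entirely
// client-side, the server never sees it, and the hashchange event
// drives the "navigation".
window.addEventListener('hashchange', () => {
  const route = location.hash.slice(1) || 'home'; // "#find" -> "find"
  document.getElementById('app').textContent = 'Showing page: ' + route;
});
```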

Is it advisable to configure a URL with both POST and GET?

I have a link on my page that redirects to another page, but the request is sent through the POST method. When the user refreshes the new page, the request is sent through the GET method. The URL is just used to display a page. My question is: is it advisable to use both POST and GET for the same URL, or will it cause problems related to security or anything else? If so, please explain how.
No. Use POST if the link is executing an unsafe action (e.g. logout, change password) or it is sending sensitive data you do not want displayed in the URL (URLs are logged by default in the browser history and by many appliances, proxies and web servers). POST can also be used when a large amount of information is to be sent.
Otherwise use GET.
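One common way to get the behaviour described in the question without serving the same URL over both verbs is the Post/Redirect/Get pattern. A minimal Express sketch with illustrative route names:

```js
// Sketch of Post/Redirect/Get: the POST performs the action, then a
// 303 redirect makes the browser fetch the display page with GET, so
// refreshing only repeats the harmless GET. Route names are illustrative.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/checkout', (req, res) => {
  // ...perform the state-changing action here...
  res.redirect(303, '/confirmation'); // 303 forces the follow-up to be a GET
});

app.get('/confirmation', (req, res) => {
  res.send('Order received. Refreshing this page is safe.');
});

app.listen(3000);
```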

Google Chrome Extension - prevent cookie on jQuery Ajax request, or use a chrome.extension API

I have a great working chrome extension now.
It basically loops over the listings HTML of a web auction site: if a user has not paid to have their image shown in the main list, a default image is shown.
My plugin uses a jQuery Ajax request to load the auction page and find the main image to display as a thumbnail for any missing images. WORKS GREAT.
The plugin finds the correct image URL, updates the DOM to the new image, and sets a new width.
The issue is that the auction site tracks all page views and saves them to a "recently viewed" section of the site, where users can see any auctions they have clicked on.
ISSUE: My plugin uses Ajax, and the cookies are sent with the jQuery Ajax request. I am pretty sure I cannot modify the cookies in this request, so the auction site tracks the request, and any listing with a missing image now shows up in my "recently viewed" even though I have not actually navigated to it.
- Can I remove cookies for the Ajax request? (I don't think I can.)
- Can Chrome remove the cookie (only for the Ajax requests)?
- Could I get Chrome to make the request with no cookie (e.g. the way curl would)?
Just for the curious.
Here is a page with missing images on this auction site
http://www.trademe.co.nz/Browse/SearchResults.aspx?searchType=all&searchString=toaster&type=Search&generalSearch_keypresses=9&generalSearch_suggested=0
Thanks for any input, John.
You can use the webRequest API to intercept and modify requests (including blanking headers). It cannot be used to modify requests which are created within the context of a Chrome extension, though. If you want to use this API for cookie-blanking purposes, you have to load the page in a non-extension context, either by creating a new tab or by using an off-screen tab (using the experimental offscreenTabs API).
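For illustration, a Manifest V2-style sketch of that header-blanking listener (subject to the extension-context caveat above):

```js
// Sketch: strip the Cookie header from requests to the auction site.
// Requires "webRequest", "webRequestBlocking" and matching host
// permissions in the manifest (Manifest V2 blocking style).
chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => ({
    requestHeaders: details.requestHeaders.filter(
      (h) => h.name.toLowerCase() !== 'cookie'
    ),
  }),
  { urls: ['*://*.trademe.co.nz/*'] },
  ['blocking', 'requestHeaders']
);
```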
Another option is to use the chrome.cookies API and bind an onChanged event. Then you can intercept cookie modifications and revert the changes using chrome.cookies.set.
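A rough sketch of that revert approach; the tracking cookie's name, recently_viewed, is a hypothetical guess:

```js
// Sketch: snapshot a tracking cookie, then restore it whenever the
// site changes it. The cookie name 'recently_viewed' is hypothetical.
let savedValue = null;

chrome.cookies.onChanged.addListener(({ removed, cookie }) => {
  if (cookie.name !== 'recently_viewed' ||
      !cookie.domain.endsWith('trademe.co.nz')) {
    return;
  }
  if (savedValue === null) {
    savedValue = cookie.value; // first sighting: take a snapshot
  } else if (!removed && cookie.value !== savedValue) {
    chrome.cookies.set({
      url: 'https://www.trademe.co.nz/',
      name: 'recently_viewed',
      value: savedValue, // put back the pre-Ajax value
    });
  }
});
```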
The last option is to create a new window+tab in Incognito mode. This method is not reliable and should not be used:
- The user can disallow access to Incognito mode.
- The user could have navigated to the page in Incognito mode, causing cookie fields to be populated.
- It's disruptive: a new window is created.
Presumably this Ajax interaction is being run from a content script? Could you run it from the background page instead and pass the data to the content script? I believe the background page operates in a different context and shouldn't send the normal cookies.
