Script file not being loaded through ScriptLink custom action - SharePoint

I am having trouble with ScriptLink custom actions. I am building a SharePoint app, and I successfully added a site-scoped custom action pointing to a script file in the Style Library, as I want this particular script to be injected into all the pages of my SharePoint site.
While it works in certain situations, the script link injection breaks for no apparent reason under certain conditions. For example, when I arrive on my root web, the script is injected. But if I follow a link within this web (for example Home or Site Contents), the file that is supposed to be injected is simply never fetched from the Style Library and therefore never injected, resulting in an uncaught ReferenceError when I try to call one of the script's functions. The weirdest part is that a refresh through Ctrl+F5 fetches the script file without any problem, regardless of whether the page managed to fetch it when first accessed, and the page keeps the script until it is reached through a link again.
I've read up on SharePoint caching, thinking it might be the cause of my problem, but the trouble is that those articles mostly talk about cache-induced errors when updating a file, while I am only trying to access it.
One thing to note is that, due to limitations, I am adding the script link custom action through code. Here's an example of what this kind of call currently looks like in my app:
context.Load(context.Site.UserCustomActions);
context.ExecuteQuery();
// Create the site-scoped custom action and point it at the script in the Style Library
UserCustomAction customAction = context.Site.UserCustomActions.Add();
customAction.Name = "MyScriptLink";
customAction.Location = "ScriptLink";
customAction.Sequence = 100;
customAction.ScriptSrc = "~SiteCollection/Style Library/MySite/MyScript.js";
customAction.Update();
context.ExecuteQuery();
So, what's going on here? Why is my script not injected on certain pages? Why does a refresh on these exact same pages manage to fetch the file without any problem?

Found it! Three words: Minimum Download Strategy. Disable it (either through code or through the site settings); it messes with page navigation behavior within a SharePoint site.
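For reference, turning it off through code can look roughly like the JSOM sketch below. I'm assuming the web-level switch is exposed to JSOM as set_enableMinimalDownload, mirroring the managed CSOM property EnableMinimalDownload, so treat the exact property name as unverified.
// Rough sketch: disable Minimal Download Strategy on the current web (assumes sp.js is loaded).
var ctx = SP.ClientContext.get_current();
var web = ctx.get_web();
web.set_enableMinimalDownload(false); // property name assumed from the CSOM EnableMinimalDownload
web.update();
ctx.executeQueryAsync(
    function () { console.log('MDS disabled'); },
    function (sender, args) { console.log('Request failed: ' + args.get_message()); }
);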
Edit: If you still want MDS enabled on your site, there is a solution

Related

ClientScript is not triggered when running page from an External URL

I am running a simple Suitelet with a form, to which I am adding a client script.
form.clientScriptModulePath = './clientScript.js';
It works fine as long as the Suitelet is run from the 'normal' URL.
But if the External URL is used, the client script seems to be completely ignored: no error, just ignored.
Are client scripts not available for External URLs in NetSuite? Or is there some workaround for it?
I didn't find any documentation on External URL restrictions.
When you select Available Without Login and then save the Script Deployment record, an External URL
field is displayed on the Script Deployment page. Use this URL for Suitelets you want
to make available to users who do not have an active NetSuite session.
Note: The Website feature must be enabled for client scripts to work in externally available Suitelets.
Go to Setup > Company > Enable Features > Web Presence > Website.
The Suitelet should be in released status to avoid any other errors.
The following shows how you can specify the localization context based on the script type:
Script Type: SuiteScript 2.0 Client Script Type
Defining Localization Context Filtering: Complete the following steps to add localization context filtering to client scripts:
1. Use the localizationContextEnter and localizationContextExit entry points in your script.
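As a rough sketch, those entry points sit alongside the usual entry points in a SuiteScript 2.x client script; only the two entry-point names come from the excerpt above, the rest is ordinary client script boilerplate:
/**
 * @NApiVersion 2.1
 * @NScriptType ClientScript
 */
define([], function () {
    return {
        pageInit: function (context) {
            // normal client script logic
        },
        localizationContextEnter: function (context) {
            // runs when the user enters a localization context this deployment is filtered for
        },
        localizationContextExit: function (context) {
            // runs when the user leaves that localization context
        }
    };
});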
Please let me know how this goes!! Happy coding :)
It's been a while, but I think your clientScriptModulePath needs to be absolute for it to work externally. I think I ran into this a couple of years ago and that turned out to be the solution.
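For example, something like the line below; the /SuiteScripts folder is just an assumption, so use whatever File Cabinet path your script actually lives in:
// Absolute File Cabinet path instead of the relative './clientScript.js'
form.clientScriptModulePath = '/SuiteScripts/clientScript.js';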

Chrome Extension: How to inject a content script into an iframe's page

I'm working on a Chrome extension that among other things supports a page with multiple dynamically created iframes in it, pointing to multiple different domains. I need to load a content script into each of those iframes, ideally without loading it into every page.
There's a separate content script that's running on all those iframe pages, which can detect that it's in an applicable iframe, and I'd like it to load this other content script. After some wrangling, it can get the frameId of that iframe, but chrome.tabs.executeScript() takes only tabId, not frameId, so the script loads in the top-level page, not the desired iframe.
Note that the script I want to inject needs to run as a content script, with access to the available Chrome APIs.
Is it possible to do this? How?
Update: Ach, you're of course right, wOxxOm, that "frameId can be specified inside executeScript's second parameter". Thank you again; make that an answer and I'll accept it. I need to read more carefully, apparently. I'm a long-time programmer, but new to Chrome extensions, and there's a lot to absorb.
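For anyone else landing here, the fix looks roughly like this from the extension's background page (Manifest V2 chrome.tabs.executeScript; the message shape and file names are made up for illustration):
// background.js: inject a content script into one specific iframe of the sender's tab.
// frameId goes in executeScript's second parameter (the injection details).
chrome.runtime.onMessage.addListener(function (msg, sender) {
    if (msg.type === 'inject-into-frame') {
        chrome.tabs.executeScript(sender.tab.id, {
            file: 'iframe-content-script.js', // placeholder file name
            frameId: msg.frameId,             // target just that iframe, not the top-level page
            runAt: 'document_idle'
        });
    }
});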
Secondary question: It appears that I need to add <all_urls>, or http://*/* and https://*/*, permissions to the manifest for this to be allowed. The main content_script that's doing this has similar match patterns, and I could add this secondary script there too, but it's actually only needed for pages shown in these iframes, so this seems better to me. Are there other downsides to doing it this way, or is there some better approach, other than xhr/eval?

Why does the Foursquare API JS not work with HTTPS?

In a system I have to maintain (didn't build it, just inherited it) we have a Foursquare integration that hasn't been used in quite a while. Trying to revive it failed, because our page is now loaded via HTTPS, which it didn't use to be.
We are using the "Save to Foursquare" button as well as the API request to retrieve the number of Check-ins. I already switched all the JS includes and intent links from http to https and at least now it shows the number and the button correctly.
However, I can't click the button and checking the browser's console I found that it added a script tag to the head of this page which tries to access http://platform.foursquare.com/js/modules/widgets.asyncbundle.js. The browser obviously blocks this, because it's not using HTTPS.
The file we are explicitly loading is https://platform.foursquare.com/js/widgets.js. It seems to me like this script is not reacting correctly to HTTP vs. HTTPS. There is probably a very simple solution to this, so what am I missing?
I don't know if you've tried it yet, but the Foursquare website says this on the matter:
Change the source of the JavaScript file to https://platform-s.foursquare.com/js/widgets.js
Add {"secure":true} to the global configuration block (window.___fourSq)
The same link (see below) lists all the different ways to call the Save to Foursquare widget using its .saveTo() function.
https://developer.foursquare.com/overview/widgets
I hope this information and these links help! Cheers.

Can I capture JSON data already being sent with a userscript/Chrome extension?

I'm trying to write a userscript/Chrome extension to capture JSON data being sent while using a web service, so that I can reformat it and display a selected portion on the page. Currently the JSON is sent as the application loads (as I've observed by watching traffic with Fiddler 2). Is my only option to request the JSON again, or is capturing it possible? As I'm not providing a code example, even some guidance on what method/topic to research, or whether I'm barking up the wrong tree, would be a welcome answer.
No easy way.
If it is for a specific site, you might look into intercepting and overwriting part of the code that sends the request. For example, if it is sent on a button click, you can replace the existing click handler with your own implementation.
You can also try to make a proxy for XMLHttpRequest. Not sure if this is even possible, never seen a working example. You can look at some attempts here.
For all these tasks you would probably need to run your JavaScript code outside the sandboxed content script to be able to access parent page variables, so you would need to inject a <script> tag with your code right into the page from the content script:
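Something along these lines should work; the file names are placeholders, and the injected file has to be listed under web_accessible_resources in the manifest:
// content-script.js: inject a page-level script so it can see the page's own XMLHttpRequest
var s = document.createElement('script');
s.src = chrome.runtime.getURL('page-hook.js'); // placeholder file name
s.onload = function () { this.remove(); };
(document.head || document.documentElement).appendChild(s);

// page-hook.js: runs in the page context and wraps XMLHttpRequest so JSON responses
// can be inspected and forwarded back to the content script (e.g. via window.postMessage)
(function () {
    var origOpen = XMLHttpRequest.prototype.open;
    XMLHttpRequest.prototype.open = function () {
        this.addEventListener('load', function () {
            window.postMessage({ source: 'xhr-capture', body: this.responseText }, '*');
        });
        return origOpen.apply(this, arguments);
    };
})();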

How to check if my website is being accessed by a crawler?

How do I check whether a certain page is being accessed by a crawler, or by a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.
This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
It would take a bit of engineering to come up with a solution.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google or Bing or anything else legitimate, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if the user agent claims to be IE, use JavaScript to check for an ActiveX object (see the sketch after this list).
Three, take note of what it's accessing and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point when trying to determine whether it's humanly possible to consume the data that fast. This is tricky; you'll most likely have to rely on cookies, since an IP can be shared by multiple users.
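As a small illustration of the second point, a client-side check along these lines (what you do with the result, such as reporting it back to the server, is up to you):
// Runs in the page. A client whose user agent claims to be Internet Explorer
// but exposes no ActiveX support is likely a crawler spoofing its headers.
var claimsIE = /MSIE|Trident/.test(navigator.userAgent);
var hasActiveX = ('ActiveXObject' in window);
if (claimsIE && !hasActiveX) {
    // flag this session as suspicious, e.g. send a beacon back to the server
}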
You can use the robots.txt file to block access to crawlers, or you can use JavaScript to detect the browser agent and switch based on that. If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root, and compliant crawlers will stop visiting your site (malicious ones can still ignore it).
I had a similar issue in my web application because I created some bulky data in the database for each user that browsed the site, and crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers because I wanted my site indexed and found; I just wanted to avoid creating useless data and to reduce the time taken to crawl.
I solved the problem in the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (since 2.0), which indicates whether the browser is a search-engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET Javascript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking a button. Some crawlers do process JavaScript, and it is obvious they would use the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML/JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div
class="all rndCorner"
style="cursor:pointer;border:3;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
onclick="$TodoApp.$AddSampleTree()">
Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably so far, although crawlers could be made even cleverer, maybe after reading this article :D
