Are there standard file names for Terms of Use / Privacy Policy? - browser

Are there standard file names for Terms of Use and Privacy Policies?
For some reason, I remember that Internet Explorer and possibly other browsers used to look for them on the current website, and display a little warning icon...

Are you looking for P3P?
The privacy policy can be retrieved as an XML file or can be included, in compact form, in the HTTP header. The location of the XML policy file that applies to a given document can be:
- specified in the HTTP header of the document
- specified in the HTML head of the document
- if neither of the above is specified, the well-known location /w3c/p3p.xml is used (for a similar location, compare /favicon.ico)
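
For illustration, the two explicit ways of declaring the policy location look roughly like this (a sketch following the P3P 1.0 spec; the compact-policy tokens shown are hypothetical examples):

```
P3P: policyref="/w3c/p3p.xml", CP="NOI DSP COR"
```

or, in the HTML head:

```html
<link rel="P3Pv1" href="/w3c/p3p.xml">
```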

What does atlOrigin query parameter mean in Jira URL?

When I open a Jira issue link from a third party, or paste one from the clipboard, the URL always looks like this:
https://mycompany.atlassian.net/browse/companyAlias-issueNumber?atlOrigin=longCharacters
I am curious what atlOrigin means, and why they use it.
There is a small explanation here https://developer.atlassian.com/developer-guide/client-identification/
Origin ID for links to Atlassian UI
Identify HTML links into our product user interface with a unique atlOrigin query string parameter added to the URL. This is similar to the UTM parameter used for click tracking in marketing web sites. In this context, uniqueness is guaranteed by Atlassian issuing the origin ID. For example, if linking into Jira’s Issue page:
https://devpartisan.atlassian.net/browse/TIS-11?atlOrigin=abc123
We generate a unique value for each integration so if you have more than 1 integration, please make sure we have properly associated each integration to a different origin value.
We do not recommend using this parameter in REST APIs or OAuth 2.0 flows, where this parameter might not be properly ignored.
The result is a very Google-Search-unfriendly URL 😕
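
As an illustration, a link-sharing tool could strip the tracking parameter with the standard WHATWG URL API (a small sketch; the project key and origin value here are made up):

```javascript
// Remove Atlassian's atlOrigin click-tracking parameter from a Jira link
// before sharing it. Uses the standard URL API, available in browsers
// and Node.js.
function stripAtlOrigin(link) {
  const url = new URL(link);
  url.searchParams.delete('atlOrigin');
  return url.toString();
}

const shared = stripAtlOrigin(
  'https://mycompany.atlassian.net/browse/PROJ-42?atlOrigin=abc123'
);
// shared === 'https://mycompany.atlassian.net/browse/PROJ-42'
```

Other query parameters are left intact, so a link that legitimately carries parameters of its own is not damaged.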

How does tracking of the web traffic source work?

Maybe a stupid question, but I can't find any answer to it on the web.
In Google Analytics it is possible to check the origin of a connection to our website. My question: how can Google track the origin of those connections?
If there is info in document.referrer (for the JavaScript tracker; with the measurement protocol you'd have to pass a referrer as a parameter), Google identifies the source as a referrer, unless it is configured (in the defaults or per custom settings) as a search engine (which is really just a referrer with a known search parameter). Also, via the settings you can exclude URLs from the referrer reports so they will appear as direct traffic.
If there are campaign parameters, Google uses those (or else a Google click ID (gclid) from autotagging in AdWords, which serves a similar purpose). If the campaign parameters or gclid are stripped out (e.g. by redirects), AdWords ad clicks will be reported as organic search.
If there is no referrer and no campaign parameters/gclid (i.e. a direct type-in or a bookmark), Google will identify the source as a direct hit, unless you have clicked an AdWords ad before. In that case the acquisition report will report the source as CPC (cost per click); as Google puts it, they use the last known marketing channel as the source, and direct is not a marketing channel according to Google. However, the multi-channel reports will identify those more correctly as direct visits (which is why multi-channel and acquisition reports usually do not quite match).
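
The decision tree above can be sketched as a small function. This is a simplification: the field names on `hit` and the search-engine list are assumptions for illustration, and the real Google Analytics processing is considerably more involved.

```javascript
// Simplified sketch of traffic-source attribution as described above.
// A "hit" here is a plain object; its fields are illustrative assumptions.
function classifySource(hit, knownSearchEngines = ['google.com', 'bing.com']) {
  // Campaign parameters (utm_*) or a gclid from autotagging win first.
  if (hit.campaignParams || hit.gclid) return 'campaign';

  if (hit.referrer) {
    const host = new URL(hit.referrer).hostname;
    // A "search engine" is really just a referrer with a known search parameter.
    if (knownSearchEngines.some(se => host === se || host.endsWith('.' + se))) {
      return 'organic search';
    }
    return 'referral';
  }

  // No referrer and no campaign info: direct, unless a previous ad click is
  // remembered (the last known marketing channel wins in acquisition reports).
  return hit.previousAdClick ? 'cpc' : 'direct';
}
```

For example, `classifySource({ referrer: 'https://www.google.com/search?q=x' })` yields `'organic search'`, while an empty hit yields `'direct'`.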

moving from permissions to optional_permissions: how to handle content_scripts?

Originally posted this question here:
https://groups.google.com/a/chromium.org/d/msg/chromium-extensions/wbSpXvnO10A/nov36skmnQ0J
My extension has an optional feature that interacts with the user's Gmail tab. We don't want to mention mail.google.com domains at all in the permission confirmation that the user sees when first installing the extension, so I moved that entry out of the manifest's permissions block and into the optional_permissions block. We also needed to use a content script tied to mail.google.com, but defining this in the manifest causes the 'mail.google.com' permission warning that is spooking some users.
I've tried removing the content_scripts manifest block and using programmatic injection instead, as described here: http://developer.chrome.com/extensions/content_scripts.html#pi
However, scripts injected that way are not content scripts and don't have access to the needed APIs (chrome.tabs, etc.).
Is there some way to get the best of both worlds: use optional_permission, AND get the content scripts added to a matching URL, but only if the user has approved the optional permission?
It seems like you could create a background page, and call chrome.tabs.query against your optional origin to get a list of tabs that match that host. You can then call programmatic injection to the content script (chrome.tabs.executeScript). None of these require the "tabs" permission (many "tabs" functions don't require any special permission, and it intelligently lets you query for tabs whose origins match your optional permission)
You could call this every second or so to see if there are any new tabs for which you haven't yet called executeScript.
It would be nice if this were edge-triggered. See https://code.google.com/p/chromium/issues/detail?id=264704
You can actually get it to be mostly edge-triggered by using chrome.tabs.onUpdated.addListener and simply trying to inject every time it fires (which will be every time a page loads in any tab, regardless of whether you have permission or not). You'll get a lot of errors in the background script's console when you don't have permission. It is important to have your content script set a variable like _I_already_executed = true and check for its existence, so that you're not injecting multiple times (this event gets triggered several times for each page load).
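A minimal sketch of that approach, assuming Manifest V2's chrome.tabs.executeScript. The tabs API is passed in as a parameter here (instead of referencing chrome.tabs directly) purely so the handler logic can be exercised outside a browser:

```javascript
// Guard snippet the injected code runs first, so repeated executeScript
// calls on the same page are no-ops (onUpdated fires several times per load).
const guardSnippet = `
  if (!window._I_already_executed) {
    window._I_already_executed = true;
    // ... actual content-script work goes here ...
  }
`;

// tabsApi is injected as a dependency so the handler can be tested
// without a browser; in an extension, pass chrome.tabs.
function makeOnUpdatedHandler(tabsApi) {
  return (tabId, changeInfo) => {
    if (changeInfo.status !== 'complete') return; // wait for the load to finish
    tabsApi.executeScript(tabId, { code: guardSnippet }, () => {
      // In a real extension, read chrome.runtime.lastError here and ignore
      // it: it just means we had no permission for this particular tab.
    });
  };
}

// In the background page you would wire it up as:
// chrome.tabs.onUpdated.addListener(makeOnUpdatedHandler(chrome.tabs));
```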
Now there's the contentScripts.register() API, which lets you programmatically register content scripts.
browser.contentScripts.register({
  matches: ['https://mail.google.com/*'],
  js: [{file: 'content.js'}]
});
This API is only available in Firefox but there's a Chrome polyfill you can use. The new scripts will work exactly like your regular content scripts in manifest.json.
For a more comprehensive solution you could look into my webext-domain-permission-toggle and webext-dynamic-content-scripts modules, which don't apply directly to your use case but can be helpful to anyone who wants to drop the <all_urls> permission and inject content scripts on demand.

Retrieving files from blog media entries

The tool I'm building needs to pull data from IBM Connections Ideation Blogs. I therefore use the Connections API with basic authentication to read blog entries. This goes well until the description contains images. When I ask the API to provide media resources for the blog, it does not show any entries for the /BLOGS_UPLOADED_IMAGES location, the one containing images uploaded through the blog's rich-text editor. The user I use in my API call is the same user who created the blog entries and uploaded the pictures.
However, the API call DOES contain images I publish using the API and a POST request to the blog's media entry collection. This is where the next problem appears. Those Atom entries for images contain various links, one of them with rel="enclosure", of which the API documentation (link) tells me to "Use the web address in the href attribute to obtain the binary content of the file". However, my calls to this address are always answered with a 404 response code.
Another URL in the Atom entry (this time of the element) is described by the same documentation (see link above) as: "Provides access the document's media. The following operation is supported: GET: Use the web address to obtain the media." When I make a call to this URL, as always with basic authentication credentials attached, the response contains the HTML of the Connections login form, so API authentication does not seem to be supported on this URL. This is only the case for non-public communities, which require authentication, of course; if the picture is publicly available, all works just fine.
Am I missing something? Is there another way to retrieve the actual image from a blog's media entry through the API? Are manually uploaded pictures never contained in the media entries result, or is this a bug?
It now magically works using the link with rel="enclosure" from the Atom entry. I might have gotten something wrong with authentication, I guess (although I can't tell what I'm doing differently now than before).
Problem remaining: pictures uploaded through the rich-text editor into the folder /BLOGS_UPLOADED_IMAGES do not appear in the blog's media feed.
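
For reference, a request against the enclosure link with HTTP Basic authentication could look like the sketch below. The credentials, URL, and variable names are placeholders, not values from the Connections API.

```javascript
// Build an HTTP Basic Authorization header value.
// Node.js: Buffer; in browsers, btoa(`${user}:${password}`) works for
// ASCII credentials.
function basicAuthHeader(user, password) {
  return 'Basic ' + Buffer.from(`${user}:${password}`).toString('base64');
}

// Example usage (Node 18+ has a global fetch):
// const res = await fetch(enclosureHref, {
//   headers: {
//     Authorization: basicAuthHeader('blogUser', 'secret'),
//     Accept: 'image/*',
//   },
//   redirect: 'manual', // a redirect to the login form means the
//                       // credentials were not accepted for this URL
// });
```

Getting the login form's HTML back instead of image bytes, as described above, is the tell-tale sign that the server handled the request as an unauthenticated browser session rather than an API call.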

Temporary Internet File (read from SharePoint via IE). How do I find the URL?

When an IE user clicks the link of a file residing in SharePoint (and user selects "read-only" access), the file is copied to Temporary Internet Files, my application is opened and passed that filename as a parameter. I'm trying to implement a "check out" button in my app so that a user can switch from read-only mode to check-out and edit mode. I haven't been able to find a way to learn the SharePoint URL for the file. On check-out and edit, it's no problem: there's a registry entry that maps the file on my system to the URL in SharePoint; I haven't found anything like that for read-only files.
EDIT:
There is a URL column available in Windows Explorer, but when I display that column (in Explorer), all the values are blank. Also, I can't find any file information api call that will return this value for me.
UPDATE:
I found some promising calls in wininet.lib: FindFirstUrlCacheEntryEx (and "next"), along with FindFirstUrlCacheGroup (and "next"). They didn't seem to return any data, and from what I read, these only enumerate the cache used by my application's own WinINet API calls, not IE's.
I also tried running through the list of COM calls that IE made into my app when the file was opened, to see which interfaces it was checking whether I supported. One that looked promising was the IMonikerProp interface, which, when I implemented it, did get called; however, it only provided me with the MIME type property, the class ID of my app, and the TrustedDownload flag.
Maybe this site has the answer: How SharePoint communicates with Word via ActiveX
Another option could be to hook into the SharePoint ItemCheckingOut event (Example 1, Example 2). In the event you could get the URL info and create a temporary file with it, or pass the info off to your program.
Link to ActiveX control info - Maybe this control is launched on everything? You might be able to tap into that.
Does the created temporary file have a good name (does it match what the file is actually called, or is it gibberish)? If it's a good name, you can probably search for it. Otherwise, without knowing the site, folder, and filename, you might not be able to, unless there is some additional data about the file somewhere.
