Limit Printing of Web Page - web

I'm in the beginning phases of developing a website. I want to be able to limit the amount of printing of web pages of circulars. These will be in an image format and usually consist of between 2 and 16 web pages. The circulars change each week.
Is there a way to limit the user to only 1 or X number of prints for each page and for each week? Is this easier done with standard web development or can it be done even easier in a content management system such as WordPress?

Not unless you have full control over the client-side.
You can TRY to prevent that SAME computer (via cookie) from navigating to the same page twice.
If you are giving the user a unique ID to access the circular pages, you can mark that ID as already having displayed the pages.
But there is simply no way to make sure that the user can't invoke the browser's print function.
One trick, which a JS-savvy user could easily get around: tie into the page's print events. Answers elsewhere on that topic describe just how poorly those events are supported, and not consistently across browsers. If the browser has already had that event fire, navigate away, or change the @media print rule to return CSS that makes the whole page display:none (or some similar trickery).
As far as the actual print dialog ("Copies: x"), there's nothing you can do.
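For illustration, here is a minimal sketch of that trick, assuming a localStorage flag in place of a cookie and assuming the browser fires the beforeprint/afterprint events (support is uneven). Anyone with JavaScript disabled, or who clears storage, walks right past it.
<style id="print-block"></style>
<script>
// Count completed print dialogs for this browser ("printCount" is a made-up key).
window.addEventListener("afterprint", function () {
  var count = Number(localStorage.getItem("printCount") || 0) + 1;
  localStorage.setItem("printCount", count);
});
// Before each print, blank the page in print media once the limit is reached.
window.addEventListener("beforeprint", function () {
  if (Number(localStorage.getItem("printCount") || 0) >= 1) {
    document.getElementById("print-block").textContent =
      "@media print { body { display: none; } }";
  }
});
</script>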

Related

In XPages Mobile App / Mobile Control, how to make picklist

I have two pages, one page for input and another page for the options. How do I send a value from one page to another with XPages Mobile Controls, or is there another way to achieve this?
See my sample pages:
1. Page 1: User input
http://i1248.photobucket.com/albums/hh490/dannysumarnach/page_1_form_user_input.jpg
2. Page 2: Picklist
http://i1248.photobucket.com/albums/hh490/dannysumarnach/page_2_user_choice_PickList.jpg
Note: using the built-in typeahead is not possible.
Regards,
Danny
The built-in typeahead is missing the Dojo tundra.css file when using the single-page app control. The file ships with Dojo; it's just not being included. Import it to get the typeahead working.
I'm not sure what you mean about passing a value from one page to another. You can submit data to a document and open it in another page, add it to a scoped variable, or add a parameter to the URL. All of these options will work.
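As a rough illustration of the scoped-variable option, the picklist page could stash the chosen value in sessionScope and redirect back, and the input page could read it out again. This is only a server-side JavaScript sketch; the names pickedValue, pickedDoc, and userInput are made up for the example.
// On the picklist page, when the user picks a value (SSJS):
sessionScope.put("pickedValue", pickedDoc.getItemValueString("Name"));
context.redirectToPage("userInput");
// On the input page, e.g. as the field's computed default value (SSJS):
sessionScope.get("pickedValue");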
Have a look at my blog post on this topic. There are a couple of gotchas to get around, most notably ensuring that the page with your document data source gets recalculated at the correct time. I'm working on a NotesIn9 episode about it.
Part 3 covers a couple of amendments to get it working with existing documents and includes a sample page that will work in the Extension Library Demo db. Note the extra view that needs to be created and other details in Part Two.
http://www.intec.co.uk/xpages-mobile-controls-and-value-pickers-part-three-client-side-approach-extended/

Web: The system will record the length of time the user displayed each page

I have this requirement:
The system will record the length of time the user displayed each page.
While trivial in a rich-client app, I have no idea how people usually go about tracking this.
Edit (by John Hartsock):
I have always been curious about this, and it seems to me that it could be possible with the use of document.onunload events to accurately capture start and stop times for all pages. Basically, as long as a user stays on your site, you will always be able to get the start and stop time for each page except the last one. Here is the scenario:
User enters your site. -- I have a start time for the home page.
User goes to page 2 of your site. -- I have a stop time for the home page and a start time for page 2.
User exits your site. -- How do you get the final stop time for page 2?
The question becomes is it possible to track when a user closes the window or navigates away from your site? Would it be possible to use the onunload events? If not, then what are some other possibilities? Clearly AJAX would be one route, but what are some other routes?
I don't think you can capture every single page viewing, but I think you might be able to capture enough information to be worthwhile for analysis of website usage.
Create a database table with columns for: web page name, user name, start time, and end time.
On page load, INSERT a record into the table containing data for the first three fields. Return the ID of that record for future use.
On any navigation, UPDATE the record in the navigation event handler, using the ID returned earlier.
You will end up with a lot more records with start times than records with both start and end time. But, you can do these analyses from this simple data:
You can count the number of visits to each page by counting start times.
You can calculate the length of time the user displayed each page for the records that have both start and end time.
If you have other information about users, such as roles or locations, you can do more analysis of page viewing. For example, if you know roles, you can see which roles use which pages the most.
It is possible that your data will be distorted by the fact that some pages are abandoned more often than others.
However, you certainly can try to capture this data and see how reasonable it appears. Sometimes in the real world we have to make do with less than perfect information, but it may be enough.
Edit: Either of these approaches might meet your needs.
1) Here's the HTML portion of an Ajax solution. It's from this page, which has PHP code for recording the information in a text file -- easy enough to change to writing to a database if you wish.
<html>
<head>
<title>Duration Logging Demo</title>
<script type="text/javascript">
var oRequest;
var tstart = new Date();
// ooooo, ajax. ooooooo ...
if (window.XMLHttpRequest)
    oRequest = new XMLHttpRequest();
else if (window.ActiveXObject)
    oRequest = new ActiveXObject("Microsoft.XMLHTTP");
function sendAReq(sendStr)
// a generic function to send away any data to the server
// specifically 'logtimefile.php' in this case
{
    oRequest.open("POST", "logtimefile.php", true); // this is where the stuff is going
    oRequest.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    oRequest.send(sendStr);
}
function calcTime()
{
    var tend = new Date();
    var totTime = (tend.getTime() - tstart.getTime()) / 1000;
    var msg = "[URL: " + location.href + "] Time Spent: " + totTime + " seconds";
    sendAReq('tmsg=' + msg);
}
</script>
</head>
<body onbeforeunload="calcTime();">
Hi, navigate away from this page or refresh this page to find the time you spent seeing
this page in a log file on the server.
</body>
</html>
2) Another fellow proposes creating a timer in Page_Load. Write the initial database record at that point. Then, on the timer's Elapsed event, do an update of that record. Do a final update in onbeforeunload. Then, if for some reason you miss the very last onbeforeunload event, at least you will have recorded most of the time the user spent on the page (depending upon the timer Interval). Of course, this solution will be relatively resource-intensive if you update every second and have hundreds or thousands of concurrent users. So, you could make it configurable that this feature be turned on and off for the application.
This has to be done with some JavaScript. As others have said, it is not completely reliable, but you should be able to get more than enough accurate data.
This will need to call your server from javascript code when the page is unloaded. The javascript event to hook is window.unload. Or you can use a nicer API, like jQuery. Or you could use a ready made solution, like WebTrends, or Google Analytics. I think that both record the length of time that the page was displayed.
Good web analytics is pretty hard, and it becomes harder if you have to handle a lot of traffic. You should try to find an existing solution rather than reinventing your own.
I've put some work into a small JavaScript library that times how long a user is on a web page. It has the added benefit of more accurately (not perfectly, though) tracking how long a user is actually interacting with the page. It ignores time that a user switches to different tabs, goes idle, minimizes the browser, etc. The Google Analytics method suggested has the shortcoming (as I understand it) that it only checks when a new request is handled by your domain. It compares the previous request time against the new request time, and calls that the 'time spent on your web page'. It doesn't actually know if someone is viewing your page, has minimized the browser, has switched tabs to 3 different web pages since last loading your page, etc.
As multiple others have mentioned, no solution is perfect. But hopefully this one provides value, too.
Edit: I have updated the example to include the current API usage.
http://timemejs.com
An example of its usage:
Include in your page:
<script src="http://timemejs.com/timeme.min.js"></script>
<script type="text/javascript">
TimeMe.initialize({
    currentPageName: "home-page", // page name
    idleTimeoutInSeconds: 15 // time before user considered idle
});
</script>
If you want to report the times yourself to your backend:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("POST", "ENTER_URL_HERE", true);
xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
var timeSpentOnPage = TimeMe.getTimeOnCurrentPageInSeconds();
xmlhttp.send(timeSpentOnPage);
TimeMe.js also supports sending timing data via websockets, so you don't have to try to force a full http request into the document.onbeforeunload event.
In a web based system, there's no way to reliably do this. Sure, you can record each page that a user displays and record the length of time between each view but what happens when they close the browser on the last page they're displaying on? That's just one of dozens of problems with this requirement.
What about an AJAX-based approach? It only works when JavaScript is enabled on the client, but sending a POST to some script every 15 seconds will get you a reasonable amount of granularity.
There are also more complicated "reverse-AJAX" things you might be able to do... but I don't know much about them.
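A minimal sketch of that polling idea, assuming a /track endpoint on your server (the URL and field names are made up for the example). navigator.sendBeacon is used where available, since it queues the request even while the page is being torn down:
var pageStart = Date.now();
setInterval(function () {
    var seconds = Math.round((Date.now() - pageStart) / 1000);
    var payload = "page=" + encodeURIComponent(location.pathname) + "&seconds=" + seconds;
    if (navigator.sendBeacon) {
        navigator.sendBeacon("/track", payload);
    } else {
        // Fallback: a plain asynchronous POST.
        var req = new XMLHttpRequest();
        req.open("POST", "/track", true);
        req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        req.send(payload);
    }
}, 15000);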
You can use onunload to do what you need. Have it send an AJAX request to your server to update a database. You may want to return false and then call document.close once the AJAX request has completed, so that the page doesn't quit prematurely and the AJAX call doesn't get discarded.
In the database you'll just want to store the page, the IP address, the time of the event, and whether it was an onload or onunload event.
That is all there is to it.
I recently made an example of recording the time spent on an HTML page. A refresh does not interrupt the recording, but closing the page does. I use sessionStorage to store the seconds spent on the page: on refresh I put the count into sessionStorage, but on close I can no longer get it from sessionStorage, so the time starts again at 0.
Here is my code:
<body>
time spent: <div id="txt"></div>
</body>
<script>
// Requires jQuery for the document-ready wrapper.
$(function () {
    statisticsStay();
});

function statisticsStay() {
    var second = 0;
    // Restore the count saved before the last refresh, if any.
    if (sessionStorage.getItem('testSecond') != null)
        second = parseInt(sessionStorage.getItem('testSecond'), 10);
    var timer = setInterval(function () {
        second++;
        document.getElementById('txt').innerHTML = second;
    }, 1000);
    // Save the count so a refresh can pick up where it left off.
    window.onbeforeunload = function () {
        sessionStorage.setItem('testSecond', second);
    };
}
</script>

What is the window message for a loaded website in Internet Explorer?

I currently have this message handler line:
MESSAGE_HANDLER(`WM_SETREDRAW`, onSetRedraw)
I would like to know: is there any window message (e.g. WM_???) that corresponds to "a website has finished loading inside IE"?
I want to use it in place of the WM_SETREDRAW above, so that when IE finishes loading a website, onSetRedraw gets called.
If no one answers, go Googling for an application "spy" tool, which will tell you which messages your program receives. Make a one-line app which launches the browser and spy on that.
Alternatively, what API are you using to launch the browser? Look at its return value.
Btw, I strongly suspect that you will only get a message when the browser is launched, not every time it loads a new page (or even the first page).
You may not be able to do what you want very easily. A possibility might be to search for the window by title bar, get its handle, walk its control list until you get to the status bar, and check its text in a loop until it is done.
A further possibility, if this is only for yourself, would be to get an open-source browser which uses the MSIE rendering engine and make a one-line change at "the right place in the code" to send a message to your app every time a new page is loaded.

How does the Back button in a web browser work?

I searched the Web about this question but I found nothing:
What is the logic of the back button? What is happening when we hit the back button on a Web browser?
I really would like to understand more about that.
Your web browser keeps a stack (or list, if you will) of the web pages that you have visited in that window. Let's say your home page is search.example and from there you visit a few other websites: video.example, portal.example, and news.example. Upon visiting the last one, the list looks like this:
search.example -> video.example -> portal.example -> news.example
                                                     ^
                                                     |
                                                     current page
When you press the Back button, the browser takes you back to the previous page in the list, like this:
search.example -> video.example -> portal.example -> news.example
                                   ^
                                   |
                                   current page
At this point you can press Back again to take you to video.example, or you can press Forward to put you at news.example again. Let's say you press Back a second time:
search.example -> video.example -> portal.example -> news.example
                  ^
                  |
                  current page
If you now go to, say, example.com, the list changes to look like this:
search.example -> video.example -> example.com
                                   ^
                                   |
                                   current page
Note that both portal.example and news.example are gone from the list. This is because you took a new route. The browser only maintains a list of the pages you visited to get to where you are now, not a history of every page you've ever been to. The browser also doesn't know anything about the structure of the site you're visiting, which can lead to some surprising behavior.
You're on a shopping site (shop.example, as a short example) that has categories and subcategories of products to browse through. The site designer has thoughtfully provided breadcrumbs near the top of the window to allow you to navigate through the categories. You start at the top page of the site, click on Hardware, then Memory. The list now looks like this:
search.example -> shop.example -> shop.example/hw -> shop.example/hw/mem
                                                     ^
                                                     |
                                                     current page
You want to go back to the Hardware category, so you use the breadcrumbs to go up to the parent category instead of using the Back button. Now the browser list looks like this:
search.example -> shop.example -> shop.example/hw -> shop.example/hw/mem -> shop.example/hw
                                                                            ^
                                                                            |
                                                                            current page
According to the site structure, you went backward (up a level), but to the browser you went forward because you clicked on a link. Any time you click on a link or type in a URL in the address bar, you are going forward as far as the browser is concerned, whether or not that link takes you to a page that you've already been to.
Finally, you want to return to the main site page (shop.example). You could use the breadcrumbs, but this time you click the Back button -- it seems obvious that it should take you up one level, right? But where does it take you?
It's initially confusing to many users (myself included, when I happen to do exactly this) that it takes you "down" a level, back to the Memory category. Looking at the list of pages, it's easy to see why:
search.example -> shop.example -> shop.example/hw -> shop.example/hw/mem -> shop.example/hw
                                                     ^
                                                     |
                                                     current page
To go back to the main page using only the Back button would require two more presses, taking you "back" to the Hardware category and finally to the main page. It seems so obvious to us programmers what's going on, but it surprises the heck out of regular users all the time because they don't realize that the browser doesn't know anything about the hierarchical structure of whatever website they happen to be on.
Wouldn't it be great if browsers let site designers program the Back button to do the obvious thing (take you up a level) rather than whatever it does now?
A commenter asked whether the browser reloads the page or simply displays it out of its local cache.
The answer is it depends. Site designers can specify whether the browser should cache the page or not. For pages that are set as non-cached, the browser reloads the page from the server when you press Back, as though it was the first time you are visiting it. For cached pages, the browser displays it out of the cache, which is much faster.
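For example, a page served with the response header below tells the browser not to store a copy at all, so pressing Back generally forces a fresh request (the exact back/forward-cache behavior still varies a little by browser):
Cache-Control: no-store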
I like to think of it as re-issuing my last request. If you performed a simple GET, it would probably return the same thing it did last time (minus dynamic content). If you had done a POST, you're going to resubmit the form (after confirmation) to the server.
I think the easiest way to explain this is in pseudocode:
class Page:
    String url, ...
    Page previous, next    # implements a doubly-linked list

class History:
    Page current           # current page

    void back():
        if current.previous == null:
            return
        current = current.previous
        refresh()

    void forward():
        if current.next == null:
            return
        current = current.next
        refresh()

    void loadPage(Page newPage):
        newPage.previous = current
        current.next = newPage    # remove all the future pages
        current = current.next
        display(current)
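The same idea as a small, runnable JavaScript sketch, using an array plus an index instead of a doubly-linked list and representing each page by just its URL:
function History() {
    this.pages = [];    // visited pages, oldest first
    this.current = -1;  // index of the page being displayed
}
History.prototype.loadPage = function (url) {
    // Navigating forward throws away any "future" pages.
    this.pages = this.pages.slice(0, this.current + 1);
    this.pages.push(url);
    this.current = this.pages.length - 1;
};
History.prototype.back = function () {
    if (this.current > 0) this.current--;
    return this.pages[this.current];
};
History.prototype.forward = function () {
    if (this.current < this.pages.length - 1) this.current++;
    return this.pages[this.current];
};
// Example: mirrors the shopping-site walkthrough above.
var h = new History();
["search.example", "shop.example", "shop.example/hw", "shop.example/hw/mem"]
    .forEach(function (u) { h.loadPage(u); });
h.loadPage("shop.example/hw"); // the breadcrumb click counts as going forward
console.log(h.back());         // "shop.example/hw/mem" -- down a level, not up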
The basic idea is to return to the last page or logical site division.
Looking at Gmail, you'll see that if you do a search, click a message, and then hit the Back button, it takes you back to the search that you did.
When you click it, most browsers will either resend the last HTTP request or load the page from cache, if the browser caches sites.
A history of pages viewed is kept in a stack-like form. If you go back past the top pages (A, B, C, for instance) and then go to a different page D, you cannot get to B again by hitting Forward.
As a developer, you should make sure that your webapp works no matter how the browser handles the Back button :-) Does it resend the request? Is the new request identical to the old one, or does it differ in any way? Will the browser ask the user to confirm a re-POST? Which elements of the page will be re-requested and which loaded from cache? Will the browser respect my cache-control headers?
Answers to these questions depend on the make and version of the browser and on user settings. Design your software so that all this doesn't matter that much.
Sorry for not very direct answer, but there are some straight answers here already.
A browser stores the pages you visit, and when you press the Back button it doesn't send a request to the server for the previous page; instead it just looks in the cache where it stored the pages. It follows a LIFO rule, which is why pressing the Back button first gives you the page you opened last.
There is something I want to add as a complement.
When you hit the Back button in your browser (Alt+Left in Chrome), the browser usually just loads the cached HTML file from its history. It doesn't send another GET request to the server, which is why going back on some e-commerce sites after submitting a password can throw an error at you.
It's true that some web pages don't allow themselves to be cached, but that's rare; in that case, or when the cache has expired, the browser sends the GET request instead of using the HTML from the cache.
The browser loads the last viewed page before the current one, and then follows any redirection that might happen?
I kind of seem to be missing the point of the question.

MOSS'07 - Page View Web Part Slows Menu Hovers

In our MOSS '07 site we have a page that contains just a Page Viewer web part in it that points to a site on another server. However, I've noticed that on that page (and any others that have a Page Viewer web part on it) our drop down menus and hover effects are super slow and completely max out the CPU on the visitor's computer (process is IExplorer.)
Through testing, I was able to determine that it doesn't matter what URL the web part is pointed to...just having the Iframe on the page seems to cause it (just setting the viewer to load Google's homepage--which is probably the simplest site I know--still causes the problem). If I go and remove the web part, the menus start functioning just fine again.
I attached a debugger to the process and stepped through the Menu_HoverStatic and called functions and it seems to have a hard time when assigning panel.scrollTop to zero in the PopOut_Show function.
Has anyone else noticed this? ...perhaps found a solution to it? I can't find where to edit PopOut_Show function on our server (I think it's a resource in one of the .NET DLLs) or else I'd just comment out that line as I don't think it's really important anyway...at least on our site.
I really like the ability to have web pages from another server hosted in our SharePoint site, but the performance on the hovers is agonizing... and, honestly, unacceptable. Depending on the resources of the user's computer, the hover effects can take 15 seconds to complete at times!!!!
Any suggestions would be really appreciated!
SharePoint's built-in JavaScript is probably making the browser wait until the IFrame within the Page Viewer Web Part has completely loaded. If you can see a status bar message that says "Please wait while scripts are loaded..." when you attempt to click on the page then that's definitely the problem.
Thank you for your reply. I was actually able to discover what the problem was (my apologies for not sharing it here with everyone when I did!).
The problem wasn't so much from having the IFRAME on the page; it was because I had set the zone to be 100% width and height. Because of a bug in IE, trying to calculate the location of the dropdown was throwing an error (I don't remember exactly which JavaScript function or call was to blame, but I remember stepping through it with the debugger). I believe it had something to do with "location offset" or something like that. My take at the time was that it was trying to position the dropdown menu on the screen, and the calculation for positioning it was failing.
To get around it, I had to add a JavaScript routine that programmatically sets the height of the zone after the page loads. Setting the height explicitly prevented the dropdown problem in the menus. Of course, it wasn't ideal, because if a user resizes the window the IFRAME (or, more precisely, the zone it's in) doesn't resize with it. But it was a suitable band-aid for the problem.
I'm hoping that IE 8 will fix this when it's released.
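A sketch of the kind of workaround described above: after the page loads, measure the window and give the zone an explicit pixel height. The element ID here is made up; the real ID would come from the rendered SharePoint markup. Hooking the resize event as well would remove the band-aid's main drawback.
function sizePageViewerZone() {
    var zone = document.getElementById("PageViewerZone"); // hypothetical ID
    if (!zone) return;
    // Use an explicit pixel height instead of 100% to sidestep the IE layout bug.
    zone.style.height = (document.documentElement.clientHeight - zone.offsetTop) + "px";
}
window.onload = sizePageViewerZone;
window.onresize = sizePageViewerZone; // optional: keep it sized when the window changes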
