In my Google Analytics stats I get two rows for what looks like the same URL, with different stats:
/my-page | xxx views
/my-page/ | xxxx views
So the question is: are they counted as different pages, with each URL's statistics completely independent of the other, or does visiting the first link also increment the views of the second, and vice versa?
Also, is this normal behavior, or can I somehow consolidate the stats under just one URL? The first one, for example.
Analytics treats them as two different pages, so the stats are independent of each other.
To see the aggregate statistics in Analytics, you can create a filter that includes both pages and look at the totals row; for example, filtering the Pages report with the regex ^/my-page/?$ matches both variants.
I'm creating three approved-software lists for my company with SharePoint. One is the general list for all associates; the next is the restricted list, which will contain software like Wireshark that only certain people should have access to; and the last is the master list, which will be a combination of the other two.
What would be ideal is being able to add the software to the master list and have it update the other two lists automagically. The unique key will of course be the software title. The field that determines which list a row is added to is the [group] field. (This is where the uncertainty comes in.) There are four values that can go into this [group] field: restricted, general, engineering, media.
I would like the rows with "restricted" to go to the restricted list, obviously, and everything else to go to the general list.
I'm very new to SharePoint (~1 week) and I'm trying to simplify this process as much as possible. I'm continuing to read and watch the videos to learn more; however, I understand this is a complex application. I thought I'd pose this question to people with more experience than myself to find out whether it's even possible. If not, I'll be able to change my train of thought sooner.
Thank you for your time
This is probably a question for https://sharepoint.stackexchange.com/
But -- what I would do in your situation is use only one list and create multiple views.
Each view can be filtered by different criteria (your group column in this case); then, instead of having three distinct lists, you can display or link to each view separately (they all get their own URI in SharePoint).
This way you only ever have to update one list, and you avoid the overhead/complexity of trying to copy into other lists with event receivers, workflows, or something else.
If someone reading this needs instructions on views:
You can create/switch views from the 'List' or 'Library' tab when you're viewing the list. Then when you add the list to a web part page, you can select which view to use in the web part properties window.
My program currently goes through the pages of a website gathering information. How do I set my loop to end when I have visited all of the website's pages?
Is there some way of knowing the number of webpages in any site?
Or do I have to compare a block of pages I have visited, e.g. 10, and if the pages are checked in that order again, conclude it's repeating itself?
I'm sure there has to be a better way of knowing when to stop.
Keep track of the pages visited (perhaps by keeping the visited URLs in a set), and when about to scan a new page, check whether it has already been visited.
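In Python that idea might look like the sketch below. This is a minimal, illustrative version only: the crawl name is mine, and real code would also need error handling, politeness delays, and normalization of things like trailing slashes and #fragments.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href targets of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url):
    visited = set()               # every URL already scanned
    pending = deque([start_url])  # URLs found but not yet scanned
    site = urlparse(start_url).netloc
    while pending:
        url = pending.popleft()
        if url in visited:
            continue              # seen before: skip, don't loop forever
        visited.add(url)
        page = urlopen(url).read().decode("utf-8", errors="replace")
        parser = LinkParser()
        parser.feed(page)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == site:  # stay on this site
                pending.append(absolute)
    return visited  # the loop ends once no unvisited pages remain
```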
Breadth first search
Depth first search
Check these two algorithms. Think of the site as a graph whose nodes are the pages and whose edges are the links from one page to another: two pages A and B are neighbors (A → B) if there's a link from page A to page B. Then just implement whichever of the two algorithms you find more appropriate for your case. Both of them have their respective stop conditions. In both cases your search should start with the root page(s), which is usually default.ext or index.ext or something similar (ext = html, asp, aspx, jsp, php, whatever).
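To make the difference concrete, here is a hedged sketch over an abstract link graph; get_neighbors is a hypothetical function that returns the pages a given page links to. The only difference between the two traversals is which end of the work list the next page comes from:

```python
from collections import deque

def traverse(root, get_neighbors, breadth_first=True):
    """Visit every page reachable from root exactly once.

    get_neighbors(page) must return the pages that page links to.
    The stop condition is the same for BFS and DFS: the work list
    is empty, i.e. no reachable page is left unvisited.
    """
    visited = set()
    work = deque([root])
    order = []                 # pages in the order they were visited
    while work:
        page = work.popleft() if breadth_first else work.pop()
        if page in visited:
            continue
        visited.add(page)
        order.append(page)
        work.extend(n for n in get_neighbors(page) if n not in visited)
    return order
```

With breadth_first=True you visit the root's neighbors before their neighbors (BFS); with False you follow each branch to the end first (DFS). Either way the loop stops when the work list runs dry.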
You may want to pre-process the website with a SitemapGenerator and only visit the webpages included in the sitemap.
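For instance, if the site publishes a standard sitemap at /sitemap.xml (an assumption; not every site does, and large sites may use a sitemap index instead), a sketch for pulling the URL list out of it with the Python standard library:

```python
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Namespace defined by the sitemap protocol (sitemaps.org).
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(site_root):
    """Return every page URL listed in the site's sitemap.xml."""
    with urlopen(site_root.rstrip("/") + "/sitemap.xml") as resp:
        tree = ET.parse(resp)
    # The sitemap protocol lists each page URL inside a <loc> element.
    return [loc.text for loc in tree.getroot().iter(SITEMAP_NS + "loc")]
```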
Is there some way of knowing the number of webpages in any site
No. All you can do to examine a website is make HTTP GET (or HEAD) requests and examine the responses. That will tell you whether the URI is a valid identifier for a resource, and get you a representation of that resource. You cannot know in advance which requests will indicate a valid resource, nor can you practically generate all possible URIs to perform an exhaustive search.
At best, all you can do is to start with a URI and find all the resources reachable from that URI, by examining resources that contain links to other resources, and then following those links.
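To illustrate, a single probe of that kind might look like this sketch (resource_exists is an illustrative name; a real crawler would also distinguish redirects, timeouts, and servers that reject HEAD):

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def resource_exists(url):
    """Probe a URI with an HTTP HEAD request (no body is downloaded)."""
    try:
        with urlopen(Request(url, method="HEAD")) as resp:
            return resp.status < 400
    except (HTTPError, URLError):
        return False  # 4xx/5xx response, or host unreachable
```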
I'd like a list of the top 100,000 domain names sorted by the number of distinct, public web pages.
The list could look something like this:
Domain Name 100,000,000 pages
Domain Name 99,000,000 pages
Domain Name 98,000,000 pages
...
I don't want to know which domains are the most popular. I want to know which domains have the highest number of distinct, publicly accessible web pages.
I wasn't able to find such a list in Google. I assume Quantcast, Google or Alexa would know, but have they published such a list?
For a given domain, e.g. yahoo.com, you can google-search site:yahoo.com; at the top of the results it says "About 141,000,000 results (0.41 seconds)". This includes subdomains like www.yahoo.com and it.yahoo.com.
Note also that some websites generate pages on the fly, so they may in effect have an infinite number of "pages": a page is computed when asked for and forgotten as soon as it is sent, and each can contain a link to the next page. From the outside there is no real difference between generated and static pages, except that the generated ones are unbounded in number, which you can't discover without requesting them all.
Keep in mind a few things:
Many websites generate pages dynamically, leaving a potentially infinite number of pages.
Pages are often behind security barriers.
Very few companies are interested in announcing how much information they maintain.
Indexes go out of date as they're created.
What I would be inclined to do for specific answers is mirror the sites of interest using wget and count the pages.
wget -m --wait=9 --limit-rate=10K http://domain.test
Keep it slow, so that the company doesn't recognize you as a Denial of Service attack.
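Once the mirror finishes, counting the pages is straightforward; a sketch in Python, assuming wget dropped the mirror into a local domain.test directory:

```python
import os

def count_mirrored_pages(mirror_dir="domain.test"):
    """Count the HTML files wget saved under the mirror directory."""
    total = 0
    for _dirpath, _dirs, files in os.walk(mirror_dir):
        # Pages saved without an .html/.htm extension would need a looser test.
        total += sum(1 for f in files if f.endswith((".html", ".htm")))
    return total

print(count_mirrored_pages())
```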
Most search engines will also let you search their index by site, though the counts on the result pages are good for little more than a rough order of magnitude, and there's no way to know how much of the site they've actually indexed.
I don't see at a glance whether they expose their databases, but while you're down the search-engine path you might also be interested in the Seeks and YaCy search engine projects.
The only organization I can think of that might (a) have the information easily available and (b) be friendly and transparent enough to want to share it would be the folks at The Internet Archive. Since they've been archiving the web with their Wayback Machine for a long time and are big on transparency, they might be a reasonable starting point.
In my new project I have used a lot of Content Query Web Parts (CQWPs), and I found that the site became slower and slower as the number of CQWPs increased. The questions I want to ask are:
Does a CQWP take a lot of server resources, making the site slow for visitors?
If I want to query lists and customize the style of the output, can I do that without a CQWP?
Take a look at this link; maybe you have to use custom XSLT with a function to filter the output of the CQWP.
http://blog.mastykarz.nl/extending-content-query-web-part-xslt-custom-functions/
For your first question
My answer is: it depends on several things, not only on the number of CQWPs on the page.
Let me explain:
The CQWP has a lot to do, like fetching data from a list (which may be a standard SharePoint list or a custom list). Resource utilization depends on the logic applied to fetch the data: both the amount of data to be fetched and the complexity of that logic matter for server resource utilization.
For example, if you have a class that performs complex logic to get the data (comparisons, if/else conditions, foreach loops) and the amount of data in the list is large, then it will obviously take more resources from the server.
I hope you get my point.
For your second question
My answer is: you can use a CQWP or a DVWP (Data View Web Part), but be sure you know when to use which one.
To get a better idea of both, take a look at this link:
http://www.sharepointblog.co.uk/2012/06/data-view-web-part-vs-content-query-web-part/
In a SharePoint publishing site I will have some banners that are Web Parts and can have any HTML content inside them. I have a requirement to count clicks on those banners. The banners will have links to external sites.
I am not sure where to store the counters for individual banners. A custom list is the first thing that came to my mind, but I am not sure how it will behave under concurrent access. Can I lock a list (or list item) and do the counter increment? What will happen to other list accesses while it is locked? Will they fail or just wait?
Are there any alternatives to storing counters somewhere else ?
There are lots of places, here are the two most popular:
The property bag (most likely on the Web), holding a number you increment
Inside a list
Of these, I have successfully done it with a list on our blogging solution; you can see it at http://community.zevenseas.com/blogs, where I'm tracking views for each post. I took this approach because I like to see more than just a number, e.g. referrer, IP, etc.
Things to keep in mind:
You need to keep a close eye on the number of items you are storing. SharePoint doesn't like lots of items in a list. To manage them, put them in folders: a folder for each banner, and then subfolders for each month.
I would keep a list with each of the banners (just their name or more) in it, then create a second list to store the views. In the list where you store the views, have a lookup back to the list storing the banners. On the original banner list you can then create a new column that "counts" the number of views related to each banner item.
Again, be very careful about the number of items you are expecting, but this works pretty nicely for us.
Don't forget that a small database will let you store page hits against whatever you want. You can then call a stored proc and that database "just takes care of it". You don't have to worry about access and concurrency (because you used a transaction, riiiight!).
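As an illustrative sketch, with SQLite standing in for whatever database and stored procedure you'd actually provision, the increment is a single atomic statement inside a transaction:

```python
import sqlite3

conn = sqlite3.connect("banners.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS banner_hits ("
    "banner TEXT PRIMARY KEY, hits INTEGER NOT NULL DEFAULT 0)"
)

def record_click(banner_id):
    """Atomically bump the click counter for one banner."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "INSERT INTO banner_hits (banner, hits) VALUES (?, 1) "
            "ON CONFLICT(banner) DO UPDATE SET hits = hits + 1",
            (banner_id,),
        )

record_click("summer-promo")  # hypothetical banner id
```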
A SharePoint list is easy because it's there out of the box, but consider that lists carry a lot of overhead both for adding values and even for reading. They are also editable by a site administrator, which may be fine, depending on the number of administrators you have. A list is easier to provision than a new database, so in the end you do need to weigh the two options carefully.
Just because SharePoint has a hammer does not mean everything is a nail :)