Changing page location after Google Analytics setup - web

The current website structure is set up such that all the ASPX pages are in the main folder. It's becoming increasingly difficult to maintain, so I would like to create new folders and move the relevant pages into them. This would change a URL from, say:
http://mydomain.com/DoStuff.aspx
to
http://mydomain.com/DoingFolder/DoStuff.aspx
I fear that this will skew the Google Analytics results. Is this change recommended? If so, is there a way to link the page locations from before and after the change?
Also, what would happen when I implement URL rewriting? Would I run into the same issue again?

In general I think it is a good idea to add the folders: your users can see from the URL which section of the site they are in, it helps the search engines figure out the areas, and who knows, you may even get a (small) SEO benefit out of it.
What I would advise is to set up a second profile in Analytics and then add a filter which removes the folder name from the request URI, leaving you with the same flat structure in your reports as you have currently. (NB: do this under a new profile with the same tracking code, to avoid major mess-ups in your main profile that you can't undo.)
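For illustration only (the folder name is just the one from the question, and the exact field labels may differ slightly in your version of Analytics), the filter on that duplicate profile would be a custom Search and Replace filter along these lines:

    Filter Type:    Custom filter > Search and Replace
    Filter Field:   Request URI
    Search String:  ^/DoingFolder/
    Replace String: /

That way the main profile records the new folder-based URLs while the second profile's reports stay comparable with your historical flat structure.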
Cheers
Z

Related

Netsuite Sitebuilder My Reference Checkout with 3D Secure

I'm creating a new sitebuilder website, and have successfully set up "My Reference Checkout". However, we haven't been able to get it integrated with 3D secure, meaning that no customers can check out (unless we remove all the security, which we don't want to do).
Looking at NetSuite's documentation on SuiteAnswers here, it gives us some very generic pointers about how we have to create some .ss and .js files and modify some existing front-end code in the custom checkout. However, AFAICT the examples are not useful: they don't tell us where to put these .ss and .js files or where to modify the front end, and some of the files don't even seem to be related.
Does anyone have any better documentation on how to integrate with 3D Secure, or can someone please point me in the right direction?
Any help is greatly appreciated!
Thanks!
This is a pretty broad question, and an old one, so you may have found the answer already, but I would recommend reading and searching this resource: https://devsc.publishpath.com. It uses your NetSuite login and password. There is a blog post at http://devsc.publishpath.com/Default.aspx?p=2079436&Add=Show+Post&Key=Show+Post&ContentID=5226425&PostID=1200364&shortcut=developing-your-first-custom-suitecommerce-advanced-module-part-1 that goes through setting up the reference cart, if you are using it.
Outside of that, here is a brief overview. The SSP and SS pages are set up using a utility in NetSuite under Setup > Site Builder > (sub-heading Web Site Management) SSP Applications. Here you set where in the NetSuite file system your SSP and SS pages are. The files themselves should be somewhere in your file structure under your hosting root. They load in order of precedence and have to be deployed. For certain activities (like logging in or checking out) you need to set up touchpoints.

Document management on Redmine: Anyone use the DMSF plugin or find an easy way to manage docs in the Files tab with a wiki as a front end?

I'm looking to use Redmine for document management. I know that Redmine is not ideal for this task but there is already a lot of content on the site so I'd like to utilize it if possible.
Redmine currently does not have a great Documents module. The files we've uploaded appear to be tied to the specific page they were added on, and there doesn't seem to be a way to move them to another page (short of downloading and re-uploading them to the proper page).
Idea 1
I see there is a Files section, which could work as a central repository (and you can upload documents per release). However, is there a way to set up a nice-looking 'front-end' page that automatically updates based on new submissions to the Files tab? I envision this front end as a simple wiki page with the document name, a short description and a link to the file posted in the Files tab.
There are so many documents uploaded to various pages on the Redmine site that I would only do the whole download and re-upload of files if there were a way to automatically update the 'front-end' wiki.
Idea 2
I see there is a DMSF plugin for Redmine. Has anyone used this before, and has it solved your document management issues? I'd like to hear your feedback. Even if DMSF doesn't totally solve my issue, anything is better than what I have now.
Thanks!
In my opinion the DMSF module is a perfect companion for Redmine. We have adopted it in our company. You can easily deal with document versions, WebDAV access, custom approval workflows and notifications of document modifications, with the extra value of being well integrated with Redmine features (roles, dynamic links in wiki and issue text and notes).
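Regarding Idea 1: if you decide to stay with the built-in Files tab instead, one option is a small script that regenerates the 'front-end' wiki page from the Files list via Redmine's REST API (run it from cron or after each upload). This is only a rough sketch: it assumes your Redmine version exposes the Files and Wiki Pages REST APIs, that you have an API key with the right permissions, and that the project uses Textile wiki formatting; the base URL, project identifier and wiki page name below are placeholders.

    import requests

    REDMINE = "https://redmine.example.com"   # placeholder Redmine base URL
    API_KEY = "your-api-key"                  # key with file + wiki permissions
    PROJECT = "myproject"                     # placeholder project identifier
    WIKI_PAGE = "Document_Index"              # wiki page to (re)generate

    headers = {"X-Redmine-API-Key": API_KEY}

    # List the files uploaded to the project's Files tab.
    files = requests.get(
        f"{REDMINE}/projects/{PROJECT}/files.json", headers=headers
    ).json()["files"]

    # Build a simple Textile table: name, description, download link.
    lines = ["|_. Document |_. Description |_. Link |"]
    for f in files:
        lines.append(
            f"| {f['filename']} | {f.get('description', '')} "
            f"| {f.get('content_url', '')} |"
        )

    # Overwrite the front-end wiki page with the regenerated table.
    requests.put(
        f"{REDMINE}/projects/{PROJECT}/wiki/{WIKI_PAGE}.json",
        headers=headers,
        json={"wiki_page": {"text": "\n".join(lines)}},
    )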

Scraping a website where all the data is locked in an XML database?

I am trying to download the full archive files of this website (http://www.afghanislamicpress.com/).
I tried using DeepVacuum (http://www.hexcat.com/deepvacuum/index.html) but the site is dynamic (I think that's the right word).
So you submit a form that gives the article archive, but it only spits out 5 at a time (i.e. per page) and then you have to click through. I want to download all the individual articles for the full data set, but don't want to manually click through.
I know there's some easy way to do this, but not entirely sure how.
Any suggestions for a novice at doing data scraping etc?
The most straightforward solution would be to contact the owner of the website and request their permission to republish their articles, and ask for a digital copy.
You can certainly automate pulling down content that is paged, but it requires some programming effort. The best tool for that imho is HTML Agility Pack.
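If you end up scripting it yourself and are more comfortable with Python than .NET, the same paging idea looks roughly like this. It is only a sketch, since I don't know the site's real form fields or paging parameters: the archive URL, the "page" parameter and the CSS selector below are placeholders you would replace after inspecting the archive form in your browser.

    import time
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup   # pip install requests beautifulsoup4

    BASE = "http://www.afghanislamicpress.com/"   # site from the question
    ARCHIVE = urljoin(BASE, "archive")            # placeholder: real form URL differs

    page = 1
    while True:
        # Placeholder parameters: inspect the real archive form to find the
        # actual field names (search terms, date range, page number, etc.).
        resp = requests.get(ARCHIVE, params={"page": page})
        soup = BeautifulSoup(resp.text, "html.parser")

        # Placeholder selector: adjust to whatever marks the article links.
        links = [urljoin(BASE, a["href"]) for a in soup.select("a.article-link")]
        if not links:
            break   # no more results, stop paging

        for link in links:
            article = requests.get(link)
            filename = link.rstrip("/").rsplit("/", 1)[-1] or "index"
            with open(filename + ".html", "w", encoding="utf-8") as f:
                f.write(article.text)

        page += 1
        time.sleep(1)   # be polite to the server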
Please be sure to comply with the copyright and licensing terms of the content you are downloading.

Google index - will Google index my logs?

I have some .txt log files where I record some important activities for my site.
These files are NOT referenced from any link within my site, so I am the only one who knows the URLs
(they contain the current date in the filename, so I have one for each day).
Question: will Google index these kinds of files?
I think Google indexes only the pages whose URLs appear on the site.
Can you confirm my assumption? I just do not want others to find the links through Google etc. :)
In theory it shouldn't. If the files aren't linked from anywhere, Google shouldn't be able to find them. However, I'm not sure whether URLs can make their way into the index by virtue of having the Google Toolbar installed; I've definitely had some unexpected things turn up in search engines. The only safe way would be to password-protect the folder.
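If the logs happen to be served by Apache, that protection is only a few lines of configuration. A sketch, assuming the logs live in a /logs/ directory and that you have already created an .htpasswd file (on IIS or other servers you would use their own authentication settings instead):

    # /logs/.htaccess  (hypothetical location; adjust to wherever the logs live)
    AuthType Basic
    AuthName "Private logs"
    AuthUserFile /path/to/.htpasswd
    Require valid-user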
Google cannot index pages that it doesn't know exist, so it won't index these, unless someone submits the URLs to Google or places them on some website.
If you want to be sure, just disallow indexing for the files (in /robots.txt).
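For example, assuming the logs were kept in a /logs/ folder, the robots.txt at the site root would just need the following (bear in mind that robots.txt is itself publicly readable, so it reveals the path it is meant to hide):

    User-agent: *
    Disallow: /logs/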
Best practice is to use robots.txt to keep the Google crawler away from files you don't want to show up in the index.
This description from Google Webmaster Tools is very helpful and leads you through the process of creating such a file:
https://support.google.com/webmasters/answer/6062608
Edit: as was pointed out in the comments, there is no guarantee that robots.txt is respected, so password-protecting the folders is also a good idea.

Software for building a sitemap

I have to create a content inventory for a website that doesn't have a sitemap. I do not have access to modify the website, and the site is very large. How can I build a sitemap of that website without having to browse it entirely?
I tried Visio's sitemap builder, but it fails big time.
Let's say, for example, I want to create a sitemap of Stack Overflow.
Do you know of any software to build it?
You would have to crawl it entirely, searching every page for unique links within the site, and then put them in an index.
Then, for each unique link you find within the site, you need to visit that page and search for more unique links.
You could use a tool such as HtmlAgilityPack to easily fetch pages and extract the links from them.
I have written an article which touches on the extracting links part of the problem:
http://runtingsproper.blogspot.com/2009/11/easily-extracting-links-from-snippet-of.html
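If you would rather script it in Python than .NET, the same breadth-first approach looks roughly like this. It is only a sketch (the start URL is a placeholder), and for a very large site you would want to add a politeness delay, a page limit and error logging:

    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup   # pip install requests beautifulsoup4

    START = "https://example.com/"          # placeholder start URL
    DOMAIN = urlparse(START).netloc

    seen = {START}
    queue = deque([START])

    while queue:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue

        # Extract every link, resolve it against the current page, and keep
        # only URLs on the same domain that we haven't seen yet.
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == DOMAIN and link not in seen:
                seen.add(link)
                queue.append(link)

    # "seen" is now your content inventory: one entry per unique internal URL.
    for link in sorted(seen):
        print(link)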
I would register all your pages in a database and then just output them all on one page (PHP + SQL). Maybe indexing software could help you as well! First of all, just make sure all your pages are linked up, and still submit the sitemap to Google!
Just googled and found this one.
http://www.xml-sitemaps.com/
Looks pretty interesting!
There is a pretty big collection of XML sitemap generators (assuming that's what you want to generate -- not an HTML sitemap page or something else?) at http://code.google.com/p/sitemap-generators/wiki/SitemapGenerators
In general, for any larger site, the best solution is really to grab the information directly from the source, for example from the database that powers the site. By doing that you can get the most accurate and up-to-date Sitemap file. If you have to crawl the site to get the URLs for a Sitemap file, it will take quite some time for a larger site and it will load the server during that time (it's like someone visiting all pages in your site). Crawling the site from time to time to determine if there are crawlability issues (such as endless calendars, content hidden through forms, etc) is a good idea, but if you can, it's generally better to get the URLs for the Sitemap file directly.
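Whichever way you collect the URLs (a crawl or a database query), turning them into a Sitemap file is the easy part. A minimal sketch, assuming you already have the URLs in a list (the two URLs below are placeholders):

    from xml.sax.saxutils import escape

    # Placeholder list: in practice this comes from your crawl or a DB query.
    urls = ["https://example.com/", "https://example.com/about"]

    with open("sitemap.xml", "w", encoding="utf-8") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for url in urls:
            f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
        f.write("</urlset>\n")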
