I have a page with this URL: example.com/someurl.
At some point I set up a 301 redirect from it to example.com/some-url.
But I would like to make sure that the historical data for the first URL gets associated with my new URL.
Because if I export my data, I'll get two entries, which implies two different pages, when actually this is only one page.
Thank you :)
Alas, this doesn't work too well. Your best chance would be to use a virtual pageview and pass in the old URL to the pageview tracking on the new page:
ga("send", "pageview", "/someurl");
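If the page sends more hits than just the pageview, a minimal sketch of an alternative (assuming the standard analytics.js tracker) is to rewrite the tracker's page field once, so every subsequent hit from that page reports the old path:
// Set the page path once; the pageview and any later hits inherit it
ga("set", "page", "/someurl");
ga("send", "pageview");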
Alternatively, you can create a filter in your GA view settings that rewrites the new URL to the old URL.
This will work for newly incoming hits. Data that has already been collected will not change, and you cannot really consolidate it after the fact.
You can use a filter in the view settings:
Filter Type: Search and Replace
Field: Request URI
Search String: /some-url
Replace String: /someurl
New hits will then be recorded under the old URL and merge with your historical data from that point on (the filter does not rewrite data that has already been processed).
I'm looking for information about how the Live URL (Absolute URL on the back end) regenerates and what triggers it to update.
Using Kentico 12 SP MVC, I have a pretty normal NewsArticle page type that uses a custom URL pattern of "/news/{% UrlSlug %}" to route to an article. It was previously using AliasPath, but because the content editors wanted the ability to create slugs longer than the 50-character limit, we created a custom field for it.
On any page that I create from scratch, and on many newer pages that I've edited, this works out just fine, and changing the UrlSlug to the desired (very long) slug updates the URL. On a huge number of older articles, though, changing the UrlSlug appears to have no effect on the Live URL. On many, the URL has changed to just "/news/", and on others it's still showing the old URL (based on NodeAlias). I can still reach the page by hand-typing the UrlSlug-based URL, but I've been using TreeNode.AbsoluteUrl, which is based on the Live URL (afaik), to generate menus and sitemap items, and those still refuse to update on a large portion of our articles.
Hopefully someone knows how to force them to all regenerate or at least has a clue why some would be working and others not.
The "Live URL" displayed on the Page "General" tab is sourced from CMS.DocumentEngine.DocumentURLProvider.GetAbsoluteLiveSiteURL(TreeNode node);
Eventually that calls out to DocumentURLProvider.GetUrlInternal(TreeNode node);
You can override this with a custom DocumentURLProvider by registering a custom provider.
This would let you call base.GetUrlInternal(node) and see what it is returning for the affected pages; a sketch follows.
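A minimal sketch of such a provider, assuming Kentico's standard RegisterCustomProvider attribute and the GetUrlInternal signature referenced above (the class name and event-log source are illustrative):

using CMS;
using CMS.DocumentEngine;
using CMS.EventLog;

[assembly: RegisterCustomProvider(typeof(DebugDocumentURLProvider))]

public class DebugDocumentURLProvider : DocumentURLProvider
{
    protected override string GetUrlInternal(TreeNode node)
    {
        string url = base.GetUrlInternal(node);

        // Log what the default implementation returns for each node,
        // so the problem articles can be compared against working ones.
        EventLogProvider.LogInformation("URLDEBUG", "GETURL",
            node.NodeAliasPath + " -> " + url);

        return url;
    }
}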
One conditional that is checked in the original DocumentURLProvider is NodeIsContentOnly, which lives in the CMS_Tree table.
So I would check and make sure that all the pages with issues have this set to true (1 in the DB column); otherwise the traditional Portal Engine Live URL generation takes effect.
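A quick way to spot offenders, assuming your articles live under /news/ (adjust the path to your tree):

SELECT NodeAliasPath, NodeIsContentOnly
FROM CMS_Tree
WHERE NodeIsContentOnly = 0
  AND NodeAliasPath LIKE '/news/%'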
At no point is there any "regeneration" of Live URLs. What is displayed comes from the Page Type configuration (the URL Pattern), the node in the database, and the value populating the macro expression in your URL Pattern.
In order to open a document with an XPage, we have to call a URL with the following format:
http://server/database/name_of_xpage.xsp?documentId=xxxx
In one of my databases, the documents to open contain a "title" field.
I'd like to open the document by using a URL like this: http://server/database/title_value
How do I force the Domino server to answer such a URL and open the related XPage (like it works on the Stack Overflow web site)?
There are a few options:
- Administrative solution: you can configure Domino to translate URLs at the server level.
- An XAgent, a form-opening script, or a LotusScript agent to redirect to the correct URL.
- A form property to redirect to an XPage, described below.
All you need is to make a view with a short name or alias, "key" for example. The first sorted column should contain your key value. The form property of every document should define which XPage to open on the web. Then a URL like http://server/database/key/title_value will work, with one small caveat.
Create a web site substitution rule to redirect http://server/database/* to
http://server/database/yourxpage.xsp?openPage=*
You could create an XAgent (I called mine "open"), and take in smaller parameters to open the document. For example, let's say your main XPage "form" is called "xpDoc". Here is your XAgent (code in afterRenderResponse):
// Read the title from the query string, e.g. ?title=title_value
var val = context.getUrlParameter('title');
// Look up the document in a view whose first sorted column contains the title
var nd:NotesDocument = database.getView('viewname').getDocumentByKey(val);
// Redirect to the real XPage, passing the document's UNID
context.redirectToPage('xpDoc.xsp?documentId=' + nd.getUniversalID() + '&action=openDocument');
So, using this simple XAgent, you can use URLs to open documents, like so:
http://server/database/open.xsp?title=title_value
I just tried it out in a development DB I have, and it seems to work pretty well. You can always make the XAgent name and the "title" parameter shorter to make the link shorter.
Take note that with this option you won't need to update the NAB with any web site rules. Since you want to link to documents, I'm assuming you have more than a handful of documents in your application. Adding web site rules in the NAB would not, I think, be a good option, as it would add a lot of extra maintenance. With the above method, everything can be done within your application.
Is it possible to hide the URL of the action of opening or editing a document in an XPage?
I am trying to avoid this:
http(example)://notesdev1.my_company.com/po/po.nsf/%24%24OpenDominoDocument.xsp?databaseName=CN=My_Company_NotesDev1/O=HCI!!PO%5CPO-data.nsf&documentId=E879C68A9A88F6DD87257BC6005A0748&action=editDocument
I don't think you can use site documents for URLs that open specific documents. I started out customizing the 'Default Action' and 'Document id' of the document data source. I then switched and tried rebuilding the URL in the beforePageLoad event and using context.redirectToPage, but it still shows the long URL.
I would like to know if I can have control over the entire URL and still direct pages as I see fit. I know that I can't stop a user from bookmarking, but if I can control the URL, I can prevent the user from bookmarking intermediate steps in a wizard, and also avoid ugly URLs.
You can use a web site document to mask those long URLs. Your short URL needs to have the DocId visible, and then you can map it with a substitution rule.
The other option is to compute your data source. You set it to ignore request parameters and use your own:
http://yourserver/some.nsf/thexpage.xsp?doc=unid
Then use the context to retrieve the UNID and compute the data source's document ID from it, as sketched below.
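A minimal sketch of such a computed data source, assuming a form named "PO" (the var, form, and parameter names are illustrative):

<xp:this.data>
    <!-- Ignore the standard documentId/action request parameters -->
    <xp:dominoDocument var="document1" formName="PO"
        ignoreRequestParams="true"
        action="openDocument"
        documentId="#{javascript:context.getUrlParameter('doc')}">
    </xp:dominoDocument>
</xp:this.data>

With this in place, http://yourserver/some.nsf/thexpage.xsp?doc=unid opens the document without ever exposing the $$OpenDominoDocument URL.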
I've read a bit on the matter of friendly URLs and I'm a little unsure as to what is better.
I currently have my website using a structure of http://www.domain.com/page.php?id=2
I am using the record id to determine the content of the page. My record ids are numeric and increment for new pages added. The content of existing pages can change completely over time but still uses the same record id (this is a CMS, so the client may do this).
The way I understand it, I have two options for friendly URLs:
http://www.domain.com/page/2
http://www.domain.com/some-text-describing-the-page
Now because I identify the content by the record id, I would assume the first option would make more sense.
My client seems to want option two.
After some reading I found two conflicting points.
As per Tim Berners-Lee (the architect of the WWW), you want a URI that has the potential to remain the same 2 months, 2 years, 200 years from now. So you DO NOT want to use a page title or something similar for your pages. If you change your page's content, you are either forced to change the content and leave the URI alone, or to change the URI and be stuck with dangling links. You can read his article here: http://www.w3.org/Provider/Style/URI
However, a number of other people on the internet (with no known authority, to me) clearly state that you need a descriptive yet short URI for the best SEO value. From what I read, this is mostly for the purpose of backlinks and having keywords in the anchor text, since people often use the link itself as the anchor text. So having keywords in the link itself helps search engines know what the link is about, even without a custom title.
It seems to me the difference has to do with long term VS short term.
Am I grasping this correctly?
If I am to use a slug-style URI as defined by the user, do I just have to let my user type whatever they want into a field and check it against the current database to see if it exists? If so, am I supposed to handle static links by running a query for the known record id and then using the result to generate the URL, which would just be rewritten back to the format http://www.domain.com/page.php?id=2?
It seems to me that would be a lot of extra overhead.
I would suggest something in the middle of those two:
http://www.domain.com/page/2/some-text-describing-the-page
or without page:
http://www.domain.com/2/some-text-describing-the-page
You can still get the page id from the URL, and there is a title in it as well! Even more importantly, you're still able to serve the correct content even when the page title changes later.
So think about a situation like this: a user creates a page, it receives Id=4, and its title is My great title. From that information the URL is generated, e.g. http://www.domain.com/page/4/my-great-title. After 2 months the user changes the title to This title is better than the last one!. The URL changes as well, to http://www.domain.com/page/4/this-title-is-better-than-the-last-one. However, 4 is still within the URL, so you're able to show the right content! You can also check whether the rest of the URL is current and redirect (a 301 would be the best) to the new one, to let search engines know that the URL changed.
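A minimal sketch of that lookup-and-redirect logic in PHP, assuming the server rewrites /page/4/my-great-title to page.php?id=4&slug=my-great-title, and assuming hypothetical get_page_by_id() and render_page() helpers:

<?php
// id and slug come from the rewritten friendly URL
$id   = (int) ($_GET['id'] ?? 0);
$slug = $_GET['slug'] ?? '';

$page = get_page_by_id($id);   // hypothetical DB lookup by record id
if ($page === null) {
    http_response_code(404);
    exit;
}

// Stale or missing slug: 301-redirect to the canonical URL
if ($slug !== $page['slug']) {
    header('Location: /page/' . $id . '/' . $page['slug'], true, 301);
    exit;
}

render_page($page);            // hypothetical renderer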
With a lot of search engines, you can find the string you searched for in the URL.
However, http://drugcompare.destinationrx.com/Home.aspx does not let me do this. Whatever I search for, the resulting URL is http://drugcompare.destinationrx.com/DrugCompare.aspx.
Is there any way I can find out whether I can search the website by appending something to the end of the URL, like "?query=searchstring", instead of using the form provided on the page? Basically, I need a unique URL for each search.
The website you pointed at uses POST to send the data for its search query, which means you won't be able to see it in, or append it to, the URL bar. The reason for that is either security, or that the search query it generates is a complex object or too long to fit in a URL. Websites such as search engines use GET; with GET you can append your search query to the URL by following the syntax the site generates.
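The difference is easiest to see in the form markup itself (a hypothetical sketch; check the real form's method attribute in your browser's developer tools):

<!-- GET: submitting "aspirin" lands on /search?query=aspirin -->
<form action="/search" method="get">
    <input type="text" name="query">
    <button type="submit">Search</button>
</form>

<!-- POST: the fields travel in the request body; the URL stays /search -->
<form action="/search" method="post">
    <input type="text" name="query">
    <button type="submit">Search</button>
</form>

If the site's form uses method="post" (as this one does), there is no query string you can reconstruct by hand unless the server also happens to accept the same parameters via GET.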