Getting URL for item using GetSiteData in SharePoint

I'm using web.Site.MakeFullUrl((new SPFieldLookupValue(row["FileRef"] as string).LookupValue)) to get the URL for each result of a GetSiteData query.
For some items this works fine, but for others I get results like
http://server/Lists/My%20Message%20Board/Test/9_.000 - which always 404s. The URLs always end in n_.000.
Does anybody know why this is happening and how to get the correct URL?

The items generating the weird URL are not items in a document library, so there is no file associated with the actual SPListItem. The "normal" URLs are URLs to files in a doc lib; the weird ones are URLs to items in a regular list. Just check the type of item in the web part / control / XSL that renders the results.
If it is an item from a regular list (with the weird URL), just replace it and make the URL look like so:
http://server/Lists/My%20Message%20Board/Test/9_.000 should be:
http://server/Lists/My%20Message%20Board/Test/AllItems.aspx?ID=ITEMID

The advice above, by Colin, seems sound, except that the 'good' URL should reference DispForm.aspx. For example, the URL
http://server/Lists/My%20List/2%5F.000
should be formatted like this:
http://server/Lists/My%20List/DispForm.aspx?ID=2
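Putting the two answers together, here is a minimal C# sketch of that check; the "_.000" suffix test and the trick of reading the item ID out of the file name are assumptions based purely on the example URLs above, so adjust them to your data.

// Minimal sketch based on the answers above. Assumes "row" is a DataRow from
// SPWeb.GetSiteData and that non-document items always come back as "<id>_.000",
// as in the example URLs in the question.
string fileRef = new SPFieldLookupValue(row["FileRef"] as string).LookupValue;
string url = web.Site.MakeFullUrl(fileRef);

if (url.EndsWith("_.000", StringComparison.OrdinalIgnoreCase))
{
    // Regular list item: swap the dead "<id>_.000" segment for the display form.
    string folder = url.Substring(0, url.LastIndexOf('/'));
    string fileName = url.Substring(url.LastIndexOf('/') + 1);   // e.g. "9_.000"
    string itemId = fileName.Split('_')[0];                      // e.g. "9"
    url = folder + "/DispForm.aspx?ID=" + itemId;
}

If the item lives in a folder, you may need the list's root DispForm.aspx rather than the folder path; the snippet just keeps the simple last-segment swap shown in the examples above.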

Getting thumbnails in OpenSearchServer search results

I need an alternative to Google Custom Search for a website I look after; it has to be something that will crawl a website, index it, allow fiddling with priorities, and then allow search queries via REST or something similar and return XML or JSON etc. It needs to run on a Windows Server instance.
So, I'm up and running with http://www.opensearchserver.com/ and it seems to do the trick, but I can't, for the life of me, work out how to get thumbnail images into the results. I've searched the documentation and read everything I could, but can't find out how to do this (or how to get my head around it).
I'm crawling standard web pages and they all have thumbnail metadata, which I'm assuming should be parseable somehow and included in the JSON results?
Any pointers at all would be very helpful, thanks!
I figured this out; in case anyone else is struggling, here's how I did it. The answer is in the documentation, it's just not spelled out that simply.
Read: http://www.opensearchserver.com/documentation/faq/crawling/how_to_extract_specific_information_from_web_pages.md - it contains the method
Assume you set up a 'web crawler' index.
Assuming you're using a meta thumbnail like this:
<meta name="thumbnail" content="http://my_cdn.com/news/images/29637.jpg">
Go into Schema / Fields. Add a new field called 'thumbnail' with index no, store yes, vector no, analyser Text, copy of blank. Save that.
Now go to Schema / Parser list and edit the HTML parser. Go to 'Field mapping' and add a new regex for the thumbnail in the HTML. We map from 'htmlSource' to 'thumbnail' with the matching regex.
My imperfect regex (that works though) is:
htmlSource -> linked in: thumbnail -> captured by:
(?s)<meta name="thumbnail" content="(.*?)">
Now SAVE this and go to Crawl / Manual crawl, enter a URL that has a thumbnail, and then check whether the field now appears in the list below once the page has been read. If not, check your regex, and check that you actually saved the HTML parser changes.
To get the thumb in your results, simply add the fieldname to the JSON you send with the query:
"returnedFields": [ "
"url",
"thumbnail"
],
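For what it's worth, here is a hedged C# sketch of posting such a query over the REST API; the endpoint path (/services/rest/index/{index}/search/field), the index name web_crawler and the query term are assumptions based on the OpenSearchServer REST documentation, so adjust them to your own installation.

// Hedged sketch: POST a search query that asks for the extra "thumbnail" field.
// The endpoint path, index name and query term are assumptions, not taken from
// the answer above; adapt them to your OpenSearchServer instance.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ThumbnailSearch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        string body = @"{
            ""query"": ""news"",
            ""start"": 0,
            ""rows"": 10,
            ""returnedFields"": [ ""url"", ""thumbnail"" ]
        }";

        var response = await client.PostAsync(
            "http://localhost:9090/services/rest/index/web_crawler/search/field",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The JSON results should now include the mapped thumbnail field per document.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}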

Associate two pages' analytics data

I have a page with this URL: example.com/someurl.
For some reason, I made a 301 redirect from it to example.com/some-url.
But I would like to make sure that the historical data for the first URL gets associated with my new URL.
Because if I export my data, I'll have two entries, which means two different pages, when actually this is only one page.
Thank you :)
Alas, this doesn't work too well. Your best chance would be to use a virtual pageview and pass the old URL to the pageview tracking on the new page:
ga("send", "pageview", "/someurl");
Alternatively you can create a filter in your GA view settings that rewrites the new URL to the old URL.
This will work for newly incoming hits. Data that's already collected will not change, and you cannot really consolidate it after the fact.
You can use a filter in the view settings:
Filter Type: Search and Replace
Field: Request URI
Search String: /some-url
Replace String: /someurl
And it merges with historical data.

Friendly URLs when using a Record ID for dynamic content

I've read a bit on the matter of friendly URLs and I'm a little unsure as to which is better.
I currently have my website using a structure of http://www.domain.com/page.php?id=2
I am using the record ID to determine the content of the page. My record IDs are numeric and increment for new pages added. The content of existing pages can change completely over time, but they still use the same record ID (this is a CMS, so the client may do this).
The way I understand it, I have two options for friendly URLs:
http://www.domain.com/page/2
http://www.domain.com/some-text-describing-the-page
Now because I identify the content by the record id, I would assume the first option would make more sense.
My client seems to want option two.
After some reading I found two conflicting points.
As per Tim Berners-Lee (the architect of the WWW), you want a URI which will have the potential to remain the same 2 months, 2 years, 200 years from now. So you DO NOT want to use a page title or something similar for your pages. If you change your page's content you are either forced to change the content and leave the URI alone, or change the URI and be stuck with dangling links. You can read his article here (http://www.w3.org/Provider/Style/URI).
However, a number of other people on the internet (with no known authority to me) clearly state that you need a descriptive yet short URI for the best SEO value. From what I read, this is mostly for the purpose of backlinks and having keywords in the anchor text, since people often use the link itself as the anchor text. So having keywords in the link itself helps search engines know what the link is about without a custom title.
It seems to me the difference has to do with long term VS short term.
Am I grasping this correctly?
If I am to use a slug-style URI as defined by the user, do I just have to allow my user to type whatever they want into a field and check against the current database to see if it exists? If so, am I supposed to anticipate static links by running a query for the known record ID and then use the result to generate the URL, which would just be rewritten back to the format http://www.domain.com/page.php?id=2?
It seems to me that would be a lot of extra overhead.
I would suggest something in the middle of those two:
http://www.domain.com/page/2/some-text-describing-the-page
or without page:
http://www.domain.com/2/some-text-describing-the-page
You can still get the page ID from the URL, and there is a title as well! And what's even more important, you're still able to serve the correct content even when the page title changes later.
So think about a situation like this: a user creates a page, it receives Id=3, and its title is My great title. From that information the URL is generated, e.g. http://www.domain.com/page/3/my-great-title. After 2 months the user changes the title to This title is better than the last one!. The URL changes as well, to http://www.domain.com/page/3/this-title-is-better-than-the-last-one. However, there is still a 3 within the URL, so you're able to show the right content! You can also check whether the rest of the URL is current, and redirect (a 301 would be best) to the new one to let search engines know that the URL changed.
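As a rough illustration of that flow, here is a minimal C# sketch; the in-memory page store, the Slugify helper and the route pattern are hypothetical stand-ins for whatever your CMS actually uses.

// Minimal sketch of the "ID + slug" routing idea: content is looked up by the
// numeric ID, and a missing or stale slug triggers a 301 to the canonical URL.
// The page store, Slugify helper and route pattern are illustrative only.
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class PageRouter
{
    // Stand-in for the CMS database: record ID -> current title.
    static readonly Dictionary<int, string> Pages = new Dictionary<int, string>
    {
        { 3, "This title is better than the last one!" }
    };

    // Matches /page/{id} or /page/{id}/{slug}.
    static readonly Regex Route = new Regex(@"^/page/(?<id>\d+)(?:/(?<slug>[^/]+))?$");

    static string Slugify(string title)
    {
        string slug = Regex.Replace(title.ToLowerInvariant(), @"[^a-z0-9]+", "-");
        return slug.Trim('-');
    }

    public static string Resolve(string path)
    {
        Match m = Route.Match(path);
        if (!m.Success)
            return "404 Not Found";

        int id = int.Parse(m.Groups["id"].Value);
        if (!Pages.ContainsKey(id))
            return "404 Not Found";

        // The ID alone decides which content is shown.
        string canonical = "/page/" + id + "/" + Slugify(Pages[id]);

        // A stale or missing slug gets a permanent redirect to the current URL.
        return path == canonical
            ? "200 OK (render page " + id + ")"
            : "301 Moved Permanently -> " + canonical;
    }

    static void Main()
    {
        Console.WriteLine(Resolve("/page/3/my-great-title"));                         // old title: 301
        Console.WriteLine(Resolve("/page/3/this-title-is-better-than-the-last-one")); // current: 200
    }
}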

How to place search query in the URL?

With a lot of search engines, you can find the string you are searching for in the URL.
However, http://drugcompare.destinationrx.com/Home.aspx does not let me do this. When I search something, the resulting URL is http://drugcompare.destinationrx.com/DrugCompare.aspx no matter what.
Is there any way I can find out whether I can search the website by adding something to the end of the URL, like "?query=searchstring" instead of using the form provided on the page? Basically I need a unique URL.
The website you pointed at uses POST to send the data for its search query, which means you won't be able to see or append it in the URL bar. The reason is either security, or that the search query it generates is a complex object or too long and does not fit in a URL. Websites such as search engines use GET; with GET you can append your search query to the URL by following the syntax the site generates.
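To make the difference concrete, here is a small C# sketch; the target URLs and the "query" parameter name are placeholders, not the real site's API.

// Illustrative only: a GET search carries the term in the URL (bookmarkable),
// a POST search carries it in the request body (the URL never changes).
// The target URLs and the "query" parameter name are placeholders.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class SearchDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // GET: the search term is visible and shareable in the URL itself.
        var getResponse = await client.GetAsync(
            "http://example.com/search?query=aspirin");

        // POST: the term travels in the body, so every search shows the same URL,
        // which is exactly what the asker observed on DrugCompare.aspx.
        var postResponse = await client.PostAsync(
            "http://example.com/DrugCompare.aspx",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["query"] = "aspirin"
            }));

        Console.WriteLine((int)getResponse.StatusCode);
        Console.WriteLine((int)postResponse.StatusCode);
    }
}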

List Schema - URL Syntax

I ran across this a couple of months ago and did not save the link anywhere, unfortunately.
Basically, there is a URL syntax to extract a SharePoint list's basic schema and export it to the browser in XML format. It gives the basic information for the fields and views of the list.
Resolution:
http://blogs.msdn.com/b/kaevans/archive/2009/05/01/getting-xml-data-from-a-sharepoint-list-the-easy-way.aspx
You just have to search with the right combination of words to get the result you need.
For everyone else's sake:
http://<PATH TO SITE>/_vti_bin/owssvr.dll?Cmd=ExportList&List={GUID}
The list GUID can be found by going to the List Settings, then pulling out the GUID from the URL.
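If you want to pull that XML down in code rather than in the browser, a minimal C# sketch might look like this; the site URL and list GUID are placeholders, and it assumes Windows authentication.

// Minimal sketch: fetch the list schema XML using the owssvr.dll URL above.
// The site URL and list GUID are placeholders; credentials assume Windows auth.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class ExportListSchema
{
    static async Task Main()
    {
        var handler = new HttpClientHandler
        {
            // Reuse the current Windows credentials for the SharePoint site.
            Credentials = CredentialCache.DefaultNetworkCredentials
        };
        using var client = new HttpClient(handler);

        string siteUrl = "http://server/sites/mysite";                // placeholder
        string listGuid = "{00000000-0000-0000-0000-000000000000}";   // placeholder

        string url = siteUrl + "/_vti_bin/owssvr.dll?Cmd=ExportList&List="
                     + Uri.EscapeDataString(listGuid);

        // The response is the list's schema (fields and views) as XML.
        string schemaXml = await client.GetStringAsync(url);
        Console.WriteLine(schemaXml);
    }
}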
You might want to take a look at this: http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/01/21/introduction-to-querying-lists-with-rest-and-listdata-svc-in-sharepoint-2010.aspx
Not sure if it's what you are after.
