Removing the "handle" keyword from DSpace URLs

I am not using the Handle server in DSpace 6.1, but by default I can still see the handle keyword in the URL. I tried Tomcat rewrite conditions, but I get a 404 error because those pages don't exist. I am trying to change it to look like http://mywebsite/123456789/23 instead of http://mywebsite/handle/123456789/23

This is far from a trivial change to make. If you are using XMLUI, the place to look at and alter is sitemap.xmap:
https://github.com/DSpace/DSpace/blob/dspace-6_x/dspace-xmlui/src/main/webapp/sitemap.xmap#L257
You will notice that there are roughly ten locations where /handle/ is referenced in the path.
If the underlying problem is that you don't want to register a handle with CNRI, and that you don't want to be stuck with the generic 123456789 in the URL, know that you can also change this prefix to a word instead of a number.
So for example, you can change 123456789 into "internal", so you have item page urls like:
https://rdmc.nottingham.ac.uk/handle/internal/7006
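If you take that route, the prefix is a one-line change in dspace.cfg (a minimal sketch, assuming the stock 6.x configuration; note that items already in the repository keep the handles they were minted with):

handle.prefix = internal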

How to handle URLs in LoadRunner?

I have created/recorded a script in VuGen; however, the URL of the site has changed recently. Is there any way to make it work just by replacing the URL with a parameter?
I have tried replacing the URL with parameters. The new URL is:
http://xsx.xxx.xsx.xxx/test99
The parameters I have tried are below:
NewUrl: http://xsx.xxx.xsx.xxx/
NewHost: test99
I have replaced all instances in the script, and when I run it I get the following error:
Error -27651: Attempted read from an unconnected socket (empty response, no HTTP headers received). URL="http://xsx.xxx.xsx.xxx/scripts/uiServer.dll"
What is the solution for this? Should I record again with the new URL?
Thanks.
I hope I've understood what you're asking for, so here goes. If it's only the URL that has changed, and not the content of the site (which you might require later on in your script), then this is fairly simple to do.
Since you have created the new parameters, ensure that they are both getting their data from the same .dat file, e.g. newurl.dat, which contains the following:
newurl,newhost
http://xsx.xxx.xsx.xxx/,test99
Assign the parameters to the correct columns and have newhost set to "Same line as" newurl. This way it's easier to maintain, I believe.
Now that the parameters have been created and properly assigned in your script, you'll need to change the URL from:
http://xsx.xxx.xsx.xxx/oldtest to {newurl}{newhost}
This needs to be done for all instances where the change has occurred.
Hope this helps with the problem you're having.
Are you certain that the build level has not also changed at the same time as the host? If so, then your new instance may be out of sync with the request model of scripts built against an earlier build. Developers have a habit of including items behind the scenes that do not affect the site visually but do change the structure of the requests. The error you are receiving is common when you attempt to continue a conversation on a dead connection, resulting from a missed dynamic session component which may have been added in the last build.
When in doubt quickly record the second site and take a look at the differences in the requests, even to the point of using WinDiff (included in LoadRunner) for this purpose.

Wayfinder includeDocs parameter in MODX is breaking the snippet

I'm quite stuck on an unexpected problem. I'm trying to use Wayfinder to generate a sitemap for a project. The output of the navigation items is as expected, but I need to include a number of documents in addition to the primary navigation elements.
To do this, I have used the includeDocs parameter.
[[Wayfinder? &startId=`0` &includeDocs=`17,18,19,20`]]
When I do this, I get no output at all. Remove includeDocs and I get the standard nav (expected). Use the param and the output is completely empty.
No idea what I'm doing wrong or what (if any) other setting must be defined in order to make this work.
The includeDocs parameter is very misleading. It should rather be named "onlyIncludeDocs" or "restrictTo", since that is what it does. It also requires the docs you include to be directly accessible from your startId, or alternatively to have the entire path "included".
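For example, if documents 17-20 sit a level or two below the start, the call only produces output once every id on the path from startId is listed as well (the ancestor ids 6 and 12 here are hypothetical):

[[Wayfinder? &startId=`0` &includeDocs=`6,12,17,18,19,20`]]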
I would suggest you create weblink resources directly under your startId and link them to the resources you want to include. That way Wayfinder will pick them up by default. (Note that you may need to handle this in your rowTpl for Wayfinder, since a weblink stores the actual link in its content field.)
If you also want to include the children of the ids you specify, you would probably be better off slightly revising your resource structure.

Custom query string in Domino URL

I need a way to pass a value between two pages using URL query strings, if possible. However, every time I add "?customquery=customvalue" at the end, it ends up at the website's 404 page.
Basically, I want to make it look like this:
https://example.com/somedepartment/sample.nsf/page/hello+world?customquery=customvalue
hello+world is a document that is equivalent to a webpage.
I tried the following, plus a JavaScript snippet that collects the string after the hash sign, and it works:
https://example.com/somedepartment/sample.nsf/page/hello+world#customvalue
However, I couldn't use the hash sign because I was told not to use it and to use another unique symbol instead. I am not aware of any symbol that works the same way as the hash sign. If there is one, please enlighten me.
As it turns out, I was able to find an answer:
https://example.com/somedepartment/sample.nsf/page/hello+world?OpenDocument&RandomParam=sample
Now I can pass values using this format. Basically, the URL has to include the "OpenDocument" command before any custom parameters are added.
This documentation also helps: http://www.ibm.com/developerworks/lotus/library/ls-Domino_URL_cheat_sheet/

Searching Single Pages with Dynamic Content

I have a slight problem I have been trying to address for a client I have been working with. We have 4 sets of single pages that load content from a database using PHP, based upon a GET string that is provided. The generated pages are well optimized for SEO, with alt tags for images and content that we need to be able to search using a search feature.
Now, I had assumed (and everyone knows what assuming gets you) that these pages would by default be searchable by Concrete5's built-in search feature. But it doesn't work. If I search for a word that I know is definitely on one of these pages, even multiple times, no results are found.
How can I make Concrete5 search these pages? If it's not doable by default or with a plugin, then can someone please offer some advice on how to fix this? This is an important feature and must be completed.
EDIT: See my comment below. I still need some help or direction here, as CSE isn't much of an option.
EDIT2: It may be viable for me to install a crawler and a custom search engine to address my problems. I was thinking of using a spider. Any other suggestions on that or other options are much appreciated!
Unfortunately C5 doesn't provide a way to do this -- the only way to tap into the search index is with blocks. And even if you created a phony block just to pass content from the single_page through to the search index, there's no way to say that some content is from one URL while other content is from another URL (which you'd need to do, since your single_page controller is handling many different URLs).
I don't know of a way to achieve what you want to do (and it appears that nobody else does either -- http://www.concrete5.org/community/forums/customizing_c5/make-content-in-single-pages-searchable/ ), other than building your own internal search engine.
EDIT: I just did some digging, and thought that perhaps you could manually insert records into the PageSearchIndex table and specify the searchable content and the desired path there -- but this won't work because it relies on one cID (collection id, a.k.a. page id) per entry -- so you'd only be able to insert one record for the top-level single_page path.
I think the simplest solution here would be to create your own searching infrastructure for your single_pages (like some kind of function in the controller that would return an array of page paths and searchable content for each one), then override the search block and perform an additional search of your single_pages -- then combine the results on the search results page. Or just use Google Site Search for your site, which will actually crawl the pages and hence find your various single_page URLs: https://www.google.com/cse/
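A rough sketch of what that controller helper might look like, assuming a 5.x-era single_page controller; the table, columns, and URL pattern below are hypothetical:

function getSearchablePages() {
    $db = Loader::db(); // Concrete5's ADOdb-style database handle
    $pages = array();
    // Hypothetical table holding the dynamic content these single_pages render.
    $rows = $db->GetAll('SELECT id, title, body FROM dynamic_content');
    foreach ($rows as $row) {
        $pages[] = array(
            // One entry per generated URL, since a single cID covers them all.
            'path'    => '/products?id=' . $row['id'],
            'content' => $row['title'] . ' ' . $row['body'],
        );
    }
    return $pages;
}

An overridden search block could then merge what this returns with the normal index results before rendering.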
Best of luck.
I have not tested this, but maybe you can put a function getSearchableContent() in the single pages controller like you do for blocks. This would return the string to be searched. Would look something like this:
function getSearchableContent() {
    // Compose the search string from the same queried content the page
    // renders, e.g. titles, body text, and image alt text.
    $searchstring = '';
    // ... append the queried content here ...
    return $searchstring;
}
But I don't know if this works for dynamic content. If not, I'd look into C5's search index core classes and try to extend them for your project.

Friendly URLs when using a Record ID for dynamic content

I've read a bit on the matter of friendly URLs, and I'm a little unsure as to what is better.
I currently have my website using a structure of http://www.domain.com/page.php?id=2
I am using the record id to determine the content of the page. My record ids are numeric and increment for new pages added. The content of existing pages can change completely over time but still use the same record id (this is a CMS, so the client may do this).
The way I understand it I have two options for friendly urls:
http://www.domain.com/page/2
http://www.domain.com/some-text-describing-the-page
Now because I identify the content by the record id, I would assume the first option would make more sense.
My client seems to want option two.
After some reading I found two conflicting points.
Tim Berners-Lee (the architect of the WWW) states that you want a URI which has the potential to remain the same 2 months, 2 years, 200 years from now. So you DO NOT want to use a page title or something similar for your pages. If your page's content changes, you are either forced to leave the URI alone despite the new content, or to change the URI and be stuck with dangling links. You can read his article here: http://www.w3.org/Provider/Style/URI
However, a number of other people on the internet (with no known authority, to me) clearly state that you need a descriptive yet short URI for the best SEO value -- from what I read, mostly for the purpose of backlinks and having keywords in the anchor text, since people often use the link itself as the anchor text. So having keywords in the link itself helps search engines know what the link is about even without a custom title.
It seems to me the difference has to do with long term VS short term.
Am I grasping this correctly?
If I am to use a slug-style URI as defined by the user, do I just have to allow my user to type whatever they want into a field and check it against the current database to see if it exists? If so, am I supposed to anticipate static links by running a query for the known record id and then use the result to generate the URL, which would just be rewritten back to the format http://www.domain.com/page.php?id=2?
It seems to me that would be a lot of extra overhead.
I would suggest something in the middle of those two:
http://www.domain.com/page/2/some-text-describing-the-page
or without page:
http://www.domain.com/2/some-text-describing-the-page
You can still get the page id from the URL, and there is a title as well! And, what's even more important, you're still able to serve the correct content even when the page title changes later.
So think about a situation like this: a user creates a page, it receives Id=3, and its title is My great title. From that information the URL is generated, e.g. http://www.domain.com/page/3/my-great-title. After 2 months the user changes the title to This title is better than the last one!. The URL changes as well, to http://www.domain.com/page/3/this-title-is-better-than-the-last-one. However, the 3 is still within the URL, so you're able to show the right content! You can also check whether the rest of the URL is current, and redirect (a 301 would be best) to the new one, to let search engines know that the URL changed.
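A rough PHP sketch of that scheme, assuming a rewrite rule already maps /page/{id}/{slug} to page.php?id={id}&slug={slug}; loadPageById(), slugify(), and renderPage() are hypothetical helpers:

<?php
// page.php -- the numeric id is authoritative; the slug is only cosmetic.
$id   = isset($_GET['id']) ? (int) $_GET['id'] : 0;
$slug = isset($_GET['slug']) ? $_GET['slug'] : '';

$page = loadPageById($id);            // hypothetical lookup by record id
if ($page === null) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// Rebuild the canonical slug from the current title.
$canonical = slugify($page['title']); // hypothetical title-to-slug helper

// If the slug in the URL is stale, 301-redirect to the canonical URL
// so search engines carry the old link's value over to the new address.
if ($slug !== $canonical) {
    header('Location: /page/' . $id . '/' . $canonical, true, 301);
    exit;
}

renderPage($page);                    // hypothetical renderer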
