We are looking to redirect query pages that are dynamically created from a user search query to subject pages, but only if the query exactly matches a subject in our list (the list contains over 1,000 subjects).
Example:
$subjects = array("Marketing", "History", "Management", "Chemistry");
https://example.com/courses/?search=marketing should redirect to https://example.com/courses/subjects/marketing/ given that it exists in our array of subjects.
https://example.com/courses/?search=email+marketing should not redirect, given that it does not exist in our subjects array.
We are looking to perform these redirects on the server side using the htaccess file. Any suggestions on how best to do this?
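With over 1,000 subjects, hard-coding one RewriteRule per subject is impractical. A sketch of one approach using a RewriteMap follows; note that the RewriteMap directive itself is only allowed in the server or virtual-host config, not in .htaccess, so this assumes you can edit that file. The map path is a placeholder, and the map file is assumed to hold one lowercase "key value" pair per line (e.g. `marketing marketing`), generated from your $subjects array:

```apache
# In the server/vhost config (RewriteMap is not allowed in .htaccess):
RewriteMap subjects txt:/path/to/subjects.map

# In the site root .htaccess (or the vhost):
RewriteEngine On
# Only single-word queries: "+" is excluded from the class,
# so search=email+marketing can never match
RewriteCond %{QUERY_STRING} ^search=([^&+]+)$
# Redirect only when the query is a key in the subject map
RewriteCond ${subjects:%1|NOT_FOUND} !=NOT_FOUND
RewriteRule ^courses/?$ /courses/subjects/%1/? [R=301,L]
```

If you cannot touch the server config, an alternative is to do the lookup in PHP instead: check the search term with in_array() against the $subjects array and issue a Location redirect before the page renders.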
I want to create a dynamic subdomain for each category.
E.g. my site is www.bbq.com; when I select the xyz category it should redirect to xyz.bbq.com, when I select abc it should redirect to abc.bbq.com, and so on.
I am using Magento 2.x, PHP 7, MySQL 6.
For these dynamic subdomains I have created a separate store on one website, and the redirect to each particular category now works properly.
When I am redirected to the abc category (abc.bbq.com), I want to show only the products belonging to the abc category, i.e. products filtered by category.
How should the above be achieved?
See my post on creating subdomains dynamically for your categories in Create subdomains on the fly with .htaccess (PHP). The post title is "Wildcard subdomain creation methods".
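For reference, the core of that approach is a wildcard DNS record (*.bbq.com) plus a vhost with ServerAlias *.bbq.com, after which the subdomain can be captured and handed to the application. A minimal sketch (the `category` parameter name is an assumption for illustration, not Magento's API):

```apache
RewriteEngine On
# Skip the main www host
RewriteCond %{HTTP_HOST} !^www\. [NC]
# Capture "abc" from abc.bbq.com
RewriteCond %{HTTP_HOST} ^([^.]+)\.bbq\.com$ [NC]
# Hand the subdomain to the front controller as a parameter (root URL only)
RewriteRule ^$ /index.php?category=%1 [L,QSA]
```

In Magento 2 specifically, it is more common to map each subdomain to a store view by setting MAGE_RUN_CODE and MAGE_RUN_TYPE in the vhost (e.g. via SetEnvIf on the Host header) rather than passing a query parameter.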
How can I get the formatted url from Sitecore Lucene search? I created a custom index and set its root to /sitecore/content/websitename/home.
When the search results are retrieved, the URL comes back as https://hostname/websitename/home/sample.aspx. I would like it to be https://hostname/sample.aspx. Is there any setting in the index config that needs to be updated?
In sites.config I already have rootPath="/sitecore/content/websitename" startItem="/home"
You can get the url in two ways:
For each result from your index, fetch the item and get the url with the LinkManager, as you normally would for any item. This does mean that you need to fetch the items, which will be a performance hit.
Create a computed field in your index to include the url. In your computed field, make sure the correct link is being generated; if not, check your url options and possibly the Rendering.SiteResolving setting (set to true). Verify the results with a debugger (or with Luke, to inspect the index). Remember that if you include the url in the index, you will need to update additional items when an item is renamed (or even when its display name changes, if that is used in the url): all children of the changed item have their urls changed as well at that point.
I know that URL rewriting using .htaccess requires an identifier in the pretty URL by which we identify the page/link to load. But here are a few examples where I can't make out the identifier.
Any ideas how they do it?
http://techcrunch.com/2014/03/15/julie-ann-horvath-describes-sexism-and-intimidation-behind-her-github-exit/
http://techcrunch.com/2014/03/15/why-we-hate-google-glass-and-all-new-tech/
In both the examples above, the portion http://techcrunch.com/2014/03/15/ is constant. Any ideas on how to do this would be welcome.
There's a lookup based on the "category" and "page name". It uses both the date, "2014/03/15", and the name of the post, "julie-ann-horvath-describes-sexism-and-intimidation-behind-her-github-exit", to fetch the dynamic content. That way you don't really need an ID unless you happen to have two posts with the exact same title on the same date. The database fetch is a little more complicated with this method, since the title in the URL isn't always going to be the title in the database: the title text needs to be cleaned of special characters and spaces so that it reads nicely within a URL. For example:
/whats-with-all-of-these-titles-in-urls/
Could have a page title: "What's with all of these titles in URLs"
So you can see the ' is removed, the spaces are changed to -'s, and everything is made lowercase. CMSs handle this by creating what's called a "slug": "whats-with-all-of-these-titles-in-urls" is the slug, while the real title is "What's with all of these titles in URLs". The slug is stored alongside the title in the database and is ensured to be unique, at least within each category. This way the slug works much like a numerical ID and is used, along with the category (though not necessarily), to fetch the page content from the database.
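The slugging step described above can be sketched as follows (a minimal illustration in Python; a real CMS would also transliterate accented characters and enforce uniqueness against the database):

```python
import re

def slugify(title):
    """Reduce a title to a URL-friendly slug: lowercase, drop
    punctuation, and collapse whitespace runs into single hyphens."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)        # "what's" -> "whats"
    slug = re.sub(r"[\s-]+", "-", slug).strip("-")  # spaces -> hyphens
    return slug

print(slugify("What's with all of these titles in URLs"))
# whats-with-all-of-these-titles-in-urls
```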
When you like a page on Facebook, the link to the page now appears in the user's news feed with a query string appended to the end, e.g.:
?fb_action_ids=102187656601495&fb_action_types=og.likes&fb_source=aggregation&fb_aggregation_id=46965925417
This 404s on every single page on my site.
I can change the Facebook 'data-href' parameter on all my pages so the link will work, by appending my own query string (?opendocument; the site runs on Lotus Domino).
This works but means losing all the likes already in place on the page and effectively returning to zero likes for all the pages.
Is there any way I can stop Facebook appending the query string? If not, can I update the 'data-href' parameter on my pages while still maintaining the like count already in place?
Many Thanks in advance for any guidance here
Pete
I have added a Web Address rule to a Search Scope and set the Folder URL to the following, for searching through a single list in a site collection:
http://svrmosstest3/sites/asmtportal/Lists/SearchList
I added this scope to the search dropdown. The search is working fine and returns results from that list only, but it returns one extra item: an entry for the list itself, which is:
http://svrmosstest3/sites/asmtportal/Lists/SearchList/AllItems.aspx
because this will always fall under the Rule URL.
Is there any other method to create a search scope that searches only the items of a single SharePoint list in a site collection? Any pointers from SharePoint experts would be appreciated.
You will most likely need to add a crawl rule to exclude that one page from being added to the index (which will then prevent it from being included in your search scope).