I am trying to build a related-posts feature like the one we see here on Stack Overflow in the bottom right corner. My main difficulty is that the related posts have to be determined at runtime, when loading a particular post.
I am thinking of looking for posts with a similar or identical set of tags, similar titles, and possibly similar sets of keywords in the main post content. The problem is that the more signals you look for, the slower the database will be to return the data.
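To make it concrete, this is roughly the kind of scoring I have in mind (just a rough Python sketch; the field names and weights are placeholders, not an actual schema):

    # Rough sketch: score candidate posts by tag overlap and shared title words.
    # "post" and "candidate" are assumed to be dicts like:
    #   {"id": 1, "tags": {"python", "sql"}, "title": "How do I ..."}

    def related_score(post, candidate):
        # Jaccard similarity of the tag sets (0.0 .. 1.0)
        tags_a, tags_b = set(post["tags"]), set(candidate["tags"])
        tag_sim = len(tags_a & tags_b) / len(tags_a | tags_b) if tags_a | tags_b else 0.0

        # Crude title similarity: fraction of shared lowercase words
        words_a = set(post["title"].lower().split())
        words_b = set(candidate["title"].lower().split())
        title_sim = len(words_a & words_b) / len(words_a | words_b) if words_a | words_b else 0.0

        # Weight tags more heavily than titles; the weights are arbitrary
        return 2.0 * tag_sim + 1.0 * title_sim

    def related_posts(post, candidates, limit=5):
        scored = ((related_score(post, c), c) for c in candidates if c["id"] != post["id"])
        ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
        return [c for score, c in ranked[:limit] if score > 0]

The question is how to do something equivalent efficiently against the database rather than in application code.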
I have also thought about using Google Site Search (we have an account) as Google does understand the relationship between posts very well, but unfortunately the related: parameter is broken according to Google.
I am looking for ideas on how best to achieve this. Have any of you ever done something like this? What's the best way to achieve this kind of thing?
I have a page with categories and a page with goods for each category. How should I divide my API endpoints?
Is it correct to use something like this:
'/api/categories'
'/api/categories/:categoryId'
'/api/categories/:categoryId/goods'
'/api/categories/:categoryId/goods/:itemId'
or should I use a structure like:
'/api/categories'
'/api/categories/:categoryId'
'/api/goods'
'/api/goods/:itemId'
Both options are valid. There is usually no clear answer to questions like this. You can use one style of URL, you can use the other, or you can even use both, with the same resources available at two URLs or with redirects.
The /api/categories/:categoryId/goods/:itemId style has the advantage of a nice hierarchy and more information in the URL itself.
But the /api/goods/:itemId style has the advantage of letting you move goods between categories more easily, or even support multiple categories per item or add tags in the future, etc.
This is really something that you have to decide yourself, basing your decision not only on which URL seems better in itself but also on the particular usage of the API that you're developing - e.g. how likely the categories are to change, or how likely it is that goods will be accessed without knowing the category.
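Just to illustrate that both styles can coexist and resolve to the same underlying data, here is a minimal sketch (Flask is assumed purely for illustration; the data and handler names are made up):

    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Placeholder in-memory data instead of a real database
    GOODS = {
        1: {"id": 1, "name": "Widget", "category_id": 10},
        2: {"id": 2, "name": "Gadget", "category_id": 20},
    }

    # Flat style: the item is addressed directly
    @app.route("/api/goods/<int:item_id>")
    def get_item(item_id):
        item = GOODS.get(item_id)
        if item is None:
            abort(404)
        return jsonify(item)

    # Hierarchical style: the same item is addressed through its category
    @app.route("/api/categories/<int:category_id>/goods/<int:item_id>")
    def get_item_in_category(category_id, item_id):
        item = GOODS.get(item_id)
        if item is None or item["category_id"] != category_id:
            abort(404)
        return jsonify(item)

Which of the two (or both) you expose is exactly the design decision described above.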
I recommend reading the tutorials about API design that were published by Stormpath. See:
https://stormpath.com/blog/fundamentals-rest-api-design
I particularly recommend watching the talks by Les Hazlewood that you can find on the Stormpath website, because he explains a lot of subtle topics like that.
I'm trying to understand how to build a page that retrieves all the images from Instagram that use a specific tag and have a minimum of X likes. The current tools on the web don't filter by number of likes.
I found things like instafeed.js, but it seems impossible to use them at the moment because of the new Instagram API limits.
I think that it should be quite easy to do this, but I don't know how to proceed :/
It should be pretty straightforward, but there is no code to go off in your question. What programming language are you using? You would ideally use their API for this. I'm not sure about the limits, though; which ones are you referring to?
Take a look at their docs
https://www.instagram.com/developer/
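For example, in Python the general idea would look something like this (a sketch only; it assumes the legacy tag endpoint and its response fields are still available to your access token, which depends on the API version and permissions your app has - check the docs above):

    # Sketch: fetch recent media for a tag and keep only items with enough likes.
    # Assumes the legacy endpoint /v1/tags/{tag}/media/recent is usable with
    # your token; newer API versions expose different endpoints and fields.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
    TAG = "sunset"
    MIN_LIKES = 50

    url = "https://api.instagram.com/v1/tags/{}/media/recent".format(TAG)
    resp = requests.get(url, params={"access_token": ACCESS_TOKEN})
    resp.raise_for_status()

    for item in resp.json().get("data", []):
        likes = item.get("likes", {}).get("count", 0)
        if likes >= MIN_LIKES:
            image_url = item["images"]["standard_resolution"]["url"]
            print(likes, image_url)

The filtering by like count happens on your side after the API returns the media, since the API itself does not filter by likes.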
I came across a site called Social Mention and am curious about how applications like this work; hopefully somebody can offer some insights/suggestions on this.
1. Upon looking at the search results, I realize that they grab results from Facebook, Twitter, Google, etc. I suppose this is done on the fly, probably through REST APIs exposed by the sites mentioned?
2. If what I mention in point 1 is true, does that mean sentiment analysis on the documents/links returned is done on the fly too? Wouldn't that be too computationally intensive? I am curious because, besides sentiment, they also return the top keywords in the document set.
3. They have something called "trends". These look like the trending topics on Twitter, but it seems they also include phrases more than 3 words long. Is this closer to NLP entity extraction or to keyphrase extraction? Are there APIs other than Twitter's that provide this? Are "trends" generally computed from the search queries submitted by users, or does the system actually process the pages?
A curious man.
Sentiment analysis can be fast and done on the fly if, for example, it is rule-based and the dictionaries are held in memory. Curious? Get in touch.
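To give an idea of what "rule-based with dictionaries in memory" means, here is a toy sketch (the word lists are made up; real systems add negation handling, phrases, weights, and much larger lexicons):

    # Toy rule-based sentiment: look up each word in in-memory dictionaries
    # and sum the hits. Lookups are O(1), so this scales to many documents.
    POSITIVE = {"good", "great", "love", "excellent", "happy"}
    NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("I love this great phone"))      # positive
    print(sentiment("terrible battery, I hate it"))  # negative

Because each document only requires tokenization and dictionary lookups, this kind of scoring can comfortably run per search request.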
I'm creating a Book store in Magento and am having trouble figuring out the best way to handle the Authors of a Book (which would be the product).
What I currently have is an attribute called "authors" which is multi-select and has a thousand [test] values. It's still manageable, but it does get a little slow when editing a product. Also, when adding an option/value to the authors attribute itself, a huge list is rendered in the HTML, making this an inefficient solution.
Is there another approach I should take?
Is it possible to create an Author object (entity type?) which is associated with a product through a join table? If so, can someone explain how that is done or point me to some good documentation?
If I took the Author object approach, could it still be used in the layered navigation?
How would I show the list of all books for a single author?
Thanks in advance!
PS: I am aware of extensions like Improved Navigation, but AFAIK it adds something like attributes to attributes themselves, which is not what I'm looking for.
For Googlers: the same would apply to artists on a music site, or to manufacturers.
If you create an author entity type, you'll just increase your work trying to add it to layered navigation, and I don't see a reason why it would be faster.
Your approach seems like the best fit for the problem, given the way Magento is set up. How are you going to display 1,000 authors (which presumably pales in comparison to the actual list) in layered navigation?
Depending on the requirements, you could go the route of denormalizing the field and accepting free text for it. That would still allow you to display it, search on it, etc., but would eliminate the need to render every possible author in order to manipulate the list. You could add a little code around selecting the proper author (basically an AJAX autocomplete on the backend field) to minimize typos as well.
Alternatively, you could write a simple utility to add a new author to the system without the overhead of Magento loading the whole list. To be honest, though, it seems that the lag this has the potential to create on the frontend will probably outweigh the backend trouble.
Hope that helps!
Thanks,
Joe
I was wondering whether a blog post's comments should be paginated or not, and why. I did not know where to ask this question, so I asked it here.
I like the way the NY Times paginates their blogs' comments (see e.g. http://freakonomics.blogs.nytimes.com/2010/09/15/the-magic-that-is-ted/). There are 25 comments per page, and the pagination links are small and visually subtle so as not to overwhelm the page.
I think that it depends on the blog. Think about what the readers would prefer... if pagination makes comments more useful to them, then put in pagination.
Pagination is not the only way — and probably not the best way — to present blog comments. I prefer to see a small selection of the best comments rather than pages and pages of comments. I think that The Daily WTF does this very well: the author manually picks out two or three really good comments to appear on the main page, and the rest get pushed to a separate comments page.
For example: http://thedailywtf.com/Articles/Is-Your-PC-Frozen.aspx