I would like to pull content from other websites into my Drupal site using RSS feeds and show it on a page.
I searched for RSS modules and there are so many. Can you suggest a good one?
I'm trying to show the content from here
http://feeds.feedburner.com/brazen_careerist
Update: I tried installing the Feeds module and gave it this URL:
http://feeds.feedburner.com/brazen_careerist?format=xml
It imported all the items. How do I auto-import only the latest items, auto-format them, and make them available on my site whenever new content appears at the source?
Thanks a lot
-Vivek
To have it set up to auto-import you have to play around with the importer's "Basic settings". Check out the handbook page for Feeds here:
Go to "Basic settings". Decide whether
the importer should be used on a
standalone form or by creating a node
("Attached to content type"); decide
whether the importer should
periodically refresh the feed and in
what time interval it should do that
("Minimum refresh period").
I'm not quite sure what you mean by "auto-format", but I think you might want to look at field mapping (information is on the handbook page as well).
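Feeds handles the "only the latest ones" part for you once the refresh period is set, but to illustrate the idea, here is a minimal Python sketch (not Drupal code; the feedparser library is an assumption, and the feed URL is just the one from your question) of fetching a feed on a schedule and keeping only the items published since the last import:

    import calendar
    import feedparser

    FEED_URL = "http://feeds.feedburner.com/brazen_careerist?format=xml"

    def import_new_items(last_import_ts):
        """Return feed items published after last_import_ts (a Unix timestamp)."""
        feed = feedparser.parse(FEED_URL)
        new_items = []
        for entry in feed.entries:
            published = entry.get("published_parsed")  # time.struct_time or None
            if published is None:
                continue
            if calendar.timegm(published) > last_import_ts:
                new_items.append({"title": entry.title, "link": entry.link})
        return new_items

    # Example: import everything on the first run (timestamp 0), then store the
    # time of that run and pass it in on the next scheduled run.
    for item in import_new_items(last_import_ts=0):
        print(item["title"], item["link"])

In Drupal itself this is exactly what the "Minimum refresh period" plus cron gives you, so you should not need to write any of this by hand.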
Hope it helps.
I'm looking to use Redmine for document management. I know that Redmine is not ideal for this task but there is already a lot of content on the site so I'd like to utilize it if possible.
Redmine currently does not have a great Documents module. The files we've uploaded appear to be tied to the specific page they were added on, and there doesn't seem to be a way to move them to another page (short of downloading and re-uploading them to the proper page).
Idea 1
I see there is a Files section, which could work as a central repository (and you can upload documents per release); however, is there a way to set up a nice-looking 'front-end' page that automatically updates based on new submissions to the Files tab? I envision this front end as a simple wiki page with the document name, a short description and a link to the file posted in the Files tab.
There are so many documents uploaded to various pages on the Redmine site that I would only do the whole download-and-re-upload of files if there were a way to automatically update the 'front-end' wiki.
Idea 2
I see there is a DMSF plugin for Redmine. Has anyone used this before, and has it solved your document management issues? I'd like to hear your feedback. Even if DMSF doesn't totally solve my issue, anything is better than what I have now.
Thanks!
In my opinion the DMSF plugin is a perfect companion for Redmine. We have adopted it in our company. You can easily deal with document versions, WebDAV access, custom approval workflows and notifications of document modifications, with the extra value of being well integrated with Redmine features (roles, dynamic links in wiki and issue text and notes).
Is it possible to get a newsfeed for a certain section of a website? For example, I want to get the newsfeed of this page:
http://gulfnews.com/business/property
but the newsfeed I get is for the whole site. Any help?
Is there any tool where I can control which posts to publish and which to ignore? I am using http://www.rssgraffiti.com/ but it doesn't have that option in the free package.
If their content management system supported it, you could. But apparently their content management system doesn't support that.
I am currently involved in a project where we are using Liferay (6.1 GA2).
It seems that Liferay search results provide links to Web Content Fragments instead of to the pages containing them.
Have any of you gone through this issue? Do you know how to solve it?
Thanks a lot pals.
Best, Alberto
You can have a lot more content in the backend than actually displayed on any page. Further, you can display any article on multiple pages at once.
A way to work around this is to specify in the "Web Content Search" portlet that you're only interested in content that is actually published. However, this does not solve your second problem: The content can still be published on many different pages.
Every piece of content can have a "Display Page"; the setup of such a display page is well explained in the UI (see the Web Content Editor), so that search results will actually point to a proper page.
If you actually want to search for pages only instead of content (you might miss out on some metadata), I'd recommend going with a spider solution that crawls your website, indexes the pages independently of their construction elements (articles), and searches that external index.
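To illustrate the "external index" idea, here is a toy Python sketch (not anything Liferay provides; the page URLs and the requests/BeautifulSoup libraries are assumptions, and in practice you would use a real spider and search engine) that indexes rendered pages rather than individual articles:

    import re
    import requests
    from bs4 import BeautifulSoup

    # Hypothetical list of rendered pages to index (a real spider would discover these).
    PAGES = [
        "http://www.example.com/home",
        "http://www.example.com/products",
    ]

    def build_index(urls):
        """Build a tiny inverted index: word -> set of page URLs containing it."""
        index = {}
        for url in urls:
            html = requests.get(url, timeout=10).text
            text = BeautifulSoup(html, "html.parser").get_text(" ")
            for word in set(re.findall(r"[a-z0-9]+", text.lower())):
                index.setdefault(word, set()).add(url)
        return index

    def search(index, query):
        """Return pages containing every word of the query."""
        words = re.findall(r"[a-z0-9]+", query.lower())
        results = [index.get(w, set()) for w in words]
        return set.intersection(*results) if results else set()

    index = build_index(PAGES)
    print(search(index, "product catalog"))

The point is that the unit being indexed is the page as the visitor sees it, so the search result is always a page URL rather than an article fragment.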
I have to create a content inventory for a website that doesn't have a sitemap. I do not have access to modify the website, and the site is very large. How can I build a sitemap of that website without having to browse it entirely?
I tried Visio's sitemap builder, but it fails badly.
Let's say, for example, I want to create a sitemap of Stack Overflow.
Do you guys know of a tool to build it?
You would have to browse it entirely to search every page for unique links within the site and then put them in an index.
Also, for each unique link you find within the site, you then need to visit that page and search for more unique links.
You would use a tool such as HtmlAgilityPack to easily grab pages and extract links from them.
I have written an article which touches on the extracting links part of the problem:
http://runtingsproper.blogspot.com/2009/11/easily-extracting-links-from-snippet-of.html
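If you are not on .NET, here is a minimal Python sketch of the same crawl loop (requests and BeautifulSoup are assumptions, as is the starting URL; a real run against a site the size of Stack Overflow would also need politeness delays and robots.txt handling):

    from collections import deque
    from urllib.parse import urljoin, urlparse, urldefrag

    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://stackoverflow.com/"   # hypothetical starting point
    MAX_PAGES = 100                            # keep the example small

    def crawl(start_url, max_pages):
        """Breadth-first crawl of one site, returning the set of unique internal URLs."""
        site = urlparse(start_url).netloc
        seen = {start_url}
        queue = deque([start_url])
        while queue and len(seen) < max_pages:
            url = queue.popleft()
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                continue
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link, _ = urldefrag(urljoin(url, a["href"]))  # absolute URL, no #fragment
                if urlparse(link).netloc == site and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return seen

    for page in sorted(crawl(START_URL, MAX_PAGES)):
        print(page)

The same visited-set-plus-queue pattern applies whatever library you use to parse the HTML.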
I would register all your pages in a database and then just output them all on a page (PHP + SQL). Maybe indexing software could help you as well. Either way, make sure all your pages are linked up and still submit the result to Google!
Just googled and found this one.
http://www.xml-sitemaps.com/
Looks pretty interesting!
There is a pretty big collection of XML Sitemap generators (assuming that's what you want to generate -- not an HTML sitemap page or something else?) at http://code.google.com/p/sitemap-generators/wiki/SitemapGenerators
In general, for any larger site, the best solution is really to grab the information directly from the source, for example from the database that powers the site. By doing that you can get the most accurate and up-to-date Sitemap file. If you have to crawl the site to get the URLs for a Sitemap file, it will take quite some time for a larger site and it will load the server during that time (it's like someone visiting all pages in your site). Crawling the site from time to time to determine if there are crawlability issues (such as endless calendars, content hidden through forms, etc) is a good idea, but if you can, it's generally better to get the URLs for the Sitemap file directly.
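As a sketch of the "generate it straight from the database" approach (the table and column names here are hypothetical, and SQLite stands in for whatever database actually powers the site):

    import sqlite3
    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    def write_sitemap(db_path, out_path):
        """Write one <url> entry per row of a hypothetical 'pages' table."""
        conn = sqlite3.connect(db_path)
        rows = conn.execute("SELECT url, last_modified FROM pages")
        urlset = ET.Element("urlset", xmlns=NS)
        for url, last_modified in rows:
            entry = ET.SubElement(urlset, "url")
            ET.SubElement(entry, "loc").text = url
            ET.SubElement(entry, "lastmod").text = last_modified  # e.g. "2012-06-01"
        ET.ElementTree(urlset).write(out_path, encoding="utf-8", xml_declaration=True)
        conn.close()

    write_sitemap("site.db", "sitemap.xml")

Because this reads straight from the source of truth, the file is accurate and regenerating it is cheap enough to run on a schedule, with no crawling load on the web server.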
I'm looking at using a Windows SharePoint Services 3.0 wiki as a metadata repository. We basically want a community-driven dictionary, and for various reasons we're using SharePoint instead of, say, MediaWiki.
What can I do to customize or completely replace searchresults.aspx?
Features I'd add if I knew how:
Automatically load the #1 hit if it is a 100% match to the search term
Show the first few lines of each result as a preview so users don't have to click through to bad results
Add a "Page doesn't exist, click here to create it" link in cases where there's not a 100% match
I've got SharePoint Designer installed and it looks like I'll be able to use it to upload any custom .aspx files I create, but I don't see that it gives me access to searchresults.aspx.
Note: Since I plan to access this search tool from an external site via URL parameters it should be fine to leave the existing searchresults.aspx unchanged and just load this solution as a complementary search option.
Yes, everything is possible, but you will need to customize it a little bit.
I would recommend building a custom web part to display your results. Here is a nice article to start with: http://msdn.microsoft.com/en-us/library/ms584220.aspx