As the title says, let's say I want to get the number of .de domains:
Googling:
inurl:www.*.de
retrieves relevant results, but a lot of them are from the same domain.
Is there another way to do this?
A better search query would be site:de, but even so, Google's result count is just a very rough page estimate (in other words, completely unreliable and not what you are looking for).
Google is the wrong source for this.
But via Google I found this:
http://www.denic.de/hintergrund/geschichte-der-denic-eg.html
"August 2009: 13 million domains registered under .de – among them 463,000 IDNs."
I'm trying to do a wildcard search on Wikipedia but the search is not behaving the way the instructions say it should. Here's the advanced search help page:
https://en.wikipedia.org/wiki/Help:Advanced_search
As an example, it says this regarding a Wildcard search:
the query *stan will match Kazakhstan or Afghanistan or Stan Kenton.
However, when I attempt to do that search (or even click on the embedded link to that search), I only get
the page *stan does not exist
and it just lists a bunch of "Stan" entries starting with "Stan Laurel filmography."
Why would this feature not work? Am I missing something?
It does work; however, because direct matches for "stan" are scored higher than words that merely contain it, Kazakhstan ends up very far down in the results. You can narrow the results slightly with intitle:*stan, but they are still poor. A quick check with k*stan shows that the wildcard itself works.
Conclusion: the user-written help page has a bad example.
My question is not about parsing.
I have been looking through the Wikipedia API. I need to search for companies and get a one-sentence summary. It's working well; the only problem I have is when I need to disambiguate. It's hard for my code to know whether "Dropbox (service)" or "Dropbox (band)" is the Dropbox company my user is looking for.
I tried to put the word "company" in the query, expecting it to work like a Google search, but unfortunately it didn't.
So my question is: is there an easy way to disambiguate the results I get by telling Wikipedia that it is a "company" I want?
If you're looking for companies only, then consider using their full names instead of short forms. In the case of Dropbox, the name of the company is Dropbox, Inc. If you search for Dropbox, Inc. on Wikipedia, you will be redirected to the page Dropbox (service), which I believe is the page you're looking for.
If you don't have the resources to have the name of the company in the perfect format, then consider using Category:Companies to refine your results further.
When you get to the page, you can mine the extract of the company by using the MediaWiki API as follows:
https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=Dropbox%20(service)
Note: The extract is called section0 in MediaWiki
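If it helps, here is a minimal sketch of calling that same endpoint from Python with the requests package (the title "Dropbox (service)" is just the example from above):
import requests

# Ask the MediaWiki API for the plain-text intro extract of one page.
resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "exintro": "",
        "explaintext": "",
        "titles": "Dropbox (service)",
    },
)
for page in resp.json()["query"]["pages"].values():
    print(page["extract"])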
I recommend trying Wikidata. Wikidata is a multilingual, factual database of everything, and it has a query interface at query.wikidata.org. The query language it uses is called SPARQL. For instance, if you're interested in a list of well-known cats, https://w.wiki/W4W is your query. More details can be found at https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service.
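For the company case, one rough sketch (my own; the choice of Q4830453, "business", as the class to filter on is an assumption) would be to ask the public SPARQL endpoint for entities labelled "Dropbox" that are an instance of a business:
import requests

# Find Wikidata items labelled "Dropbox" that are (a subclass of) a business.
# Q4830453 = business; P31 = instance of; P279 = subclass of.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label "Dropbox"@en ;
        wdt:P31/wdt:P279* wd:Q4830453 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], row["item"]["value"])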
# Third-party package: pip install wikipedia
import wikipedia

# Print the lead-section summary for a given page title.
print(wikipedia.summary("COMPANY_NAME"))
Try to filter out the companies by their categories; there is a list provided at the end of the page:
# Fetch a page and inspect its title and categories.
xx = wikipedia.page("Dropbox")
print(xx.title)
print(xx.categories)
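Building on that, here is a rough sketch of how you might automate the filter; the "companies" substring check is a naive heuristic of mine, not an official flag:
import wikipedia

def looks_like_company(title):
    # Heuristic: treat a page as a company if any category mentions "companies".
    page = wikipedia.page(title, auto_suggest=False)
    return any("companies" in cat.lower() for cat in page.categories)

def company_summary(name):
    # Return a one-sentence summary of the first search result that looks like a company.
    for title in wikipedia.search(name):
        try:
            if looks_like_company(title):
                return wikipedia.summary(title, sentences=1, auto_suggest=False)
        except (wikipedia.DisambiguationError, wikipedia.PageError):
            continue
    return None

print(company_summary("Dropbox"))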
I have a group of websites I want to check daily for new content and I'm not sure what the best way is. I'm hoping one of you can help me.
With Google Custom Search, I can search a group of websites -- but what I want is to find any content posted in the past 24 hours, not just content related to a specific keyword. I've tried searching with no keyword and I get no results.
With regular Google Search, I can choose a single site (site:www.example.com), use search tools to limit the results to the past 24 hours, enter no keyword and find anything that's new. But that only works for one site at a time, as far as I can tell.
With Google News search, I can find new content from multiple sites -- but that only works for news sources. If I enter nytimes.com, it works; if I enter dcenr.gov.ie/ I get nothing.
Any ideas on another way to approach this?
You can try creating an RSS feed for the web pages and then using an RSS reader to check for updates.
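If you would rather script it, here is a small sketch using the feedparser package; the feed URLs are placeholders, and the 24-hour cutoff matches what you described:
import calendar
import time
import feedparser  # third-party: pip install feedparser

# Placeholder feed URLs for the sites you want to watch.
FEEDS = [
    "https://www.example.com/feed.xml",
    "https://blog.example.org/rss",
]

def entries_from_last_day(feed_urls):
    cutoff = time.time() - 24 * 60 * 60
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            # Feeds expose either a published or an updated timestamp (UTC struct_time).
            stamp = entry.get("published_parsed") or entry.get("updated_parsed")
            if stamp and calendar.timegm(stamp) >= cutoff:
                yield entry.get("title", ""), entry.get("link", "")

for title, link in entries_from_last_day(FEEDS):
    print(title, link)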
At a friend's request, I had to build a website for 3 countries on 3 subdomains.
Like
au.example.com
us.example.com
in.example.com
All three share common content along with some unique content. All traffic coming from a particular country is redirected to the related subdomain.
My problem is Google indexing. Since all traffic from the USA is directed to us.example.com, Google's bots will index only the us.example.com subdomain. But there is a lot of other content at in.example.com, so how can I get Google to index my other two subdomains?
Thanks for your advice.
How are you doing your location-specific sorting?
What you'll have to do is add an exception based on something, probably the user-agent.
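For illustration only, here is a minimal sketch of such an exception in a Flask-style handler; the bot tokens, subdomain URL, and GeoIP lookup are placeholder assumptions, so adapt them to whatever stack actually serves the site:
from flask import Flask, redirect, request

app = Flask(__name__)

# Assumption: a few common crawler user-agent tokens.
CRAWLER_TOKENS = ("googlebot", "bingbot", "slurp")

def country_subdomain(ip):
    # Placeholder for whatever GeoIP lookup the site already performs.
    return "us"

@app.route("/")
def index():
    ua = (request.headers.get("User-Agent") or "").lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        # Let crawlers index the subdomain they actually requested.
        return "content for this subdomain"
    return redirect("https://" + country_subdomain(request.remote_addr) + ".example.com/")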
However, Google has versions of its search for all languages and regions, so you really shouldn't worry about it. Its AU search will crawl AU, and so on. If you want to allow Google to index AU for its US search... that might get you in a bit of trouble with Google (and, honestly, it would defeat the purpose of what you are trying to achieve).
I have a list of URLs and am trying to collect their "descriptions." By description I mean what comes up if you Google the link. For example, Googling http://stackoverflow.com shows the description as:
A language-independent collaboratively
edited question and answer site for
programmers. Questions and answers
displayed by user votes and tags.
This is the data I'm trying to accumulate for the URLs I have.
I tried parsing the URLs' meta descriptions; however, most of them lack one (yet Google and other search engines manage to come up with a description somehow).
Any ideas? Should I just "google" each link and scrape the data? I have a feeling Google wouldn't like this...
Thanks guys.
Different search engines have different algorithms for getting the description out of the page if/when it is missing the description meta tag. Some ignore the tag even if it's there.
If you want the description Google has, the most accurate way to get it would be to scrape it. Otherwise, you could write your own or look around on the web for code that does it.
These are called snippets.
Google use proprietary (and possibly patented) methods to garner this information, so there is no simple answer.
As you suggest, they will use meta-description information if it is there. (How to set the meta-information to help Google.)
They will also honour requests from the page authors to NOT include snippets. (How to prevent Google from displaying snippets) You should probably respect this too (as well as robots.txt, of course.)
You may have some luck with existing auto-summary packages, such as OTS.
You may want to check AboutUs.org (e.g. http://www.aboutus.org/StackOverflow.com).
But, there's little chance that the site will have an aboutus page and not have a meta description.
Some info that might explain how Google does this:
Webmasters/Site owners Help
Adding a URL to Google
I am not familiar with Google APIs, but perhaps there is an official way to get such information.
Interesting: some sources are better than others. For audiotuts.com, Google has a worse description than AboutUs.com.
Google:
Nov 18th in General by Joel Falconer ·
1. Recently, an AUDIOTUTS reader asked me about creative process. While this
is a topic that can’t be made into a
...
AboutUs.com:
AUDIOTUTS is a blog/tutorial site for
musicians, producers and audio
junkies! It is the sister site of the
popular PSDTUTS, VECTORTUTS and
NETTUTS.
I hate problems like these... they should be trivial but they aren't!
If you can assume English content, you can first look for the meta description, and if that doesn't work, look for the first two or three sentence-like word sequences.
A product I worked on looked for the first P or DIV that contained more than one sequence of > n "words" delimited by periods. It would use the two or three sentence-like sequences, up to x total words, as a summary paragraph. It wasn't 100% accurate, but good enough for the average case. The number of words was adjusted a few times to eliminate things like navigation elements.
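A rough sketch of that kind of fallback (meta description first, then the first <p> or <div> with a couple of sentence-like chunks; the word thresholds here are arbitrary and the helper name is mine):
import requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

MIN_WORDS_PER_SENTENCE = 4   # arbitrary thresholds, tune for your pages
MAX_SUMMARY_WORDS = 60

def describe(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # 1. Prefer the meta description when the page provides one.
    meta = soup.find("meta", attrs={"name": "description"})
    if meta and meta.get("content", "").strip():
        return meta["content"].strip()

    # 2. Otherwise take the first <p> or <div> containing at least two
    #    sentence-like chunks (period-delimited runs of several words).
    for block in soup.find_all(["p", "div"]):
        text = block.get_text(" ", strip=True)
        sentences = [s for s in text.split(".") if len(s.split()) >= MIN_WORDS_PER_SENTENCE]
        if len(sentences) >= 2:
            return " ".join(text.split()[:MAX_SUMMARY_WORDS])
    return None

print(describe("https://stackoverflow.com"))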