CONCLUSIONS
In this article we studied Web page topic classification from the URL alone. Even though content-based topic classifiers gave better results than URL-based ones, topic classification from the URL is preferable when the content is not available or when classification speed is of the highest importance.
We can summarize our main findings for URL-based Web page topic classification as follows.
- (1) We showed that dictionary-based baseline algorithms are not enough for high-performance URL-based topic classification. For the dictionary-based methods, even the best-performing variant, using all-grams (a combination of 4-, 5-, 6-, 7-, and 8-grams), achieved a macroaveraged F-measure of only 73. On the other hand, for topic classifiers where precision is important and some recall can be sacrificed, token-based statistical dictionaries can be used, as they achieved a macroaveraged precision of 92 with a macroaveraged recall of 34 on the ODP dataset.
- (2) We showed that the features have more impact on classifier performance than the classification algorithms. All-grams derived from URLs were the best feature set, considerably better than tokens. Explicitly encoding positional information for all-grams derived from tokens performed slightly better than using such all-grams without positional information, but still slightly worse than using all-grams from the whole URL (see the feature-extraction sketch after this list). Given the same feature set, the ME and SVM algorithms showed the same performance, and the other algorithms also performed similarly.
- (3) We obtained a macroaveraged F-measure of 83 when the ME algorithm was used with all-grams derived directly from URLs.
- (4) We reported a performance which improves on the best previously reported URL-only performance for a small dataset of university pages by 13 points in F-measure.
- (5) Using summaries of Web pages for training and testing led to a large improvement over using only URLs, with a macroaveraged F-measure of 94. On the other hand, the performance of URL-based topic classification decreased when the summaries of Web pages were used in the training phase in addition to URLs. The reason for this is the vocabulary difference between URLs and the summaries of Web pages.
- (6) We achieved an additional small improvement by using inlink information.
- (7) Applying boosting to combine different algorithms gave a small performance improvement of 1 or 2 points in F-measure.
- (8) The challenges for URL-based topic classification are:
  - (i) data consistency, as the definitions of topics differ from one dataset to another,
  - (ii) overlap between different topics in one dataset,
  - (iii) empty URLs consisting of only stop tokens or previously unseen tokens.
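To make the feature sets in finding (2) concrete, the sketch below illustrates the three representations compared above: tokens, all-grams (4- to 8-grams) taken from the whole URL, and all-grams taken from individual tokens with positional information attached. The tokenizer (splitting on non-alphanumeric characters) and the positional encoding (prefixing each gram with its token index) are assumptions made for illustration only; the function names and the example URL are not from the article.

```python
import re

def tokens(url):
    """Token features: split the lowercased URL on non-alphanumeric characters (assumed tokenizer)."""
    return [t for t in re.split(r"[^a-z0-9]+", url.lower()) if t]

def all_grams(text, n_min=4, n_max=8):
    """All-grams: every 4-, 5-, 6-, 7-, and 8-gram of the given string."""
    return [text[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

def positional_token_grams(url):
    """All-grams per token, tagged with the token's position (one possible positional encoding)."""
    return [f"{pos}:{gram}" for pos, tok in enumerate(tokens(url)) for gram in all_grams(tok)]

url = "http://www.mensbasketball-example.org/"  # hypothetical URL
print(tokens(url))                      # ['http', 'www', 'mensbasketball', 'example', 'org']
print(all_grams(url.lower())[:3])       # ['http', 'ttp:', 'tp:/']
print(positional_token_grams(url)[:3])  # ['0:http', '2:mens', '2:ensb']
```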
Dictionaries and String Matching
- We formed four types of dictionaries, each containing a list of representative tokens or all-grams derived from URLs for the topics. In the first type of dictionary, Tokens by-Hand, we used all words from the first two levels of the ODP hierarchy. For example, the terms “Basketball” and “Football”, which are listed one level below “Sports” in the ODP hierarchy, are added to the sports dictionary. Some terms, such as “Online” listed under “Games,” are not put into the games dictionary if they appeared non-topic-specific to a human. The average dictionary size was 19.8 words per topic.
- For the second type of dictionary, Tokens by-Statistics, we formed a list of tokens of length greater than 2 by first merging the “set-of-tokens” from the URLs of each Web page listed under ODP for each topic. A set-of-tokens is simply the set of tokens seen in a URL. We then obtained the representative tokens for a particular topic by looking at the percentage of “set-of-tokens” containing a given token, both for the topic itself and for the other topics.
A token was added to the dictionary corresponding to the topic if:
(i) it appeared in at least five “set-of-tokens” of the topic, (ii) it had a precision of at least 80% on these “set-of-tokens”, and (iii) it had a recall of at least 0.01%.
The average dictionary size for this approach was 987 words. We decided on these rules after manually inspecting the list of tokens: we found that tokens with a recall above 0.01% and a precision of 80% or more tended to be tokens that humans might choose as “topically relevant”. In the Tokens by-Statistics dictionary, “basketball” and “mensbasketball” are example tokens for the Sports topic.
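As a concrete illustration of these selection rules, here is a minimal sketch, assuming the URLs are already grouped by ODP topic and that a tokenize function returning a URL's set-of-tokens is available; the function names, the input format, and the threshold parameters are hypothetical, with defaults set to the rules above (at least five occurrences, precision of at least 80%, recall of at least 0.01%).

```python
from collections import defaultdict

def build_statistical_dictionary(urls_by_topic, tokenize,
                                 min_count=5, min_precision=0.80, min_recall=0.0001):
    """Select representative tokens per topic from the set-of-tokens of the topic's URLs."""
    # token -> topic -> number of set-of-tokens containing the token
    counts = defaultdict(lambda: defaultdict(int))
    pages_per_topic = {}

    for topic, urls in urls_by_topic.items():
        pages_per_topic[topic] = len(urls)
        for url in urls:
            for token in set(tokenize(url)):     # the set-of-tokens of this URL
                if len(token) > 2:               # keep only tokens of length greater than 2
                    counts[token][topic] += 1

    dictionaries = defaultdict(set)
    for token, per_topic in counts.items():
        total = sum(per_topic.values())
        for topic, count in per_topic.items():
            precision = count / total                # share of matching set-of-tokens that belong to this topic
            recall = count / pages_per_topic[topic]  # share of the topic's set-of-tokens containing the token
            if count >= min_count and precision >= min_precision and recall >= min_recall:
                dictionaries[topic].add(token)
    return dict(dictionaries)
```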
- The third type of token dictionary is referred to as Domains by-Statistics. Here the list of typical domains for each topic is formed in the same way as the representative tokens for the Tokens by-Statistics dictionary; the difference is that the “set-of-tokens” now contains only the domains of the URLs. With the Domains by-Statistics dictionary, a Web page is classified as “Sports” if and only if it comes from one of the typical sports domains. In the Domains by-Statistics dictionary, “football.co.uk” and “sportsnetwork.com” are example domains for the Sports topic.
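A minimal sketch of this classification rule, using Python's standard urlparse to extract the host; the function name and the dictionary structure are illustrative only:

```python
from urllib.parse import urlparse

def classify_by_domain(url, domain_dictionaries):
    """Return the topics whose domain dictionary contains the URL's host."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):          # normalize the common "www." prefix
        host = host[len("www."):]
    return [topic for topic, domains in domain_dictionaries.items() if host in domains]

# Classified as Sports if and only if the host is one of the typical sports domains.
print(classify_by_domain("http://www.sportsnetwork.com/scores",
                         {"Sports": {"football.co.uk", "sportsnetwork.com"}}))
# -> ['Sports']
```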
- Finally, we also trained a dictionary using all-grams instead of tokens, named All-grams by-Statistics, which resulted in an average dictionary size of 13k n-grams. In the All-grams by-Statistics dictionary, “sports” and “sportspa” are example grams for the Sports topic.
The table shows the performance of the baseline algorithms when the tokens (or all-grams) from the dictionary are checked, with a token-match strategy, against the tokens (or all-grams) derived from the test URLs. For all topics, all the token dictionaries gave fairly high precision values but low recall values. The Tokens by-Hand dictionary gave the lowest recall values, as its vocabulary for each topic is limited to the first two levels of the ODP hierarchy. The Domains by-Statistics dictionary seems to give higher precision values than the other dictionaries; however, it has a macroaveraged recall of only 22. This shows that domains are indeed good indicators, but that using only domain information is not enough to achieve an adequate level of recall. When we compare the performances of all dictionaries, we see that the statistical dictionaries perform better. We had two types of statistical dictionaries, Tokens by-Statistics and All-grams by-Statistics. The Tokens by-Statistics dictionary achieved a higher precision but a much lower recall than the All-grams by-Statistics dictionary. On the ODP dataset, All-grams by-Statistics gives the highest performance in terms of F-measure, with a macroaveraged F-measure of 73.
For the first two token-based dictionaries we also experimented with substring matches, rather than token matches, on the ODP dataset. This increases recall, as now the “volleyball” in http://www.attackvolleyballclub.net/ is also detected. The macroaverages in this case are P = 84, R = 15, F = 24 for the Tokens by-Hand dictionary and P = 70, R = 63, F = 62 for the Tokens by-Statistics dictionary.
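The two matching strategies can be sketched as follows, again assuming a simple tokenizer that splits URLs on non-alphanumeric characters; the names and the example dictionary are illustrative:

```python
import re

def url_tokens(url):
    """Split a URL into lowercase tokens on non-alphanumeric characters (assumed tokenizer)."""
    return [t for t in re.split(r"[^a-z0-9]+", url.lower()) if t]

def token_match(url, dictionary):
    """True if any dictionary entry equals one of the URL's tokens."""
    tokens = set(url_tokens(url))
    return any(entry in tokens for entry in dictionary)

def substring_match(url, dictionary):
    """True if any dictionary entry occurs as a substring of the URL."""
    lowered = url.lower()
    return any(entry in lowered for entry in dictionary)

sports = {"volleyball", "basketball"}
url = "http://www.attackvolleyballclub.net/"
print(token_match(url, sports))      # False: "attackvolleyballclub" is a single token
print(substring_match(url, sports))  # True: "volleyball" is found inside that token
```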
From these results we can conclude that the baseline algorithms are not sufficient for good performance. However, the results for the baseline algorithms give us insight into which topics are easier to classify. For example, “Adult”, “Sports”, and “Games” seem to have higher F-measure values than the macroaverage for most of the dictionaries, both with token match and with substring match. In practice, dictionary-based baselines using tokens might be of interest if high precision, but not necessarily high recall, is required. For example, a topic-focused crawler might want to detect some clearly relevant URLs early during a crawl and then use these pages to “bootstrap”, for example, by using the link information.