Monday, December 31, 2012

Howto: build a news aggregator (in 100 LOC)


In this post, I'll cover how to build a crude, simple, but working news aggregator in about 100 lines of Python (source code). While a complete news aggregator comprises at least three steps:
  • filter: select a subset of stories of interest, i.e., the top news -- e.g., deciding that Zynga's shuttering of several games will be one of the most interesting stories for Techmeme's readership in the screenshot below
  • aggregate: cluster like documents -- e.g., recognizing that Techcrunch and GamesIndustry International are both covering the same story
  • choose: pick the best representative story from each cluster -- e.g., deciding that the Techcrunch story should be highlighted over the GamesIndustry International story,
in this article, we will focus on the second step only, aggregation of similar articles. 


Let us state the goal explicitly: given multiple sources of documents, such as RSS news feeds, cluster together similar documents that cover the same material. For instance, both Reuters and the Associated Press might cover Hillary Clinton's admission to hospital (a major story at the time of writing). In the Google News screenshot below, we can see that USA Today, the Detroit Free Press, the Boston Herald and at least 10 other news organizations covered that story. We wish to recognize that these are the same underlying story and cluster, or aggregate, them.


My approach was the following:
  • parse RSS feeds
  • extract keywords using TF-IDF (i.e., term frequency * inverse document frequency)
  • take the union of those keywords to form a feature set that encompasses the whole corpus
  • create a feature vector for each document
  • compute matrix of cosine similarities
  • run hierarchical clustering and plot dendrogram
  • extract final clusters of interest
I'm not claiming this is an optimal scheme or that it will necessarily scale well, and I did not focus on vectorized operations. It does, however, produce non-embarrassing results in 100 lines of code. Later, we will discuss other approaches, including the algorithms-versus-humans debate.

Let's build an aggregator. To start, we will need some data sources. I chose a handful of RSS feeds about technology and IT news:

feeds = [
    'http://www.sfgate.com/rss/feed/Tech-News-449.php',
    'http://feeds.feedburner.com/TechCrunch/startups',
    'http://news.cnet.com/8300-1001_3-92.xml',
    'http://www.zdnet.com/news/rss.xml',
    'http://www.computerweekly.com/rss/Latest-IT-news.xml',
    'http://feeds.reuters.com/reuters/technologyNews',
    'http://www.tweaktown.com/news-feed/'
]

(We can assume that these sources have already mostly accomplished the first news aggregator step: filtering.)

Next, we need to parse those sources. I decided to use the document title and the description, which is a shortish summary. I used the feedparser library and the natural language processing library NLTK. For each document, I concatenate the title and description, convert to lowercase, and remove one-character words. I store those final documents in a corpus and also store the titles to make the final results more interpretable:

import feedparser
import nltk
corpus = []
titles=[]
ct = -1
for feed in feeds:
    d = feedparser.parse(feed)
    for e in d['entries']:
       words = nltk.wordpunct_tokenize(nltk.clean_html(e['description']))
       words.extend(nltk.wordpunct_tokenize(e['title']))
       lowerwords=[x.lower() for x in words if len(x) > 1]
       ct += 1
       print ct, "TITLE",e['title']
       corpus.append(lowerwords)
       titles.append(e['title'])

Here is a sample of those data (titles only):

0 TITLE Tweeters say the dumbest things
1 TITLE IFixit thrives by breaking tech items down
2 TITLE Earthcam's New Year's Live 2013
3 TITLE Content Fleet offers publishers hot tips
...
96 TITLE Sony stops shipments of PS2 console in Japan
97 TITLE RIM's first patent payment to Nokia: $65 million
98 TITLE ZTE aiming for thinner 5-inch 1080p-capable smartphone, Grand S has grand dreams
99 TITLE Japanese security firm to take to the sky in 2014, will unleash crime-fighting drones

We will extract keywords using TF-IDF. I modified the following from this post:

import math
from operator import itemgetter
def freq(word, document): return document.count(word)
def wordCount(document): return len(document)
def numDocsContaining(word,documentList):
  count = 0
  for document in documentList:
    if freq(word,document) > 0:
      count += 1
  return count
def tf(word, document): return (freq(word,document) / float(wordCount(document)))
def idf(word, documentList): return math.log(len(documentList) / numDocsContaining(word,documentList))
def tfidf(word, document, documentList): return (tf(word,document) * idf(word,documentList))

I only want to extract a few top keywords from each document and I'll store the set of those keywords, across all documents, in key_word_list:

import operator
def top_keywords(n,doc,corpus):
    d = {}
    for word in set(doc):
        d[word] = tfidf(word,doc,corpus)
    sorted_d = sorted(d.iteritems(), key=operator.itemgetter(1))
    sorted_d.reverse()
    return [w[0] for w in sorted_d[:n]]   

key_word_list=set()
nkeywords=4
[[key_word_list.add(x) for x in top_keywords(nkeywords,doc,corpus)] for doc in corpus]
   
ct=-1
for doc in corpus:
   ct+=1
   print ct,"KEYWORDS"," ".join(top_keywords(nkeywords,doc,corpus))

Here is a sample of those data:

0 KEYWORDS she tweeted athletes south
1 KEYWORDS ifixit repair devincenzi items
2 KEYWORDS earthcam live famed sound
3 KEYWORDS fleet content tips publishers
...
96 KEYWORDS playstation sony console ouya
97 KEYWORDS payment nokia patent agreement
98 KEYWORDS zte grand boots ces
99 KEYWORDS drones soon 2014 drone

Now that we have this superset of keywords, we need to go through each document again and compute the TF-IDF for each of those terms. The resulting feature vector will likely be sparse, as most of the entries will be zero.

feature_vectors=[]
n=len(corpus)

for document in corpus:
    vec=[]
    [vec.append(tfidf(word, document, corpus) if word in document else 0) for word in key_word_list]
    feature_vectors.append(vec)

With a TF-IDF vector for each document, we can compare documents pairwise. I decided to use cosine distance, i.e., one minus the cosine similarity (and yes, this matrix is symmetric, so the code below is far from optimized):

import numpy
mat = numpy.empty((n, n))
for i in xrange(0,n):
    for j in xrange(0,n):
       mat[i][j] = nltk.cluster.util.cosine_distance(feature_vectors[i],feature_vectors[j])

We now have a single number quantifying how similar each pair of documents is.

The next step is clustering. The most obvious initial choice might be k-means or k-means++ clustering. It is simple, well known and can work relatively well. Besides the risk of getting stuck in local optima, the major problem is that one has to choose k, which is not always easy. Moreover, I strongly suspect that the characteristics of news change dramatically from day to day. On presidential election day, almost all top stories will be about voting and speculation on the results, so there may be very few, very large clusters. On slow news days the story landscape may be very different. As such, I think it would be difficult to choose a constant k across days, and the characteristics of the data may change so much that a scheme to choose k dynamically might be tricky.

I chose a different approach: agglomerative, or hierarchical, clustering. Here clusters are grown by fusing neighboring documents to form a tree. The structure of that tree may change radically from day to day, but we can choose a similarity threshold to prune the tree into a set of final clusters, and that threshold can stay constant across days.

I used the hcluster python library and saved the dendrogram to file:

from hcluster import linkage, dendrogram
t = 0.8
Z = linkage(mat, 'single')
dendrogram(Z, color_threshold=t)

import pylab
pylab.savefig( "hcluster.png" ,dpi=800)



Finally, we need to choose a threshold at which to prune -- how similar is similar enough. A value of 0.8 worked best for my data. There is then a little work to extract the clustered items from the dendrogram and print out the IDs and titles of the final clusters.

def extract_clusters(Z,threshold,n):
   clusters={}
   ct=n
   for row in Z:
      if row[2] < threshold:
          n1=int(row[0])
          n2=int(row[1])

          if n1 >= n:
             l1=clusters[n1] 
             del(clusters[n1]) 
          else:
             l1= [n1]
      
          if n2 >= n:
             l2=clusters[n2] 
             del(clusters[n2]) 
          else:
             l2= [n2]    
          l1.extend(l2)  
          clusters[ct] = l1
          ct += 1
      else:
          return clusters

clusters = extract_clusters(Z,t,n)

for key in clusters:
   print "=============================================" 
   for id in clusters[key]:
       print id,titles[id]

Here then, in combination with the dendrogram above, are my final results:

=============================================
35 Huawei linked to plan to sell restricted equipment to Iran
65 Exclusive: Huawei partner offered embargoed HP gear to Iran
=============================================
10 UK's Pearson invests in Barnes & Noble's Nook
73 Pearson to buy stake in Nook, Barnes & Noble shares up
=============================================
21 Kim Dotcom To Host Mega’s Launch Event At His New Mega Zealand Mansion Next Month
94 Kim Dotcom to host Mega's launch event at his New Zealand mega mansion next month
=============================================
90 Data suggests Facebook's Poke has led to online buzz around Snapchat
93 Snapchat videos and photos can be saved after all
=============================================
14 Apple CEO Tim Cook paid $4.16 million
41 Apple's Tim Cook sees his 2012 pay fall 99 percent
=============================================
8 Netflix CEO gets pay bump after 2012 cut
72 Netflix increases CEO Hastings' 2013 salary to $4 million
=============================================
95 Nissan Leaf owners can get their batteries refreshed under new warranty options
99 Japanese security firm to take to the sky in 2014, will unleash crime-fighting drones
=============================================
83 Windows RT ported to HTC HD2
77 Samsung Galaxy S III extended battery arriving in Germany on January 5
92 RumorTT: Samsung Galaxy S IV rumored for April Launch with integrated S Pen
75 Unnamed BlackBerry 10 device slides through FCC with AT&T-capable LTE abilities
98 ZTE aiming for thinner 5-inch 1080p-capable smartphone, Grand S has grand dreams
=============================================
40 HP confirms: Feds investigating the Autonomy acquisition
52 US Department of Justice is investigating allegations of accounting fraud at Autonomy
11 HP says gov't investigating troubled Autonomy unit
39 Autonomy founder fires back at HP after news of DOJ inquiry
=============================================
55 Top 10 broadband stories of 2012
63 BT wins BDUK broadband contract in Norfolk
=============================================
97 RIM's first patent payment to Nokia: $65 million
84 Fujitsu blames weak Windows 8 demand for poor sales
87 Cheezburger network raises $5M in funding for LOLcats, FAIL Blog, others
79 Samsung has big plans for Silicon Valley with 1.1 million square foot R&D center
89 Apple drops patent claim against Samsung Galaxy S III Mini

The complete output is here.

To sort the items within a cluster, the final news aggregator step, one could rank by PageRank using code such as this.
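
I won't reproduce that code here, but to make the step concrete, here is a minimal sketch that ranks the stories within one cluster by a per-source authority score. The source_rank values are made up, and it assumes a links list of entry URLs collected alongside titles in the parsing loop:

# Hypothetical per-source authority scores (stand-ins for real PageRank values).
source_rank = {
    'reuters.com': 0.9,
    'techcrunch.com': 0.7,
    'tweaktown.com': 0.3,
}

def rank_cluster(doc_ids, links, default_rank=0.1):
    """Sort the document IDs in one cluster, most authoritative source first."""
    def score(doc_id):
        domain = links[doc_id].split('/')[2].replace('www.', '')  # crude domain extraction
        return source_rank.get(domain, default_rank)
    return sorted(doc_ids, key=score, reverse=True)

# e.g. print the best-ranked title of each cluster:
# for key in clusters:
#     best = rank_cluster(clusters[key], links)[0]
#     print titles[best]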

We see good clusters around Huawei, Pearson, Kim Dotcom, Facebook/Snapchat, Tim Cook, and Netflix. All of those seem reasonable. The next cluster (the Nissan Leaf and the crime-fighting drones) is spurious. Next is a broader cluster around smartphones. Then comes a good four-story cluster around the HP and Autonomy scandal. The next cluster relates to broadband: the documents are related but not really about the same material. Finally, there is a cluster around "millions", which covers both dollars and square feet.

The results are not great. I'm not going to build a techmeme.com competitor from this. However, they are surprisingly respectable for a first pass and for so few lines of code. 

To try to improve results, I tried a weighted TF-IDF. That is, I suspect that words in the title are relatively more important than those in the description, so I set a weight of 1 for each word in the description and a weight title_wt > 1 for each word in the title. When computing term frequency, I sum the weights of the words present rather than their counts. The code for this change is here.
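
That code is not reproduced here, but the core of the change is a weighted term frequency. A minimal sketch, assuming each document is stored as a list of (word, weight) pairs with weight 1 for description words and title_wt for title words:

title_wt = 1.5  # illustrative value; anything > 1 up-weights title words

def weighted_freq(word, document):
    """document is a list of (word, weight) pairs; sum the weights of the matches."""
    return sum(wt for (w, wt) in document if w == word)

def weighted_tf(word, document):
    """Weighted term frequency: matched weight over total weight in the document."""
    total = float(sum(wt for (_, wt) in document))
    return weighted_freq(word, document) / total

# e.g. weighted_tf('zynga', [('zynga', title_wt), ('shuts', title_wt), ('zynga', 1), ('games', 1)])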

I did not find that this improved results. The dominant parameter by far was the number of keywords taken from each document (nkeywords=4 above). Increasing it dramatically increases false positives (i.e., larger mixed clusters, such as the "millions" cluster above). Reducing it below 4 led to very few clusters (for a given document similarity threshold). Actually, I was surprised that the optimal value of 4 was so low; I had expected that a larger number would be needed to disambiguate the different articles.

A potential improvement that I have not tried is stemming the words. In the sample keyword output above you might have noticed the following:

99 KEYWORDS drones soon 2014 drone

Drones and drone are the same noun. Had I stemmed these words, it would have increased the TF-IDF of drone and freed a slot for an additional keyword.
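
I have not tested this, but the change would be small. A sketch using NLTK's Porter stemmer, applied where lowerwords is built in the parsing loop:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

# In the parsing loop, replace the lowerwords line with:
# lowerwords = [stemmer.stem(x.lower()) for x in words if len(x) > 1]

print(stemmer.stem("drones"))  # -> drone
print(stemmer.stem("drone"))   # -> drone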

I may be able to tweak my current code and improve results. I could try different parameters, different similarity metrics or different clustering schemes, but it is not clear how good the results could get. Could I match a human curator? According to Gabe Rivera, the founder of Techmeme.com, algorithms are not currently a match for humans. His site employs a small team of editors [http://techmeme.com/about] to curate the top news:

Our experience leads us to believe that a thoughtful combination of both algorithmic and human editing offers the best means for curating in a space as broad as technology.

I somehow started thinking about news aggregators just a few days before prismatic got a large round of funding. In the days that followed there was some debate about algorithms versus humans. Here is Techcrunch:
News aggregation is a glory-less pursuit, where even the companies that do succeed have to grapple with the fact that no algorithm is ever going to be as good as the human brain at judging what news/content will appeal to said brain
And Gabe Rivera (from Techmeme.com): 
"A lot of people who think they can go all the way with the automated approach fail to realize a news story has become obsolete," said Rivera, explaining that an article can be quickly superseded even if it receives a million links or tweets. from gigaom 
and paraphrasing:
Some decisions, after all, are best left to humans.  When a news story breaks, for example, or a much-hyped product launches, a wealth of news coverage follows.  Which is the best version?  While Techmeme’s algorithms often derive this answer from “signals” on the Internet, sometimes surfacing the right story for the audience — perhaps the most comprehensive take or the most nuanced coverage — requires human intervention. outbrain.com
I should note that the experience of introducing direct editing has been a revelation even for us, despite the fact that we planned it. Interacting directly with an automated news engine makes it clear that the human+algorithm combo can curate news far more effectively than the individual human or algorithmic parts. It really feels like the age of the news cyborg has arrived. Our goal is to apply this new capability to producing the clearest and most useful tech news overview available.
(See also this thread.)

As mentioned earlier, news aggregation is not just about clustering similar stories. There are really three steps: story selection, clustering like documents, and selecting the best item from each cluster. This is certainly a set of hard problems. Which stories might be of interest depends on the audience and other context. Clustering similar documents is probably the easiest subtask, and even that is fraught with problems, e.g., that a bag-of-words model does not distinguish millions of dollars from millions of square feet. (Thus, an n-gram model might be another possible approach.) There is a sea of potential news sources; they have different foci, reputations come and go, quality changes, and news organization editors vary and change too. As Gabe mentions, there is also a temporal aspect: there is a window of interest before the news gets stale.

There is no reason that an algorithm or suite of algorithms could not do this well. There are certainly a lot of social cues now that an algorithm could use to identify trending, interesting stories: the number of likes, shares or retweets; the reputation of those sharing the news; the number or rate of comments on those stories, etc. To understand the audience and refine itself, an algorithm could use direct user feedback and, again, shares, likes and retweets of those top stories. However, it remains a hard problem. With the big news sources covering core issues, smaller news operations have to work a different angle, and sometimes those angles are the most interesting. Journalists are not consistent: they might write a blazing article one day and a mediocre one the next. Thus, for now, an algorithmic base plus human editors, or pure crowd-sourced filtering (such as reddit), seems to produce the best quality results.

Thursday, December 27, 2012

Detecting credible tweets


There has been some buzz in the last week or so around a paper on detecting the credibility of tweets. As Twitter has grown, so has spam. Moreover, during crises such as tsunamis, the Arab Spring, the UK riots and so on, one sees deliberate strategies to spread disinformation. The effects of this can range from inconvenience, to wasted time and resources, to being downright dangerous. Thus, a system to filter out such disinformation would be valuable for the media, first responders and others.

In Credibility Ranking of Tweets during High Impact Events (PSOSM 2012), Gupta and Kumaraguru attempted to identify features that distinguish credible information from opinion and spam. They built a supervised learning model using features of the tweets and their senders, and ran those through a regression model and a re-ranking scheme that they claim provides reasonable predictive ability.

To do this, they collected 35 million tweets from 6 million users, based on search terms from trending topics during the summer of 2011. These were high-impact events, meaning they produced at least 25k tweets and persisted as a trending topic for more than 48 hours. They then selected 14 trending topics (including the UK riots, Hurricane Irene, Steve Jobs's resignation and the bomb blast in Mumbai) and annotated the associated tweets. In short, a human had to decide whether each tweet was definitely credible / seems credible / definitely incredible / can't decide, and whether it contained information, was unrelated to the news event, or was spam. An example of the three classes is shown below.



They found that 30% of tweets provided information, 17% were credible and 14% were spam. They then proceeded to extract a large set of features from the tweets, the senders and their user profiles. These features included:

Tweet: number of words, number of characters, whether it was a retweet, whether it contained a URL, whether it contained emoticons, number of hashtags etc.

Sender: age, number of followers, number of friends, description length, whether it was a verified account etc.

They performed a logistic regression (1 = credible, 0 = not credible/spam/opinion) and found the following significant indicators (p < 0.001):
  • number of characters and unique characters present in tweet --- likely tweets with hashtags, @mentions and URLs contain more unique characters and so are more informative
  • presence of  swear words --- indicates opinion
  • inclusion of pronouns and presence of sad / happy emoticons --- fact based tweets are less personal
  • p<0.01: presence of URL --- often linked to images, video and resources related to the event.
They found that "low number of happy emoticons [:-), :)] and high number of sad emoticons [:-(, :(] act as strong predictors of credibility". Now, most of the high impact (arguably 13/14) events in their list are negative so one could imagine that for a general model that covered both positive and negative events, this feature would be use of emoticons: happy emoticons for happy events, sad emoticons for sad events. However, one could also argue that use of happy emoticons such as :-) could mean a joke, that the comment is not credible. We will see data from a second study below that finds the same results so it is more likely to be the latter: sad emoticons only are strongly linked to credible information.

They also used SVM and pseudo-relevance feedback (PRF) to improve results. They ranked documents using RankSVM, identified the top K tweets using a unigram model, and re-ranked them using the BM25 similarity metric. (This metric is a function of TF, IDF, K, tweet length, and some constants; see the paper for a full description.) To evaluate the results they used the Normalized Discounted Cumulative Gain (NDCG) metric. For the top 25 tweets, they achieved an NDCG of 0.37, a statistically significant (p < 0.05) improvement over non-ranked data. Unfortunately, there is scant further detail about their analysis. I have no idea as to the accuracy, precision, recall or F-scores of their classifier. Saying that an additional layer of analysis improved results is not as useful as detailing fully how good the initial results were.

Gupta and Kumaraguru are not the only ones working in this field. Castillo et al. (2011) published a more compelling paper entitled "Information Credibility on Twitter" with a more detailed statistical analysis. They too collected tweets from trending topics, this time over a two-month period. They used Mechanical Turk to annotate newsworthiness and credibility, and asked the evaluators to write a sentence justifying their assessment. They extracted a set of features similar to, but larger and more detailed than, the one above.

They tried a number of learning schemes including SVM, decision trees, decision rules and Bayesian networks. Results across these techniques were comparable, with the best coming from a J48 decision tree, which achieved an accuracy of 89%. For detecting newsworthy tweets they achieved an impressive F1 score of 0.924. For predicting credibility they also found that a J48 tree worked best, with the following significant features: presence of question marks (not credible), sentiment (positive: not credible; negative: credible) and presence of negative emoticons (credible). More active users are also more credible (I wonder if this is related to professional media organizations such as CNN, Reuters and local news organizations, whose business it is to be both credible and active). Together, these features produced a model with 86% accuracy. See the table below.


In both of these studies, the features make sense: active users, tweeting frequently, with lots of followers, linking to relevant material, and with a reasonably fleshed-out user profile. With 86% accuracy from a balanced training set, such features and the associated model could form the basis of a reasonably good filter for a Twitter stream, and in the case of emergency services could save valuable resources and possibly lives.

P.S. It is worth checking out the Truthy system at http://truthy.indiana.edu/

Friday, December 14, 2012

On the unreasonable effectiveness of data: why are more data better?

Update 11/30/14: I incorrectly named the authors of the "unreasonable effectiveness" study; now corrected.

In the paper "The unreasonable effectiveness of data" [Halevy et al, 2009], Halevy, Norvig and Pererira, all from Google, argue that interesting things happen when corpora get to web scale:
"simple models and a lot of data trump more elaborate models based on less data".
In that paper, and in the more detailed tech talk given by Norvig, they demonstrate that when corpora reach hundreds of millions or trillions of training samples or words, very simple models with basic independence assumptions can outperform more complex models, such as those based on carefully-crafted ontologies, trained on less data. However, they provide relatively little explanation as to why more data are better. In this post, I want to attempt to think that through.

I propose that there are several classes of problems and reasons why more data are better.

The first class is nearest-neighbor-type problems. Halevy et al. mention that "James Hays and Alexei A. Efros addressed the task of scene completion: removing an unwanted, unsightly automobile or ex-spouse from a photograph and filling in the background with pixels taken from a large corpus of other photos." [Hays and Efros, 2007]



Norvig presented the following schematic: 


and described it as a "data threshold" in which results went from really bad to really good.

I'm not convinced that there is any threshold or anything that resembles a phase transition. This seems to me to be a problem of finding the closest match. The more data, the closer the expected match.

Hays and Efros (2007) mention:
"Indeed, our initial experiments with the gist descriptor on a dataset of ten thousand images were very discouraging. However, increasing the image collection to two million yielded a qualitative leap in performance... Independently, Torralba et al. [2007] have observed a similar effect with a dataset of up to 70 million tiny (32x32) images...It takes a large amount of data for our method to succeed. We saw dramatic improvement when moving from ten thousand to two million images" [Hays and Efros, 2007]
There is a large difference in those corpus sizes and a "qualitative leap" is not the same as a threshold (sensu phase transition).

More data can dramatically affect such metrics through simple effects. For instance, consider a sample of size n from a standard normal distribution. How does the minimum of that sample vary with n? Let's create samples of different sizes and plot the minimum using the following R code:

x<-seq(1,7,0.5)
y<-vector(mode="numeric",length=length(x))
for (i in 1:length(x)){ y[i] <- min(rnorm(10^(x[i]))) }
plot(x,y,xlab="Sample size, n (log10 scale)",ylab="Minimum value of sample",type="b")


The minimum decreases log-linearly. This is a case of extrema from an unbounded tail. Perhaps more relevant here, for a minimization problem such as scene matching there is a lower bound: for all intents and purposes, a perfect match. For instance, perhaps someone else stood in the exact same tourist spot and took a picture of the exact same scene but without the obstructing car.

I think this is what is happening in Norvig's schematic. At a certain corpus size we've found a really good match and a larger corpus doesn't and cannot improve results.

In summary, for nearest neighbor type minimization problems with a non-negative distance function (meaning that the cost function has a lower bound of zero), that distance function will, on average, decrease monotonically with data or sample size. 
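
A quick simulation (mine, not Norvig's) illustrates the point: fix a random "query" point and watch the distance to its nearest neighbor in a random corpus shrink, on average, as the corpus grows.

import numpy

numpy.random.seed(1234)
query = numpy.random.randn(2)                    # a fixed 2-D "query" point

for n in [10, 100, 1000, 10000, 100000]:
    corpus = numpy.random.randn(n, 2)            # n random "documents"
    dists = numpy.sqrt(((corpus - query) ** 2).sum(axis=1))
    print("%d\t%.4f" % (n, dists.min()))         # nearest-neighbor distance trends towards zero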

The second class is counting or relative-frequency problems. These were the primary focus of Halevy et al., and Norvig presented a few examples. In segmentation, the task is to split a string such as "cheapdealsandstuff.com" into the most likely sequence of words. These strings are short enough to brute-force all possible partitions, but for each partition we have to assess the likelihood of that partition. The simplest assumption is independence among words. Thus, if Pr(w) is the probability of a word w given some corpus, we can compute, say,

Pr(che,apdeals,andstuff) = Pr(che) * Pr(apdeals) * Pr(andstuff).
...
Pr(cheap,deals,and,stuff) = Pr(cheap) * Pr(deals) * Pr(and) * Pr(stuff).

One can of course also use n-grams, e.g. for bigrams: Pr("cheap deals") * Pr("and stuff")
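
As a toy illustration of the unigram version, here is a brute-force sketch; the Pr table below is a made-up stand-in for relative frequencies estimated from a large corpus:

# Hypothetical unigram probabilities estimated from a corpus.
Pr = {'cheap': 0.001, 'deals': 0.0008, 'and': 0.03, 'stuff': 0.0005}
FLOOR = 1e-15  # tiny probability assigned to unseen "words" such as "apdeals"

def prob(word):
    return Pr.get(word, FLOOR)

def segment(s):
    """Return (probability, words) for the most likely split of s,
    assuming the words are independent."""
    if not s:
        return (1.0, [])
    best = (0.0, [])
    for i in range(1, len(s) + 1):
        head, tail = s[:i], s[i:]
        p_tail, words = segment(tail)
        p = prob(head) * p_tail
        if p > best[0]:
            best = (p, [head] + words)
    return best

print(segment("cheapdealsandstuff"))  # best split is ['cheap', 'deals', 'and', 'stuff']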

A second example that Norvig covered was spell checking. Here we can take a misspelled word and compute the likelihood of the possible variants to suggest the most likely form. 
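
Again as a toy illustration (not Norvig's actual code): generate candidate edits of the misspelled word and keep the one seen most often in the corpus. The WORD_COUNTS table is a made-up stand-in for real corpus counts:

# Hypothetical corpus counts; in practice these come from a very large corpus.
WORD_COUNTS = {'the': 25000, 'then': 3000, 'than': 2800, 'thee': 40}
ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    """All strings one delete, transpose, replace or insert away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the known candidate with the highest corpus count."""
    candidates = [w for w in edits1(word) | set([word]) if w in WORD_COUNTS] or [word]
    return max(candidates, key=lambda w: WORD_COUNTS.get(w, 0))

print(correct("thw"))  # -> the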

In both cases, we need a corpus that includes both common and uncommon phrases and we need counts of the occurrences of those phrases to compute relative frequency. The larger and more comprehensive the corpus, the better. I think that there are two statistical effects going on here:
  • the larger the corpus, the better the quality of the estimate of the relative frequency. This is the law of large numbers.
  • the larger the corpus, the more likely it is to include unusual phrases, i.e., the long tail. This is an unbounded effect: as more of the web is indexed, there will always be new, never-seen-before phrases. The problem is exacerbated by the fact that the distribution of words in the English language follows a power law (e.g., Zipf, G. The Psycho-Biology of Language. Houghton Mifflin, Boston, MA, 1935), which means that the tail is especially long and thus especially large samples are needed to capture those rare phrases.
A third class is estimating univariate distributions. I recently encountered an interesting example while attending a tech talk by Peter Skomoroch of LinkedIn. He showed a plot of the likelihood of a member having a particular software-related job title versus the number of months since graduation. What we see from the data is that the distributions for "Sr Software engineer" and "senior software engineer" are almost identical, as one would expect since they are synonyms, as are "CTO" and "chief technology officer". This suggests an interesting way of identifying synonyms and thus deduping the data, rather than maintaining a huge master list of acronyms and abbreviations. It is only possible because of the scale of data they have, where the distributions they can generate are reliable and presumably close to the true underlying population distributions.
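
To make the idea concrete, here is a sketch with made-up distributions: if two titles have near-identical distributions over time since graduation, flag them as candidate synonyms.

import numpy

def total_variation(p, q):
    """Half the L1 distance between two normalized distributions (0 = identical)."""
    p = numpy.asarray(p, dtype=float) / sum(p)
    q = numpy.asarray(q, dtype=float) / sum(q)
    return 0.5 * numpy.abs(p - q).sum()

# Made-up likelihoods of holding each title, binned by months since graduation.
senior_software_engineer = [0.01, 0.05, 0.15, 0.30, 0.49]
sr_software_engineer     = [0.01, 0.06, 0.14, 0.31, 0.48]
cto                      = [0.00, 0.01, 0.04, 0.20, 0.75]

print(total_variation(senior_software_engineer, sr_software_engineer))  # small: likely synonyms
print(total_variation(senior_software_engineer, cto))                   # much larger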


A fourth class is general multivariate or correlational problems, in which we are trying to estimate the relationship among variables. This could mean estimating a relationship y = f(x) or estimating the joint pdf of many variables. We might use this for word sense disambiguation (e.g., does the document refer to pike the fish or pike the pointed weapon?) or to build up a dossier of associated features or concepts about an entity (e.g., a company has an associated CEO, head office, tax ID and so on). Here we are interested in the correlations among words or phrases. The problem is that web documents are very high-dimensional, and for high-dimensional problems like these we are under the curse of dimensionality: data become very sparse. Thus, one effect of larger samples is to increase the density of data across the state space. Again, with larger samples we can estimate location metrics (the mean, median and other measures of the center of a distribution) more accurately. We can also estimate joint pdfs more accurately. Below is a simple example from the following R code:

library(MASS)  # provides mvrnorm
par(mfrow=c(1,2))
plot(mvrnorm(100, mu = c(0, 0), Sigma = matrix(c(1, .9, .9, 1), 2)),xlab="X",ylab="Y",ylim=c(-4,4))
title("n = 100")
plot(mvrnorm(10000, mu = c(0, 0), Sigma = matrix(c(1, .9, .9, 1), 2)),xlab="X",ylab="Y",ylim=c(-4,4))
title("n = 10000")



At left is a small sample: it could easily be interpreted as linear. At right, with a larger sample, the true underlying bivariate normal is more obvious. Clearly, this is a trivial example. The point is that for higher dimensions you will need much larger sample sizes to estimate the joint pdfs well.

This is clearly only a cursory answer as to why more data are better. Quality data are still preferred. However, for many organizations, such as Google, Twitter, LinkedIn and Facebook, where content is user-generated, often free-form text, and/or covers many domains (so that deep cleaning and the use of ontologies are infeasible), having very large datasets can compensate for the noise. It all evens out in the end, and in the case of nearest-neighbor problems, the solution will only get better.


Monday, December 3, 2012

What is a data scientist?


Over the last year, there has been a lot of discussion about what data science is and who data scientists are. There are those who think that a data scientist is simply a glorified, over-hyped term for a business intelligence analyst. On the other hand, there are those who equate it with big data. I think neither of those interpretations is correct, and here is my take.

I like to think of it in the following manner: imagine a triangle. In one corner are business intelligence (BI) analysts. In a second corner are data engineers and in the third corner are applied statisticians. 


Each corner represents an extreme form of those roles. For BI, it might represent someone who knows a little SQL and does everything else in Excel. There is nothing wrong with this at all (Excel can get you far), but my underlying point is that most BI analysts do much more than this; they protrude much further into the triangle. The extreme data engineer might be someone who owns some data processing system, is agnostic about the data flowing through it, and cares solely about gathering, throughput, scaling, peak loads, logging and so on. Again, most data engineers do much more than this. Finally, the extreme applied statistician might be someone involved in statistical modeling or algorithm development. For instance, they might study recommender systems with toy datasets, somewhat in a vacuum from real problems. Again, most statisticians have a broader role and set of skills than this.

While BI analysts sit mostly in their corner of the triangle, some overlap significantly with data engineers: they are involved in data gathering, in architectural decisions about data storage, processing, visualization and replication, and they may query with big data technologies such as pig, hive and so on. Some BI analysts also stray significantly towards the data scientist corner. For instance, they may be involved in some sort of predictive modeling, significance tests or experimental design, and may use tools such as R.

Data engineers may show some overlap with BI, perhaps implementing tools for data analysis, doing some analyses, and caring about the meaning and value of the data in their systems. They may also work closely with data scientists. For instance, they may take prototypes and algorithms implemented by the data scientists and refactor them, or completely reimplement them in a new language, so that they scale. They may understand the core algorithms sufficiently well to improve them for faster performance or better results.


Data scientists are a group of people who cover some central swathe of the triangle, the blue area (which is purely illustrative). Within that we might find specializations: the machine learners, the data miners, the recsys folks. For now, let's label them collectively as data scientists.

Some data scientists may stray towards BI and write retrospective analyses and, yes, may even use Excel. They may also work significantly with big data, using map reduce, pig, hive, kafka, fluentd etc., and be concerned with robust, scalable, high-throughput systems. They may also be concerned with algorithm development, predictive analytics and a whole lot more. No single person, however, covers all these areas; each will cover some proportion of the area. Thus, one data scientist may be represented by the orange blobs of the triangle while a different data scientist may be represented by the red blob.


This, I think, is the root of some of the problem here. Job ads for data scientists tend to cover the large blue area. They have requirements or nice-to-haves that span predictive analytics, recommenders, big data, A/B testing and coding. And certainly, when I interview data scientists, I will often touch upon all of these, especially if they make it to an onsite interview. However, I don't expect any single candidate to be great at, or have experience in, all of these areas. If they did, their experience would be so shallow as to be useless. The trick, then, is to define the area of the triangle you need to cover with your data science team and then hire staff who collectively, not necessarily individually, cover that area.

What then is a data scientist? My favorite definition comes from Josh Wills:
Data Scientists (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician.
While inadequate, this definition does capture the essence of the idea that there is a continuum of skills and experience. Data scientists are not at the extremes of the triangle; each person is a bit of this and a bit of that. It also captures the idea that a typical (non-extreme) data scientist must be able to code. It might be in R or Matlab, but it must be code and not solely equations. More preferable is a scripting language such as Ruby or Python that allows rapid prototyping and the ability to tap into libraries such as numpy, scipy and NLTK. If you can't code, you are unlikely to get far as a data scientist. Some describe data scientists' roles as finding the needle in the haystack or the gold in the hills. To do that, however, is going to require some code to automate processing or to learn the parameters of some model.

I think data science is more about building data products and new data-driven features and services. I don't mean shuffling existing data around as web services, but generating new data that drives a feature or product that, in turn, drives value to the organization. This might be an internal sales forecast, website personalization or a recommendation. In my role at onekingslane.com, some of my time involves image processing: generating entirely new data from product images, which can drive new site features such as faceted search or deeper BI analyses.

Data science can involve big data, but not necessarily. If you are Facebook, Twitter or LinkedIn, with a large and very active user base, then it is a necessity. But big data is not a necessary condition for data science. In my last post, we built a very respectable sentiment analyzer with less than a 100 KB training set. Quality data are preferred, but if one has big data to offer, we'll take it.

In summary, data scientists vary. They cover a lot of territory, certainly collectively but not necessarily individually. Some are more statistical, some are more analytical and into data visualization, and some are more computational. While they may overlap with BI analysts and data engineers, they are not the same; they are complementary, and they possess a certain novel mix of skills and talents that can provide significant value to their organizations.



Thursday, November 29, 2012

Introduction to Sentiment Analysis

In this post we are going to explore sentiment analysis using Python. Let us first define the problem: given a corpus of documents, we want to train a classifier to classify each document into one of several classes:

positive: e.g., "that movie was awesome"
negative: e.g., "that movie sucked"

We may also want a third category:

neutral: e.g., "I saw the movie at the Odeon".

In reality, we may have a fourth category, unknown, where we cannot tell. For instance, a document may be in a foreign language, consist only of stop words, or contain only words that our classifier does not understand, e.g., SMS acronyms ("cu l8r"). For simplicity, we can either leave these documents unclassified or categorize them as neutral.

To train the classifier we will need to choose a set of features, the facets of the data that the classifier will work on. The simplest scenario is to use each word in the document. (Later, we will discuss more complex features.)

This is a supervised learning problem, so we will need a labeled corpus of data: a set of documents, each of which has been labeled with one of our class names or IDs. One of the simplest formats is a CSV or TSV file:

pos     that movie was awesome
neu     I saw the movie at the Odeon
pos     I love this place
pos     I am happy
....

We will need a learner, an algorithm that will learn to classify. In most examples that I've seen online, people have used a naive Bayes classifier but there are many others that one could choose.

Importantly, sentiment analysis is very context sensitive. If you train a classifier on movie review data, it likely will not fare well classifying documents about election results or your startup's product. A classifier trained on US English tweets may or may not classify UK tweets well.

Finally, we cannot expect to assign sentiment correctly 100% of the time. Even humans can often disagree about the sentiment of documents. There are several reasons:
  • Cultural differences mentioned above. For example, "rocking" and "sick" are positive adjectives to members of some demographics and cultures but not others.
  • English is a hard and ambiguous language. For instance, consider Chomsky's example: "old men and women". Is that [old men] and [old women], or does it mean [old men] and [women]? Another example is Eats, Shoots & Leaves.
  • Sarcasm, innuendo, and double entendres. This is one of the current challenges in NLP. For a fun example take a look at the problem of detecting that's what she said.
  • The fact that people often use a bag-of-words model for sentiment analysis, at least as a first pass. That is, we analyze a document as a set of words rather than as phrases. Thus, we will miss that the "not" in "not good" negates "good". In general, we will miss double negatives and other qualifiers. I love this illustrative example:
During a lecture the Oxford linguistic philosopher J.L. Austin made the claim that although a double negative in English implies a positive meaning, there is no language in which a double positive implies a negative. To which Morgenbesser responded in a dismissive tone, "Yeah, yeah."
Now that we have the background out of the way, let's start building a concrete example.

First, let's grab a corpus. I'm taking the UMICH SI650 - Sentiment Classification training set from Kaggle.

This is a tab-delimited file with 7086 sentences tagged as 1 or 0. 

head training.txt
1     The Da Vinci Code book is just awesome.
1     this was the first clive cussler i've ever read, but even books like Relic, and Da Vinci code were more plausible than this.
1     i liked the Da Vinci Code a lot.
1     i liked the Da Vinci Code a lot.
1     I liked the Da Vinci Code but it ultimatly didn't seem to hold it's own.
1     that's not even an exaggeration ) and at midnight we went to Wal-Mart to buy the Da Vinci Code, which is amazing of course.
1     I loved the Da Vinci Code, but now I want something better and different!..
1     i thought da vinci code was great, same with kite runner.
1     The Da Vinci Code is actually a good movie...
1     I thought the Da Vinci Code was a pretty good book.

There are many duplicates. Let's remove them:

cat training.txt | sort | uniq > uniq_training.txt

How many positive and negative samples remain?

cat uniq_training.txt | grep ^1 | wc -l
772
cat uniq_training.txt | grep ^0 | wc -l
639

We need to extract features from a document. We'll take unique, lowercase words with more than two characters:

def extract_features(document):
   features={}
   for word in set(document.split()):
       if len(word) > 2:
          features['contains(%s)' % word.lower()] = True
   return features

>>> extract_features('" A couple of very liberal people I know thought Brokeback Mountain was " stupid exploitation.')
{'contains(very)': True, 'contains(people)': True, 'contains(couple)': True, 'contains(mountain)': True, 'contains(was)': True, 'contains(brokeback)': True, 'contains(liberal)': True, 'contains(exploitation.)': True, 'contains(know)': True, 'contains(thought)': True, 'contains(stupid)': True}

We need to read in our documents:

documents=[]
f = open("uniq_training.txt","r")
for document in f.readlines():
   parts= document.strip().split("\t")
   documents.append((parts[1],bool(int(parts[0]))))

>>> documents[0]
('" A couple of very liberal people I know thought Brokeback Mountain was " stupid exploitation.', True)

Now extract features from each document in our corpus and split into a training set (80%) and a test set (20%):

import random
random.seed(1234) #so that you can reproduce my results if you wish
random.shuffle(documents)
import nltk
n_train = int(0.8*len(documents))
training_set = nltk.classify.apply_features(extract_features,documents[:n_train])
test_set = nltk.classify.apply_features(extract_features,documents[n_train:])

>>> training_set[0]
({'contains(very)': True, 'contains(people)': True, 'contains(couple)': True, 'contains(mountain)': True, 'contains(was)': True, 'contains(brokeback)': True, 'contains(liberal)': True, 'contains(exploitation.)': True, 'contains(know)': True, 'contains(thought)': True, 'contains(stupid)': True}, True)

Finally, let's now train our classifier:

>>> classifier = nltk.NaiveBayesClassifier.train(training_set)

>>> nltk.classify.accuracy(classifier, test_set)
0.911660777385159

Whoah, 91% accuracy isn't bad at all given that we had a fairly balanced training set:

ct_pos=0
for d in training_set:
    if d[1]==True: ct_pos+=1

print ct_pos, len(training_set)-ct_pos
617 511

By that I mean that I would have been suspicious if we had had a very imbalanced training set where, say, 95% of samples were positive.

The proper way to assess the performance is to examine precision, recall and F-scores:


import collections
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = classifier.classify(feats)
    testsets[observed].add(i)


>>> print 'pos precision: %2.3f' % nltk.metrics.precision(refsets[True], testsets[True])
pos precision: 0.951
>>> print 'pos recall: %2.3f' % nltk.metrics.recall(refsets[True], testsets[True])
pos recall: 0.884
>>> print 'pos F-measure: %2.3f' % nltk.metrics.f_measure(refsets[True], testsets[True])
pos F-measure: 0.916
>>> print 'neg precision: %2.3f' % nltk.metrics.precision(refsets[False], testsets[False])
neg precision: 0.871
>>> print 'neg recall: %2.3f' % nltk.metrics.recall(refsets[False], testsets[False])
neg recall: 0.945
>>> print 'neg F-measure: %2.3f' % nltk.metrics.f_measure(refsets[False], testsets[False])
neg F-measure: 0.906

and now the confusion matrix:

observed=[]
actual=[]
for i, (feats, label) in enumerate(test_set):
    actual.append(label)
    observed.append(classifier.classify(feats)) 
print nltk.ConfusionMatrix(actual,observed)

      |   F     |
      |   a   T |
      |   l   r |
      |   s   u |
      |   e   e |
------+---------+
False |<121>  7 |
 True |  18<137>|
------+---------+
(row = reference; col = test)



These are all great numbers.

Now we can use it:

>>> classifier.classify(extract_features("that movie was awful"))
False
>>> classifier.classify(extract_features("that movie was great"))
True

>>> classifier.show_most_informative_features(16)
Most Informative Features
       contains(awesome) = True             True : False  =     37.8 : 1.0
        contains(sucked) = True            False : True   =     31.0 : 1.0
          contains(hate) = True            False : True   =     24.0 : 1.0
          contains(love) = True             True : False  =     17.8 : 1.0
         contains(heard) = True            False : True   =     10.1 : 1.0
         contains(kinda) = True            False : True   =      8.4 : 1.0
           contains(it,) = True            False : True   =      7.6 : 1.0
          contains(want) = True             True : False  =      6.9 : 1.0
          contains(evil) = True            False : True   =      5.6 : 1.0
         contains(those) = True            False : True   =      5.2 : 1.0
        contains(didn't) = True            False : True   =      5.2 : 1.0
           contains(has) = True            False : True   =      5.2 : 1.0
         contains(think) = True            False : True   =      4.7 : 1.0
          contains(miss) = True             True : False  =      4.7 : 1.0
         contains(watch) = True            False : True   =      4.6 : 1.0
         contains(liked) = True             True : False  =      4.5 : 1.0
>>> 

These terms make a lot of sense: awesome, suck, hate, love...

For a more detailed tutorial using NLTK, see the official docs. See also this post.

We trained our classifier to learn keywords that have particularly positive or negative associations (the most informative features). Once it has learned that list, it is easy to apply it to a new input document. There are lists of such scored words already available, such as AFINN and the lexicon used by pattern. With these, running sentiment analysis is trivial.
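
For example, with the AFINN word list (a tab-separated file of word/score pairs), scoring a document reduces to summing the scores of the words it contains. A minimal sketch, assuming a local copy of the AFINN-111.txt file:

def load_afinn(path="AFINN-111.txt"):
    """Load word -> integer score from the tab-separated AFINN file."""
    scores = {}
    for line in open(path):
        word, score = line.strip().split("\t")
        scores[word] = int(score)
    return scores

def afinn_sentiment(text, scores):
    """Sum the scores of the known words; > 0 is positive, < 0 is negative."""
    return sum(scores.get(word, 0) for word in text.lower().split())

scores = load_afinn()
print(afinn_sentiment("that movie was awesome", scores))  # > 0
print(afinn_sentiment("that movie was bad", scores))      # < 0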

In pattern, "the sentiment() function returns a (polarity, subjectivity)-tuple for the given sentence (based on the adjectives in it), with polarity between -1.0 and 1.0 and subjectivity between 0.0 and 1.0".

Let's fire up a sentiment analyzer and use it:

>>> from pattern.en import sentiment
>>> print sentiment("that movie was awesome")
(1.0, 1.0)
>>> print sentiment("that movie sucked")
(0, 0)
>>> print sentiment("that movie was ok")
(0.5, 0.5)
>>> print sentiment("that movie was fairly good")
(0.6, 0.8500000000000001)

That's all there is to it.

You can see that returning a continuous polarity value in (-1, 1) means that we could easily define neutral documents as those with some intermediate value, say in (-0.1, 0.1).
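
For example, a thin wrapper along these lines (the 0.1 cut-off is arbitrary):

from pattern.en import sentiment

def classify_polarity(text, cutoff=0.1):
    """Map pattern's continuous polarity onto three classes."""
    polarity, subjectivity = sentiment(text)
    if polarity > cutoff:
        return "positive"
    if polarity < -cutoff:
        return "negative"
    return "neutral"

print(classify_polarity("that movie was awesome"))        # positive
print(classify_polarity("I saw the movie at the Odeon"))  # neutral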

Let's return to the issue mentioned earlier, that this is just a bag-of-words model, and the problem of "not good" versus "good". One thing we can do is include bigrams in our features, that is, adjacent pairs of words:

>>> from nltk import bigrams
>>> bigrams("That movie was awful".split())
[('That', 'movie'), ('movie', 'was'), ('was', 'awful')]

Let's add bigrams to our extract features function:

def extract_features(document):
   features={}
   for word in set(document.split()):
       if len(word) > 2:
          features['contains(%s)' % word.lower()] = True
   for bigram in bigrams(document.split()):
       features['contains(%s)' % "_".join(i.lower() for i in bigram)] = True
   return features

Retraining our classifier with this, we get 93.6% accuracy and a more interesting feature list: I hate, I love, I think, love the...

>>> classifier.show_most_informative_features(16)

Most Informative Features
       contains(awesome) = True             True : False  =     37.8 : 1.0
        contains(sucked) = True            False : True   =     31.0 : 1.0
          contains(hate) = True            False : True   =     24.0 : 1.0
      contains(love_the) = True             True : False  =     22.4 : 1.0
        contains(i_hate) = True            False : True   =     22.2 : 1.0
        contains(i_love) = True             True : False  =     22.1 : 1.0
          contains(love) = True             True : False  =     17.8 : 1.0
      contains(code_was) = True             True : False  =     15.7 : 1.0
       contains(i_think) = True            False : True   =     14.9 : 1.0
         contains(3_was) = True             True : False  =     13.0 : 1.0
        contains(i_like) = True             True : False  =     11.9 : 1.0
    contains(like_harry) = True             True : False  =     10.2 : 1.0
         contains(heard) = True            False : True   =     10.1 : 1.0
       contains(i_heard) = True            False : True   =      9.3 : 1.0
   contains(the_mission) = True             True : False  =      8.6 : 1.0
         contains(kinda) = True            False : True   =      8.4 : 1.0

Hopefully you can see that you can get good sentiment results with just a few tens of lines of Python, if you have the right libraries.

Here is the source code: p-value.info github