Lab 4 - Pageviews

Professor Brian Keegan
Department of Information Science, CU Boulder
This notebook is copyrighted and made available under the Apache License v2.0.

This is the fourth of five lab notebooks that will explore how to analyze the structure of collaborations in Wikipedia data about users' revisions across multiple articles. This lab extends the methods from the previous two labs on analyzing a single article's revision histories and on analyzing the hyperlink networks around a single Wikipedia page. You do not need to be fluent in either to complete this lab, but there are many options for extending the analyses we do here with more advanced queries and scripting methods.

Acknowledgements
I'd like to thank the Wikimedia Foundation for the PAWS system and related Wikitech infrastructure that this workbook runs within. Yuvi Panda, Aaron Halfaker, Jonathan Morgan, and Dario Taraborelli have all provided crucial support and feedback.

Confirm that basic Python commands work

a = 3
b = 4
c = 5
(c-a)**b
16

Import modules and setup environment

Load up all the libraries we'll need to connect to the database, retrieve information for analysis, and visualize results.

# Makes the plots appear within the notebook
%matplotlib inline

# Two fundamental packages for doing data manipulation
import numpy as np                   # http://www.numpy.org/
import pandas as pd                  # http://pandas.pydata.org/

# Two related packages for plotting data
import matplotlib.pyplot as plt      # http://matplotlib.org/
import seaborn as sb                 # https://stanford.edu/~mwaskom/software/seaborn/

# Package for requesting data via the web and parsing resulting JSON
import requests                      # http://docs.python-requests.org/en/master/
import json                          # https://docs.python.org/3/library/json.html
from bs4 import BeautifulSoup        # https://www.crummy.com/software/BeautifulSoup/bs4/doc/

# Two packages for accessing the MySQL server
import pymysql                       # http://pymysql.readthedocs.io/en/latest/
import os                            # https://docs.python.org/3.4/library/os.html

# Set up the environment: plots on a white grid background, and DataFrames showing more columns and rows
sb.set_style('whitegrid')
pd.options.display.max_columns = 100
pd.options.display.max_rows = 110

Define an article to examine pageview dynamics.

page_title = 'Cyclone Pam'

Get pageview data for a single article

Details about the Wikimedia REST API for pageviews are available at https://wikimedia.org/api/rest_v1/. Unfortunately, this data endpoint only provides information going back to July 1, 2015.

Here is an example of constructing a request URL for this API.

# Get today's date and yesterday's date
today = pd.Timestamp.today()
yesterday = today - pd.Timedelta('1 day')

# Date strings in YYYY-MM-DD format
today_date_s = str(today.date())
yesterday_date_s = str(yesterday.date())

# Date strings in the YYYYMMDDHH format the API expects
today_s = today.strftime('%Y%m%d00')
yesterday_s = yesterday.strftime('%Y%m%d00')

# Construct the URL for pageviews between yesterday and today
url_string = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/{0}/daily/{1}/{2}'
print(url_string.format(page_title.replace(' ','_'),yesterday_s,today_s))
https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/Cyclone_Pam/daily/2016103000/2016103100
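
Before generalizing this into a function, it helps to look at the shape of the JSON the API returns. Here is a minimal sketch of inspecting one response; the 'items' field and its keys follow the pageviews API response format, but inspect the parsed payload yourself to confirm.

# Request the URL and parse the JSON payload in the response
req = requests.get(url_string.format(page_title.replace(' ','_'),yesterday_s,today_s))
response = json.loads(req.text)

# A successful response contains an 'items' list with one record per day
print(response.keys())
print(response['items'][0])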

Write a function to get the pageviews from January 1, 2015 until yesterday. (In practice, the earliest date with data will be as late as August 2015 or as early as May 2015.)

def get_daily_pageviews(page_title,today_s):
    # Replace spaces with underscores so the title is safe to put in the URL
    page_title = page_title.replace(' ','_')
    url_string = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/{0}/daily/2015010100/{1}'
    req = requests.get(url_string.format(page_title,today_s))

    json_s = json.loads(req.text)
    # The API returns an 'items' list only when it has data for the page;
    # otherwise this function implicitly returns None
    if 'items' in json_s:
        _df = pd.DataFrame(json_s['items'])[['timestamp','views','article']]
        _df['timestamp'] = pd.to_datetime(_df['timestamp'],format='%Y%m%d00')
        _df['weekday'] = _df['timestamp'].apply(lambda x:x.weekday())
        return _df

Get the data for your page.

pageview_df = get_daily_pageviews(page_title,today_s)
pageview_df.head()
timestamp views article weekday
0 2015-07-01 267 Cyclone_Pam 2
1 2015-07-02 271 Cyclone_Pam 3
2 2015-07-03 164 Cyclone_Pam 4
3 2015-07-04 159 Cyclone_Pam 5
4 2015-07-05 135 Cyclone_Pam 6
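
Before plotting, a quick numeric summary gives a sense of the scale and skew of the daily counts. A minimal sketch using pandas's built-in describe:

# Summary statistics (count, mean, std, quartiles) for daily pageviews
pageview_df['views'].describe()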

Interpret pageview results

What does the pageview activity look like? Are there any bursts of attention? What might these bursts be linked to?

ax = pageview_df.plot.line(x='timestamp',y='views',logy=False,legend=False)
ax.set_xlabel('')
ax.set_ylabel('Pageviews')

Use a logarithmic scaling for the y-axis to see more of the detail in the lower-traffic days.

ax = pageview_df.plot.line(x='timestamp',y='views',logy=True,legend=False)
ax.set_xlabel('')
ax.set_ylabel('Pageviews')

What are the dates for the biggest pageview outliers? Here we define an "outlier" to be more than 4 standard deviations above the average number of pageviews over the time window.

std_threshold = 4
threshold_val = pageview_df['views'].mean() + pageview_df['views'].std() * std_threshold
peak_days = pageview_df[pageview_df['views'] > threshold_val]

peak_days.head(10)
timestamp views article weekday
115 2015-10-24 532 Cyclone_Pam 5
234 2016-02-20 778 Cyclone_Pam 5
235 2016-02-21 641 Cyclone_Pam 6
236 2016-02-22 1015 Cyclone_Pam 0
237 2016-02-23 589 Cyclone_Pam 1
399 2016-08-03 610 Cyclone_Pam 2

How much of the total pageview activity occurred on these peak days?

peak_fraction = pageview_df.loc[peak_days.index,'views'].sum()/pageview_df['views'].sum()

print('{0:.1%} of all pageviews occurred on the {1} peak days.'.format(peak_fraction,len(peak_days)))
5.1% of all pageviews occurred on the 6 peak days.
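
To connect the peaks back to the time series, we can replot the daily pageviews and mark the peak days. A minimal sketch reusing the peak_days DataFrame from above; if the vertical lines don't align with the date axis, convert the timestamps with matplotlib.dates.date2num first.

# Replot the series and draw a vertical line on each peak day
ax = pageview_df.plot.line(x='timestamp',y='views',logy=True,legend=False)
for ts in peak_days['timestamp']:
    ax.axvline(ts,color='red',alpha=.5)
ax.set_xlabel('')
ax.set_ylabel('Pageviews')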

How does pageview activity change over the course of a week?

g = sb.factorplot(x='weekday',y='views',data=pageview_df,kind='bar',color='grey',
                  aspect=1.67,estimator=np.median)
ax = g.axes[0][0]
ax.set_xticklabels(['Mon','Tue','Wed','Thu','Fri','Sat','Sun'],rotation=0)
ax.set_xlabel('')
ax.set_ylabel('Median pageviews')
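
The same weekday comparison can be computed directly as a table, without seaborn. A minimal sketch; weekday 0 is Monday:

# Median daily pageviews grouped by day of the week (0 = Monday)
pageview_df.groupby('weekday')['views'].median()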

Compare pageviews to another page

Let's write a function that takes a list of article names and returns a DataFrame indexed by date, with one column per article and daily pageview counts as values.

def get_multiple_pageviews(page_list,today_s):
    multiple_pv_df = pd.DataFrame(index=pd.date_range('2015-05-01', today_date_s))
    for page in page_list:
        pv_df = get_daily_pageviews(page,today_s)
        # get_daily_pageviews returns None when the API has no data for the page
        try:
            multiple_pv_df[page] = pv_df.set_index('timestamp')['views'] 
        except (AttributeError,TypeError,KeyError):
            print("Error on: {0}".format(page))
            multiple_pv_df[page] = np.nan
    return multiple_pv_df.dropna(how='all')

Enter two related pages whose pageview behavior you want to compare.

page_list = ['Cyclone Pam','Vanuatu']

Get both of their data.

# Get the data
multiple_pvs = get_multiple_pageviews(page_list,today_s)

# Show the last rows
multiple_pvs.tail()

Check when each article's pageviews peaked, then plot the data.

multiple_pvs.idxmax()
Cyclone Pam   2016-02-22
Vanuatu       2016-04-03
dtype: datetime64[ns]
multiple_pvs.plot(logy=True)

What is the correlation coefficient between these two articles' behavior?

multiple_pvs.apply(np.log).corr()
             Cyclone Pam   Vanuatu
Cyclone Pam     1.000000  0.268096
Vanuatu         0.268096  1.000000
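
A single coefficient can hide how the relationship changes over time. Here is a minimal sketch of a rolling correlation between the two logged series; the 30-day window is an arbitrary choice.

# 30-day rolling correlation between the two logged pageview series
logged_pvs = multiple_pvs.apply(np.log)
rolling_corr = logged_pvs[page_list[0]].rolling(30).corr(logged_pvs[page_list[1]])
ax = rolling_corr.plot()
ax.set_ylabel('30-day rolling correlation')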

How did the ratio between the two articles' pageviews change over time?

ratio_s = multiple_pvs[page_list[0]].div(multiple_pvs[page_list[1]])
ax = ratio_s.plot()
ax.set_ylabel('{0}/{1}'.format(page_list[0],page_list[1]))

Use the functions for resolving redirects and getting page outlinks from prior labs.

# From http://stackoverflow.com/a/312464/1574687
def make_chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]
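
# A quick sanity check of the chunking helper on a toy list:
#   list(make_chunks(['a','b','c','d','e'], 2))
#   returns [['a', 'b'], ['c', 'd'], ['e']]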

def resolve_redirects(page_title_list):
    # Chunk the pages into a list of lists of size 50
    chunks = make_chunks(page_title_list,50)
    # Create an empty list to fill with the redirected titles
    redirected_page_titles = []
    # For each chunk try to get the redirects
    for chunk in chunks:
        # Create the query string that separates spaces within page titles by '+' 
        # and separates page titles by '|'
        page_titles_string = '|'.join([page.replace(' ','+') for page in chunk])
        # Put this large string into the URL
        url_string = 'https://en.wikipedia.org/w/api.php?action=query&format=json&prop=pageprops&titles={0}&redirects=1'.format(page_titles_string)
        # Do the query and parse the JSON response into a dictionary
        req = json.loads(requests.get(url_string).text)
        # Convert the returned values containing redirects into a dictionary
        if 'redirects' in req['query'].keys():
            redirected_titles = {d['from']:d['to'] for d in req['query']['redirects']}
            # Add the redirected titles to the list
            for title in chunk:
                try:
                    #print(len(redirected_page_titles), title, redirected_titles[title])
                    redirected_page_titles.append(redirected_titles[title])
                # If they don't have a redirect just add the original title
                except KeyError:
                    #print(len(redirected_page_titles), '\nFrom: ', title, '\nTo: ', title)
                    redirected_page_titles.append(title)
        else:
            for title in chunk:
                redirected_page_titles.append(title)
    # Make sure the number of page titles remained the same, otherwise raise a warning
    if len(page_title_list) == len(redirected_page_titles):
        return redirected_page_titles
    else:
        print("WARNING! The number of page titles in the redirected list ({0}) is not equal to the input list ({1})".format(len(redirected_page_titles),len(page_title_list)))
        return redirected_page_titles

def get_page_outlinks(page_title,redirects=1):
    # Replace spaces with underscores
    #page_title = page_title.replace(' ','_')
    
    bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:']
    
    # Get the response from the API for a query
    # After passing a page title, the API returns the HTML markup of the current article version within a JSON payload
    req = requests.get('https://en.wikipedia.org/w/api.php?action=parse&format=json&page={0}&redirects={1}&prop=text&disableeditsection=1&disabletoc=1'.format(page_title,redirects))
    
    # Read the response into JSON to parse and extract the HTML
    json_string = json.loads(req.text)
    
    # Initialize an empty list to store the links
    outlinks_list = [] 
    
    if 'parse' in json_string.keys():
        page_html = json_string['parse']['text']['*']

        # Parse the HTML into Beautiful Soup
        soup = BeautifulSoup(page_html,'lxml')

        # Delete tags associated with templates
        for tag in soup.find_all('tr'):
            tag.replace_with('')

        # For each paragraph tag, extract the titles within the links
        for para in soup.find_all('p'):
            for link in para.find_all('a'):
                if link.has_attr('title'):
                    title = link['title']
                    # Ignore links that aren't interesting
                    if all(bad not in title for bad in bad_titles):
                        outlinks_list.append(title)

        # For each unordered list, extract the titles within the child links
        for unordered_list in soup.find_all('ul'):
            for item in unordered_list.find_all('li'):
                for link in item.find_all('a'):
                    if link.has_attr('title'):
                        title = link['title']
                        # Ignore links that aren't interesting
                        if all(bad not in title for bad in bad_titles):
                            outlinks_list.append(title)

    return outlinks_list

Get the outlinks.

raw_outlinks = get_page_outlinks(page_title)
redirected_outlinks = resolve_redirects(raw_outlinks)

Get the data.

This stage may take several minutes.

# Get the data
hl_pvs_df = get_multiple_pageviews(redirected_outlinks + [page_title],today_s)

# Show the top rows
hl_pvs_df.head()
Error on: Tongoa (page does not exist)
Error on: Tutukaka (page does not exist)
Error on: CASA/IPTN CN-235
[Output preview truncated: a DataFrame of 5 rows × 131 columns, indexed by date starting 2015-07-01, with one column of daily pageview counts per linked article (NaN where the API had no data)]

What are the most-viewed articles in the hyperlink network?

most_viewed_articles = hl_pvs_df.cumsum().loc[str(yesterday.date())]
most_viewed_articles = most_viewed_articles.sort_values(ascending=False)
most_viewed_articles.head(10)
India                   16605630.0
United Kingdom          13671336.0
European Union           8634262.0
New Zealand              6767775.0
Taiwan                   5183230.0
United Nations           5002039.0
Fiji                     2875236.0
Atmospheric pressure     2522997.0
Climate change           1921545.0
Pascal (unit)            1770579.0
Name: 2016-10-30 00:00:00, dtype: float64
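
Since we only need the totals, summing each column directly gives essentially the same ranking without building the cumulative sum first:

# Total pageviews per article over the whole window
hl_pvs_df.sum().sort_values(ascending=False).head(10)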

Most and least correlated articles

Which articles are most correlated with each other?

# Log the pageview data to reduce skew from bursty outliers and make the correlation table
hl_corr_df = hl_pvs_df.apply(np.log).corr()

# Correlation table is symmetric, drop one half of them
# From: http://stackoverflow.com/questions/34417685/melt-the-upper-triangular-matrix-of-a-pandas-dataframe
hl_corr_df = hl_corr_df.where(np.triu(np.ones(hl_corr_df.shape)).astype(bool))

# Unstack the DataFrame into a series and sort
hl_corr_s = hl_corr_df.unstack().sort_values(ascending=False)

# Drop NaNs
hl_corr_s = hl_corr_s.dropna()

# Drop values equal to 1
hl_corr_s = hl_corr_s[hl_corr_s < 1]

List out the 10 most-correlated article pairs.

hl_corr_s.head(10)
Microwave                                   Pascal (unit)       0.902712
Eye (cyclone)                               Tropical cyclone    0.895402
                                            Storm surge         0.878683
Pascal (unit)                               Bar (unit)          0.868820
Storm surge                                 Tropical cyclone    0.867929
Atmospheric circulation                     Pascal (unit)       0.851969
European Union                              United Kingdom      0.847049
List of the most intense tropical cyclones  Eye (cyclone)       0.827625
                                            Tropical cyclone    0.819027
Inch of mercury                             Bar (unit)          0.816683
dtype: float64

Inspect this correlation from the raw data.

_df = hl_pvs_df[list(hl_corr_s.index[0])]

ax = _df.plot(logy=True)

Look at the 10 least-correlated article pairs.

hl_corr_s.tail(10)
French frigate Vendémiaire            International Federation of Red Cross and Red Crescent Societies   -0.238867
                                      India                                                              -0.243772
Saffir–Simpson scale                  New Zealand                                                        -0.252657
Wellington                            Saffir–Simpson scale                                               -0.253067
Kiribati                              Cyclone Fantala                                                    -0.254038
1991–92 South Pacific cyclone season  Water scarcity                                                     -0.262684
Cyclone Winston                       World Conference on Disaster Risk Reduction                        -0.283681
Lifou                                 Saffir–Simpson scale                                               -0.295198
French frigate Vendémiaire            World Conference on Disaster Risk Reduction                        -0.304866
Guadalcanal                           Saffir–Simpson scale                                               -0.315300
dtype: float64

Plot the pageview time series for the two most anti-correlated articles. These show some wacky properties that are interesting to explore or think more about.

_df = hl_pvs_df[list(hl_corr_s.index[-1])]

ax = _df.plot(logy=True)

Is there a relationship between the position of the link on the page and the correlation between the linked article's pageviews and the seed article's pageviews? For instance, links closer to the top of the page might reflect more important topics while links towards the end of the page might be less relevant.

link_corrs = []

for num,link in enumerate(redirected_outlinks):
    try:
        link_corrs.append({'position':num,'title':link,'corr':hl_corr_s.loc[(page_title,link)]})
    except KeyError:
        print("Error on: {0}".format(link))
Error on: Tongoa (page does not exist)
Error on: Tutukaka (page does not exist)
Error on: CASA/IPTN CN-235

Plot the results.

ax = pd.DataFrame(link_corrs).plot.scatter(x='position',y='corr')
ax.set_xlim((0,len(link_corrs)))
ax.set_ylim((-1,1))
ax.set_xlabel('Link position')
ax.set_ylabel('Correlation')
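
To put a number on this scatterplot, we can compute a rank correlation between link position and pageview correlation. A minimal sketch; Spearman is used here since position is an ordinal quantity:

# Rank correlation between a link's position and its pageview correlation
link_corr_df = pd.DataFrame(link_corrs)
link_corr_df['position'].corr(link_corr_df['corr'],method='spearman')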

Get page revisions

In this section, we'll repurpose and adapt code from the last lab to get data about page revisions. Rather than looking at the number of times a user contributed to a given article, we'll simply count the number of times the article was edited on a given date.

def get_page_edits_by_date(page_title,conn,date_string='2014-12-31'):
    """ Takes a page title and returns the number of revisions made on each date.
      page_title = a string for the page title to get its revisions
      date_string = a string for the date in YYYY-MM-DD format
      conn = a database connection
      
    Returns:
      A DataFrame with username, page title, edit count, and min/max timestamps
    """
    # In case you pass a page title with spaces in it, replace the spaces with underscores
    # The encode/decode dance coerces the string so its bytes match the database's latin1-declared columns
    page_title = page_title.replace(' ','_').encode('utf8').decode('latin1')
    
    # The MySQL query string used to retrieve the data. By line, it is
    ## converting the timestamp to a date and 
    ## counting the number of elements
    ## from the "revisions" table
    ## joining the "page" table on it
    ## using the page_id and rev_page columns as keys
    ## limiting the results to entries that match the page title, 
    ## are in namespace 0 (articles), and happen after the cutoff date
    ## grouping the results by date
    s = """
            SELECT
                DATE(rev_timestamp) as date,
                page_title,
                COUNT(*) as edits
            FROM 
                revision 
            JOIN 
                page ON page.page_id = revision.rev_page
            WHERE 
                page.page_title = "{0}" 
                AND page_namespace = 0
                AND DATE(rev_timestamp) > '{1}'
            GROUP BY
                date
        """.format(page_title,date_string)

    # Use the connection to run the query and return the results as a DataFrame
    _df = pd.read_sql_query(s,conn)
    
    _df['page_title'] = _df['page_title'].str.decode('utf8')
    _df['page_title'] = _df['page_title'].str.replace('_',' ')
    
    # Return the data, with a clean index
    return _df

def get_neighbors_revisions(page_title,conn):
    """ Takes a page title and returns revisions for the page and its neighbors.
      page_title = a string for the page title to get its revisions
      
    Returns:
      A pandas DataFrame containing all the page revisions.
    """
    # Get the outlinks from the page and include the page itself in the list
    alters = get_page_outlinks(page_title) + [page_title]
    # Resolve the redirects in the list of alters
    alters = list(set(resolve_redirects(alters)))
    # Create an empty container to hold the DataFrames
    df_list = []
    # For each page, get the revision counts and append to the df_list
    for alter in alters:
        _df = get_page_edits_by_date(alter,conn)
        df_list.append(_df)
    # Concatenate the list of revision count DataFrames into a giant DataFrame
    df = pd.concat(df_list)
    # Return the data
    return df.reset_index(drop=True)

Get the authentication information and connect to the database.

host, user, password = os.environ['MYSQL_HOST'], os.environ['MYSQL_USERNAME'], os.environ['MYSQL_PASSWORD']
conn = pymysql.connect(host=host,user=user,password=password,database='enwiki_p',connect_timeout=3600)
conn.cursor().execute('use enwiki_p');

Get the number of revisions per day for all the articles.

hl_daily_rev_df = get_neighbors_revisions(page_title,conn)
hl_daily_rev_df.head()
date page_title edits
0 2015-01-05 Solomon Islands 2.0
1 2015-01-18 Solomon Islands 1.0
2 2015-01-27 Solomon Islands 1.0
3 2015-01-29 Solomon Islands 1.0
4 2015-02-08 Solomon Islands 1.0

Reindex the edit data so it starts and ends on the same dates as the pageview data.

# Convert into a format like the hl_pageviews DataFrame
# Index are dates between Jan 1, 2015 and today; columns are article titles; values are number of edits
hl_edits_df = hl_daily_rev_df.set_index(['date','page_title'])['edits'].unstack(1)

# Reindex so dates are continuous
pv_start_ix = str(hl_pvs_df.index.min().date())
_date_range = pd.date_range(pv_start_ix,yesterday_date_s)
hl_edits_df = hl_edits_df.reindex(index=_date_range)

# Fill in empty observations with 0s
hl_edits_df = hl_edits_df.fillna(0)

hl_edits_df.head()
[Output preview truncated: a DataFrame of 5 rows × 129 columns, indexed by date starting 2015-07-01, with one column of daily edit counts per article (mostly 0s)]
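
As with the pageviews, we can rank the articles by total editing activity over the window:

# Total edits per article over the whole window
hl_edits_df.sum().sort_values(ascending=False).head(10)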

Are pageviews and edits correlated with each other?

_s1 = hl_pvs_df[page_title]
_s2 = hl_edits_df[page_title]

np.corrcoef(_s1.apply(np.log),_s2)[0][1]
0.21487166945773523
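
Since both series are heavy-tailed, a rank correlation is a useful robustness check that avoids the log transform entirely. A minimal sketch using pandas's Series.corr:

# Spearman rank correlation between daily pageviews and edits
_s1.corr(_s2,method='spearman')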
single_pv_edits_df = pd.DataFrame({'pageviews':_s1,'edits':_s2})
ax = single_pv_edits_df.plot(secondary_y='edits',logy=True)
ax.right_ax.set_yscale('log')
ax.set_ylabel('Pageviews')
ax.right_ax.set_ylabel('Edits')

Can Wikipedia supply information to keep up with demand?

Plot the ratio of cumulative pageviews to cumulative edits over time.

ax = (_s1.cumsum()/_s2.cumsum()).plot()

ax.set_ylabel('Cumulative pageviews per edit')
def zscore(series):
    # Absolute z-score: distance from the series mean in units of standard deviations
    return np.abs((series - series.mean())/series.std())

Look at the normalized (z-score) excitation and relaxation in edits and pageviews by day. Each point is a single day in the article's history and they're connected if they come one day after each other. Values along the diagonal in red suggest that increases in attention to the article are matched by similar increases in editing activity on the article. Alternatively, data points in the upper-left triangle suggest increases in pageviews are not matched by increases in edits while data points in the lower-right triangle suggest increases in edits are not matched by increases in pageviews.

f,ax = plt.subplots(1,1)

ax.set_xlabel('Edits (z-score)')
ax.set_ylabel('Pageviews (z-score)')
ax.set_xlim((1e-3,1e2))
ax.set_ylim((1e-3,1e2))
ax.set_xscale('log')
ax.set_yscale('log')

ax.text(1e-1,1e1,'More views than edits',ha='center',weight='bold')
ax.text(1e1,1e-1,'More edits than views',ha='center',weight='bold')

# Red diagonal: days where changes in attention match changes in editing
ax.plot([1e-3,1e2],[1e-3,1e2],c='r')

_s1 = zscore(hl_edits_df[page_title])
_s2 = zscore(hl_pvs_df[page_title])
ax.plot(_s1,_s2,'o-',c='grey');