Lab 2 - Hyperlink Networks

Professor Brian Keegan
Department of Information Science, CU Boulder
This notebook is copyrighted and made available under the Apache License v2.0.

This is the second of five lab notebooks that will explore how to do some introductory data extraction and analysis from Wikipedia data. This lab extends the methods from the prior lab, which analyzed a single article's revision history, and uses network science methods to analyze the network of hyperlinks around a single article. You do not need to be fluent in Python or network analysis to complete the lab, but there are many options for extending the analyses we do here by using more advanced queries and scripting methods.

Acknowledgements
I'd like to thank the Wikimedia Foundation for the PAWS system and related Wikitech infrastructure that this workbook runs within. Yuvi Panda, Aaron Halfaker, Jonathan Morgan, and Dario Taraborelli have all provided crucial support and feedback.

Confirm that basic Python commands work

a = 3
b = 4
a**b
81

Import modules and setup environment

Load up all the libraries we'll need to connect to the database, retrieve information for analysis, and visualize results.

# Makes the plots appear within the notebook
%matplotlib inline

# Two fundamental packages for doing data manipulation
import numpy as np                   # http://www.numpy.org/
import pandas as pd                  # http://pandas.pydata.org/

# Two related packages for plotting data
import matplotlib.pyplot as plt      # http://matplotlib.org/
import seaborn as sb                 # https://stanford.edu/~mwaskom/software/seaborn/

# Package for requesting data via the web and parsing resulting JSON
import requests
import json
from bs4 import BeautifulSoup

# Two packages for accessing the MySQL server
import pymysql                       # http://pymysql.readthedocs.io/en/latest/
import os                            # https://docs.python.org/3.4/library/os.html

# Packages for analyzing complex networks
import networkx as nx                # https://networkx.github.io/
import igraph as ig

# Setup the code environment to use plots with a white background and DataFrames show more columns and rows
sb.set_style('whitegrid')
pd.options.display.max_columns = 100
pd.options.display.max_rows = 110

Define the name of the article you want to use for the rest of the lab.

page_title = 'Barack Obama'

Network 1 - Hyperlinks

The first part of this lab will examine the hyperlinks among Wikipedia articles.

Retrieve the content of the page via API

Write a function that takes an article title and returns the list of links in the body of the article. Note that the reason we don't use the "pagelinks" table in MySQL or the "links" parameter in the API is that these include links within templates. Articles with templates link to each other, forming over-dense clusters in the resulting networks. We only want the links appearing in the body of the text.

We pass a request to the API, which returns a JSON-formatted string containing the HTML of the page. We use BeautifulSoup to parse through the HTML tree and extract the non-template links and return them as a list.
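
If you're curious, you can peek at the raw payload before any parsing. This is a minimal sketch using the same endpoint and parameters as the function below; the article HTML lives under the 'parse' → 'text' → '*' keys.

# Quick peek at the raw API response for the example page (not needed for the rest of the lab)
params = {'action':'parse','format':'json','page':page_title,'redirects':1,
          'prop':'text','disableeditsection':1,'disabletoc':1}
req = requests.get('https://en.wikipedia.org/w/api.php',params=params)
json_string = json.loads(req.text)

print(json_string['parse'].keys())             # e.g., 'title', 'pageid', 'text', ...
print(json_string['parse']['text']['*'][:200]) # first 200 characters of the article HTML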

def get_page_outlinks(page_title,redirects=1):
    # Replace spaces with underscores
    #page_title = page_title.replace(' ','_')
    
    bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:','File:']
    
    # Get the response from the API for a query
    # After passing a page title, the API returns the HTML markup of the current article version within a JSON payload
    req = requests.get('https://en.wikipedia.org/w/api.php?action=parse&format=json&page={0}&redirects={1}&prop=text&disableeditsection=1&disabletoc=1'.format(page_title,redirects))
    
    # Read the response into JSON to parse and extract the HTML
    json_string = json.loads(req.text)
    
    # Initialize an empty list to store the links
    outlinks_list = [] 
    
    if 'parse' in json_string.keys():
        page_html = json_string['parse']['text']['*']

        # Parse the HTML into Beautiful Soup
        soup = BeautifulSoup(page_html,'lxml')

        # Delete tags associated with templates
        for tag in soup.find_all('tr'):
            tag.replace_with('')

        # For each paragraph tag, extract the titles within the links
        for para in soup.find_all('p'):
            for link in para.find_all('a'):
                if link.has_attr('title'):
                    title = link['title']
                    # Ignore links that aren't interesting
                    if all(bad not in title for bad in bad_titles):
                        outlinks_list.append(title)

        # For each unordered list, extract the titles within the child links
        for unordered_list in soup.find_all('ul'):
            for item in unordered_list.find_all('li'):
                for link in item.find_all('a'):
                    if link.has_attr('title'):
                        title = link['title']
                        # Ignore links that aren't interesting
                        if all(bad not in title for bad in bad_titles):
                            outlinks_list.append(title)

    return outlinks_list

Run an example article, showing the first 10 articles.

page_outlinks = get_page_outlinks(page_title)
page_outlinks[:10]
['American English',
 'Listen',
 'President of the United States',
 'African American',
 'Contiguous United States',
 'Honolulu',
 'Hawaii',
 'Columbia University',
 'Harvard Law School',
 'Harvard Law Review']
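
The same title can appear more than once if an article is linked repeatedly in the body, so it's worth checking the total versus unique link counts:

# Total links extracted from the body vs. unique article titles
print(len(page_outlinks), len(set(page_outlinks)))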

You could write a recursive function like recursively_get_hyperlink_network that would crawl the hyperlink network out to an arbitrary distance, but this becomes exorbitantly expensive at any depth greater than 1.

Here's an example function, but it is not executable, to prevent you from harming yourself. :)

def recursively_get_hyperlink_network(seed_page,depth):
    neighbors = {}
    if depth < 0:
        return neighbors
    neighbors[seed_page] = get_page_outlinks(seed_page)
    for neighbor in neighbors[seed_page]:
        neighbors[neighbor] = recursively_get_hyperlink_network(neighbor,depth-1)
    return neighbors

Instead, define a simple function to get the 1.5-step ego hyperlink network. The "ego" is the seed page you start from, and the "alters" are the neighbors that the ego links out to. We also get the alters of the alters (2nd-order alters), but only keep these 2nd-order connections if they link back to the seed or to the 1st-order alters. In other words, the 1.5-step ego hyperlink network consists of all the pages linked from the seed page plus the connections among this set of articles.

def get_hyperlink_alters(seed_page):
    # Initialize an empty dictionary to act as an adjacency "list"
    neighbors = {}
    
    # Get all the alters for the seed page and store them in the adjacency dictionary
    neighbors[seed_page] = get_page_outlinks(seed_page,1)
    
    # For each of the alters, get their alters and store in the adjacency dictionary
    for neighbor in list(set(neighbors[seed_page])): # Don't recrawl duplicates
        neighbors[neighbor] = get_page_outlinks(neighbor,0)
    
    # Initialize an empty graph that we will add nodes and edges into
    g = nx.DiGraph()
    
    # For each entry in the adjacency dictionary, check if the alter's alters are also the seed page's alters
    # If they are and the edge is already in the graph, increment the edge weight by one
    # If they are but the edge is not already in the graph, add the edge with a weight of one
    for article,neighbor_list in neighbors.items():
        for neighbor in neighbor_list:
            if neighbor in neighbors[seed_page] + [seed_page]:
                if g.has_edge(article,neighbor):
                    g[article][neighbor]['weight'] += 1
                else:
                    g.add_edge(article,neighbor,weight=1)
    
    # Return the weighted graph
    return g

Run this on an example article and save the resulting graph object to disk.

This step could take more than a minute depending on the number of links and size of the neighboring pages.

# Create the hyperlink network
hyperlink_g = get_hyperlink_alters(page_title)

# Save the graph to disk to visualize in Gephi
nx.write_graphml(hyperlink_g,'hyperlink_{0}.graphml'.format(page_title.replace(' ','_')))
hg_nodes = hyperlink_g.number_of_nodes()
hg_edges = hyperlink_g.number_of_edges()

print("There are {0} nodes and {1} edges in the hyperlink network.".format(hg_nodes,hg_edges))
There are 498 nodes and 4985 edges in the hyperlink network.
hg_density = nx.density(hyperlink_g)
print('{0:.2%} of the possible edges actually exist.'.format(hg_density))
2.01% of the possible edges actually exist.
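
For a directed graph, density is the number of observed edges divided by the n*(n-1) possible directed edges, so you can verify the figure above directly from the node and edge counts:

# Directed density = edges / (nodes * (nodes - 1)); should match nx.density above (~0.0201)
print(hg_edges / (hg_nodes * (hg_nodes - 1)))
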
Define a function to compute reciprocity: the fraction of directed edges (A→B) where the reverse edge (B→A) also exists in the network.

def reciprocity(g):
    reciprocated_edges = []
    
    for (i,j) in g.edges():
        if g.has_edge(j,i):
            reciprocated_edges.append((i,j))
    
    return len(reciprocated_edges)/float(g.number_of_edges())

hg_reciprocity = reciprocity(hyperlink_g)

print('{0:.2%} of the edges in the hyperlink network are reciprocated.'.format(hg_reciprocity))
34.16% of the edges in the hyperlink network are reciprocated.
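
If you're running a newer networkx release (2.0 or later), there's a built-in function that uses the same definition; treat this as an optional cross-check rather than part of the lab:

# Fraction of edges that are reciprocated; available in networkx >= 2.0
nx.overall_reciprocity(hyperlink_g)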

Play the Wikipedia Game!

Using only the hyperlinks on the article, try to get from the first article to the second article.

page1,page2 = np.random.choice(list(hyperlink_g.nodes()),2)
print("Try to navigate from \"{0}\" to \"{1}\" using only hyperlinks.\n".format(page1,page2))
print("Start at: https://en.wikipedia.org/wiki/{0}".format(page1.replace(' ','_')))
Try to navigate from "Barack Obama election victory speech, 2008" to "United States Congress" using only hyperlinks.

Start at: https://en.wikipedia.org/wiki/Barack_Obama_election_victory_speech,_2008

No cheating!

After you've played the game a few times, see what an optimal shortest path is. You may get an error indicating there is no shortest path, in which case, try a new pair of nodes.

nx.shortest_path(hyperlink_g,page1,page2)
['Barack Obama election victory speech, 2008',
 'Barack Obama',
 'United States Congress']

The shortest path is the path connecting two nodes in the fewest steps, and the shortest path length is the number of steps along it. This is related to the "small world" effect where everyone in the world is just a few handshakes from each other. It's rare to find complex networks where the longest shortest path is above 5. Nodes that are this far from each other are likely about very unrelated topics.

If there are no paths greater than 5, lower the path_length_threshold from 5 to 4.

The long_path_lengths dictionary below is populated by computing all the shortest path lengths between nodes in the network and only keeping the node pairs that are more than path_length_threshold steps from each other. In a directed graph like our hyperlink network, it's important to follow the direction of the arrows: if page A links to page B but page B doesn't link to page A, then we can't make a shortest path from B to A; we have to find another path.

path_length_threshold = 4
long_path_lengths = {}

for k,d in nx.all_pairs_shortest_path_length(hyperlink_g).items():
    long_paths = [v for v,l in d.items() if l > path_length_threshold]
    if len(long_paths) > 0:
        long_path_lengths[k] = long_paths
        
        
long_path_lengths.keys()
dict_keys(['Indoor tanning', 'Metropolitan Transportation Authority', 'National Association for Business Economics'])

The shortest path between the articles can be identified using the shortest_path function and supplying the graph and the names of two nodes.

# Randomly choose two articles in the list of long shortest paths
page1,page2 = np.random.choice(list(long_path_lengths.keys()),2)
print("The two pages randomly selected are: \"{0}\" and \"{1}\"".format(page1,page2))

# Display the path between these articles
nx.shortest_path(hyperlink_g,page1,page2)
The two pages randomly selected are: "Indoor tanning" and "Metropolitan Transportation Authority"
['Indoor tanning',
 'Parliament of the United Kingdom',
 'Tony Blair',
 'War in Afghanistan (2001–14)',
 'Barack Obama',
 'Metropolitan Transportation Authority']

Test out different combinations of articles from long_path_lengths to find the articles that are farthest apart by entering different article names for page1 and page2 (or use the sketch after this example to search for the farthest pair automatically).

page1 = 'National Association for Business Economics'
page2 = 'NATO'
nx.shortest_path(hyperlink_g,page1,page2)
['National Association for Business Economics',
 'Congressional Budget Office',
 'United States Congress',
 'Democratic Party (United States)',
 'NATO']
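
The sketch below searches for the farthest pair automatically by recomputing all of the pairwise distances and keeping the largest finite one (this repeats the expensive all-pairs computation from above):

# Find the pair of pages with the largest finite shortest-path distance
farthest_pair, max_distance = None, 0
for source,distances in nx.all_pairs_shortest_path_length(hyperlink_g).items():
    for target,distance in distances.items():
        if distance > max_distance:
            farthest_pair, max_distance = (source,target), distance

print(farthest_pair, max_distance)
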
# networkx's degree centralities are raw degree divided by (n - 1), so multiply back to recover integer degree counts
hg_in_degree_d = {node:int(centrality*(len(hyperlink_g) - 1)) for node,centrality in nx.in_degree_centrality(hyperlink_g).items()}
hg_out_degree_d = {node:int(centrality*(len(hyperlink_g) - 1)) for node,centrality in nx.out_degree_centrality(hyperlink_g).items()}
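
The cell above recovers raw degree counts by multiplying networkx's normalized centrality scores back by n - 1. An equivalent, more direct sketch (wrapping the result in dict() keeps it working across networkx versions):

# Raw in- and out-degree counts, straight from the graph object
hg_in_degree_d = dict(hyperlink_g.in_degree())
hg_out_degree_d = dict(hyperlink_g.out_degree())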

Look at the nodes with the highest in-degree: other pages in the network point to this page.

degree_df = pd.DataFrame({'In':hg_in_degree_d,'Out':hg_out_degree_d})
degree_df['In'].sort_values(ascending=False).head(10)
Barack Obama                              245
President of the United States            138
The New York Times                        111
George W. Bush                            101
United States Senate                       99
Democratic Party (United States)           94
Republican Party (United States)           83
Bill Clinton                               79
United States Congress                     78
United States House of Representatives     75
Name: In, dtype: int64

Look at the nodes with the highest-out-degree: these pages point to many other pages.

degree_df['Out'].sort_values(ascending=False).head(10)
Barack Obama                                        497
Joe Biden                                            64
Hillary Clinton                                      57
George W. Bush                                       52
Democratic Party (United States)                     45
United States presidential election, 2008            44
First inauguration of Barack Obama                   43
Bill Clinton                                         42
Barack Obama presidential primary campaign, 2008     41
John McCain                                          41
Name: Out, dtype: int64

Look at the nodes that have no links out.

degree_df.query('Out == 0')['Out']
Associate attorney                                   0
Calvert School                                       0
DMOZ                                                 0
Default (finance)                                    0
Drilling rig                                         0
Foreign student                                      0
Glamour (magazine)                                   0
Harris Interactive                                   0
Hopkins & Sutter                                     0
Inaugural address                                    0
Joint Political Military Group                       0
Listen                                               0
Marijuana                                            0
Money (magazine)                                     0
Office of the Vice President of the United States    0
Ovarian cancer                                       0
President of Cuba                                    0
Prisoner exchange                                    0
Resurrection of Jesus                                0
Tax bracket                                          0
Tax incentive                                        0
University of Nairobi                                0
University-preparatory school                        0
Uterine cancer                                       0
Name: Out, dtype: int64

Look at nodes that have a single link in. These are also known as (in-)pendants. If there are none, it should appear as an empty series.

degree_df.query('In == 1')['In']
137th Street – City College (IRT Broadway – Seventh Avenue Line)    1
2005 American League Championship Series                            1
2009 Major League Baseball All-Star Game                            1
2009 Nobel Peace Prize                                              1
Alice Palmer (politician)                                           1
Automotive industry crisis of 2008–10                               1
Bernie Mac                                                          1
Besuki Public School                                                1
Business International Corporation                                  1
Calvert School                                                      1
Car Allowance Rebate System                                         1
Chlorine gas                                                        1
David D. McKiernan                                                  1
Death of Nelson Mandela                                             1
Debt ceiling                                                        1
Deceptive Practices and Voter Intimidation Prevention Act           1
Deepwater drilling                                                  1
Democratic Party presidential primaries, 2012                       1
Energy policy of the United States                                  1
Foreign student                                                     1
France 24                                                           1
Ghouta chemical attack                                              1
Glamour (magazine)                                                  1
Hiroshima Peace Memorial Museum                                     1
Honest Leadership and Open Government Act                           1
Indiana Governor                                                    1
Indonesian Language                                                 1
Innovation economics                                                1
Iraq War De-Escalation Act of 2007                                  1
Iraqi insurgency (2011–13)                                          1
J-1 visa                                                            1
JAMA (journal)                                                      1
Joint Political Military Group                                      1
Kapiolani Medical Center for Women and Children                     1
LGBT American                                                       1
Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act       1
National Association of Black Journalists                           1
Negotiations leading to the Joint Comprehensive Plan of Action      1
Nunn–Lugar Cooperative Threat Reduction                             1
Nyang’oma Kogelo                                                    1
Ovarian cancer                                                      1
Paris Agreement                                                     1
Petroleum exploration in the Arctic                                 1
Prisoner exchange                                                   1
Project Vote                                                        1
Public-Private Investment Program for Legacy Assets                 1
Reactions to the death of Osama bin Laden                           1
Somerville, Massachusetts                                           1
Sponsor (legislative)                                               1
St. John's Episcopal Church, Lafayette Square                       1
Steeler Nation                                                      1
United States Senate Committee on Veterans' Affairs                 1
United States presidential visits to Sub-Saharan Africa             1
University of Nairobi                                               1
Virginia Governor                                                   1
Voter registration campaign                                         1
Westminster Hall                                                    1
Name: In, dtype: int64

Look at the nodes with a single link out. These are also known as (out-)pendants. If there are none, it should appear as an empty series.

degree_df.query('Out == 1')['Out']
Anthropology                                                                                       1
Audiobook                                                                                          1
Cairo University                                                                                   1
Christianity Today                                                                                 1
Congressional Budget Office                                                                        1
Congressional Quarterly                                                                            1
Constitutional law                                                                                 1
Debt ceiling                                                                                       1
Disinvestment from Iran                                                                            1
Earmark (politics)                                                                                 1
Education in Africa                                                                                1
Embryonic stem cell                                                                                1
Fiat                                                                                               1
Fisher House Foundation                                                                            1
Hartford Courant                                                                                   1
Hate crime laws in the United States                                                               1
Hispanic                                                                                           1
Indoor tanning                                                                                     1
Innovation economics                                                                               1
International Herald Tribune                                                                       1
JAMA (journal)                                                                                     1
Magna cum laude                                                                                    1
Medicare Advantage                                                                                 1
Metropolitan Transportation Authority                                                              1
National Association for Business Economics                                                        1
Norwegian Nobel Committee                                                                          1
Of counsel                                                                                         1
Paris Agreement                                                                                    1
Project Vote                                                                                       1
Public-Private Investment Program for Legacy Assets                                                1
Resignation from the United States Senate                                                          1
Senior Advisor to the President                                                                    1
Statute of limitations                                                                             1
Steeler Nation                                                                                     1
Stimulus (economics)                                                                               1
Tax credit                                                                                         1
Trickle-down economics                                                                             1
United States Senate Committee on Environment and Public Works                                     1
United States Senate Committee on Veterans' Affairs                                                1
United States Senate Foreign Relations Subcommittee on Europe and Regional Security Cooperation    1
Voter registration campaign                                                                        1
Wailuku, Hawaii                                                                                    1
Name: Out, dtype: int64

Given a page, what are the neighbors that link into it? To inspect a specific page, assign its title to the page1 variable in place of np.random.choice(degree_df.index).

page1 = np.random.choice(degree_df.index)

in_connections = hyperlink_g.predecessors(page1)
print("The links into node \"{0}\" are:\n{1}".format(page1,in_connections))
The links into node "Hopkins & Sutter" are:
['Woods Fund of Chicago', 'Barack Obama']
out_connections = hyperlink_g.successors(page1)
print("The links out from node \"{0}\" are:\n{1}".format(page1,out_connections))
The links out from node "Hopkins & Sutter" are:
[]

Plot the in- and out-degree distributions to see how connectivity is spread across the network.

# Count how many nodes have each in-degree and out-degree value
in_degree_dist_df = degree_df['In'].value_counts().reset_index()
out_degree_dist_df = degree_df['Out'].value_counts().reset_index()

# Plot both distributions as scatterplots on shared axes
f,ax = plt.subplots(1,1)
in_degree_dist_df.plot.scatter(x='index',y='In',ax=ax,c='blue',label='In')
out_degree_dist_df.plot.scatter(x='index',y='Out',ax=ax,c='red',label='Out')

# Use symmetric log scales so nodes with degree 0 still appear
ax.set_xscale('symlog')
ax.set_yscale('symlog')
ax.set_xlim((0,1e3))
ax.set_ylim((0,1e3))

ax.set_xlabel('Connections')
ax.set_ylabel('Count')
<matplotlib.text.Text at 0x7f8f80024710>

Calculate communities within the network

Define a function to compute node community memberships for multiple community detection algorithms within igraph. The output is a dictionary of dictionaries where the top-level key is the name of the algorithm and its value is a second-level dictionary keyed by the page name, with values being the community membership labels. Documentation and details about these algorithms can be found under the igraph graph-class documentation.

def comparative_community_detector(igraph):
    memberships = {}
    
    # Directed memberships
    memberships['betweenness'] = igraph.community_edge_betweenness().as_clustering().membership
    memberships['infomap'] = igraph.community_infomap().membership
    memberships['spinglass'] = igraph.community_spinglass().membership
    memberships['walktrap'] = igraph.community_walktrap().as_clustering().membership
    
    # Undirected memberships
    undirected = igraph.as_undirected()
    memberships['fastgreedy'] = undirected.community_fastgreedy().as_clustering().membership
    memberships['leading_eigenvector'] = undirected.community_leading_eigenvector().membership
    memberships['multilevel'] = undirected.community_multilevel().membership
    
    labelled_memberships = {}
    for label,membership in memberships.items():
        labelled_memberships[label] = dict(zip(igraph.vs['id'],membership))
        
    return labelled_memberships

Not included in the comparative_community_detector function are two additional community detection algorithms that are too intensive or are not working properly. They're documented below if you ever care to explore them in the future.

# Uses up a ton of memory and crashes the kernel almost immediately
ig_hg_optimal_modularity = ig_hg.community_optimal_modularity().membership
ig_hg_optimal_modularity_labels = dict(zip(ig_hg.vs['id'],ig_hg_optimal_modularity))
pd.Series(ig_hg_optimal_modularity_labels).value_counts().head(10)

# Lumps everyone into a single community
ig_hg_label_propagation = ig_hg.community_label_propagation(initial=range(ig_hg.vcount()),fixed=[False]*ig_hg.vcount()).membership
ig_hg_label_propagation_labels = dict(zip(ig_hg.vs['id'],ig_hg_label_propagation))
pd.Series(ig_hg_label_propagation_labels).value_counts().head(10)

Here we need to shift from using the networkx library to using the igraph library. The former is built purely in Python, which makes it easier to use but somewhat slower, while the latter is a "wrapper" that lets us write in Python but does the calculations in much faster C code behind the scenes.

# Load the hyperlink network data from disk into a networkx graph object
nx_hg = nx.read_graphml('hyperlink_{0}.graphml'.format(page_title.replace(' ','_')))

# Load the hyperlink network data from disk into an igraph graph object
ig_hg = ig.read('hyperlink_{0}.graphml'.format(page_title.replace(' ','_')))
ig.summary(ig_hg) # Get summary statistics about the graph: D-W- indicates a directed, weighted graph
IGRAPH D-W- 498 4985 -- 
+ attr: id (v), weight (e)
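
As a quick sanity check that both libraries loaded the same graph, compare the node and edge counts:

# Both objects should report the same 498 nodes and 4,985 edges
print(nx_hg.number_of_nodes(), nx_hg.number_of_edges())
print(ig_hg.vcount(), ig_hg.ecount())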

Run the function on the igraph version of the hyperlink network.

This may take a minute or more since these are intensive calculations

# Run the community detection labelling on the igraph graph object
comparative_community_labels = comparative_community_detector(ig_hg)

# Convert the node labels into a dict-of-dicts keyed by page name and inner-dict containing community labels
comparative_community_labels_transposed = pd.DataFrame(comparative_community_labels).to_dict('index')

# Update each node in the networkx graph object to reflect the community membership labels
for _node in nx_hg.nodes():
    try:
        nx_hg.node[_node]['label'] = _node
        for (label,membership) in comparative_community_labels_transposed[_node].items():
            nx_hg.node[_node][label] = int(membership)
    except KeyError: # Concerning that some labels aren't present, but skip them for now
        print("Error in assigning \"{0}\" to a community.".format(_node))
        pass

# Write the labeled graph back to disk to visualize in Gephi
nx.write_graphml(nx_hg,'hyperlink_communities_{0}.graphml'.format(page_title.replace(' ','_')))
Error in assigning "Hopkins & Sutter" to a community.
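
As a rough comparison of the algorithms, count how many communities each one found. This sketch reuses the comparative_community_labels dictionary computed above:

# One column per algorithm, one row per page; count the distinct community labels in each column
community_labels_df = pd.DataFrame(comparative_community_labels)
print(community_labels_df.apply(lambda col: col.nunique()).sort_values())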