Lab 3 - Collaboration Networks

Professor Brian Keegan
Department of Information Science, CU Boulder
This notebook is copyrighted and made available under the Apache License v2.0.

This is the third of five lab notebooks that will explore how to do some introductory data extraction and analysis from Wikipedia data. This lab extends the methods from the prior lab on analyzing a single article's revision history and uses network science methods to analyze the networks of coauthorship and hyperlinks. You do not need to be fluent in MySQL or network analysis to complete the lab, but there are many options for extending the analyses we do here by using more advanced queries and scripting methods.

Acknowledgements
I'd like to thank the Wikimedia Foundation for the PAWS system and related Wikitech infrastructure that this workbook runs within. Yuvi Panda, Aaron Halfaker, Jonathan Morgan, and Dario Taraborelli have all provided crucial support and feedback.

Confirm that basic Python commands work

a = 3
b = 4
c = 5
(c-a)**b
16

Import modules and setup environment

Load up all the libraries we'll need to connect to the database, retrieve information for analysis, and visualize results.

# Makes the plots appear within the notebook
%matplotlib inline

# Two fundamental packages for doing data manipulation
import numpy as np                   # http://www.numpy.org/
import pandas as pd                  # http://pandas.pydata.org/

# Two related packages for plotting data
import matplotlib.pyplot as plt      # http://matplotlib.org/
import seaborn as sb                 # https://stanford.edu/~mwaskom/software/seaborn/

# Package for requesting data via the web and parsing resulting JSON
import requests
import json
from bs4 import BeautifulSoup

# Two packages for accessing the MySQL server
import pymysql                       # http://pymysql.readthedocs.io/en/latest/
import os                            # https://docs.python.org/3.4/library/os.html

# Packages for analyzing complex networks
import networkx as nx                # https://networkx.github.io/
import igraph as ig

# Set up the code environment to use plots with a white background and DataFrames that show more columns and rows
sb.set_style('whitegrid')
pd.options.display.max_columns = 100
pd.options.display.max_rows = 110

Define the name of the article you want to use for the rest of the lab.

page_title = 'John McCain'

Define helper functions

Start off by using two functions from Lab 2.

Retrieve the content of the page via API

def get_page_outlinks(page_title,redirects=1):
    # The API accepts page titles containing spaces, so there's no need to convert them to underscores
    #page_title = page_title.replace(' ','_')
    
    bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:']
    
    # Get the response from the API for a query
    # After passing a page title, the API returns the HTML markup of the current article version within a JSON payload
    req = requests.get('https://en.wikipedia.org/w/api.php?action=parse&format=json&page={0}&redirects={1}&prop=text&disableeditsection=1&disabletoc=1'.format(page_title,redirects))
    
    # Read the response into JSON to parse and extract the HTML
    json_string = json.loads(req.text)
    
    # Initialize an empty list to store the links
    outlinks_list = [] 
    
    if 'parse' in json_string.keys():
        page_html = json_string['parse']['text']['*']

        # Parse the HTML into Beautiful Soup
        soup = BeautifulSoup(page_html,'lxml')

        # Delete tags associated with templates
        for tag in soup.find_all('tr'):
            tag.replace_with('')

        # For each paragraph tag, extract the titles within the links
        for para in soup.find_all('p'):
            for link in para.find_all('a'):
                if link.has_attr('title'):
                    title = link['title']
                    # Ignore links that aren't interesting
                    if all(bad not in title for bad in bad_titles):
                        outlinks_list.append(title)

        # For each unordered list, extract the titles within the child links
        for unordered_list in soup.find_all('ul'):
            for item in unordered_list.find_all('li'):
                for link in item.find_all('a'):
                    if link.has_attr('title'):
                        title = link['title']
                        # Ignore links that aren't interesting
                        if all(bad not in title for bad in bad_titles):
                            outlinks_list.append(title)

    return outlinks_list
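
A quick sketch to confirm the function behaves as expected: call it on the page defined above and inspect the first few outlinks it returns (the exact output will vary as the live article changes).

# Retrieve the outlinks for the page defined above and inspect the first few
outlinks = get_page_outlinks(page_title)
print(len(outlinks))
outlinks[:5]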

Calculate communities within the network

def comparative_community_detector(igraph):
    memberships = {}
    
    # Directed memberships
    memberships['betweenness'] = igraph.community_edge_betweenness().as_clustering().membership
    memberships['infomap'] = igraph.community_infomap().membership
    memberships['spinglass'] = igraph.community_spinglass().membership
    memberships['walktrap'] = igraph.community_walktrap().as_clustering().membership
    
    # Undirected memberships
    undirected = igraph.as_undirected()
    memberships['fastgreedy'] = undirected.community_fastgreedy().as_clustering().membership
    memberships['leading_eigenvector'] = undirected.community_leading_eigenvector().membership
    memberships['multilevel'] = undirected.community_multilevel().membership
    
    labelled_memberships = {}
    for label,membership in memberships.items():
        labelled_memberships[label] = dict(zip(igraph.vs['id'],membership))
        
    return labelled_memberships

Data retrieval

Retrieve the host name, username, and (encrypted) password associated with your account so you can login to the database.

host, user, password = os.environ['MYSQL_HOST'], os.environ['MYSQL_USERNAME'], os.environ['MYSQL_PASSWORD']

Connect to the system using the credentials from above, then select the database for the English Wikipedia.

conn = pymysql.connect(host=host,user=user,password=password,database='enwiki_p',connect_timeout=3600)
conn.cursor().execute('use enwiki_p');
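
As a quick sanity check that the connection and database selection worked, you could pull back a handful of rows from the same page table the queries below rely on (a sketch; the exact rows returned don't matter).

# Retrieve a few article-namespace pages to confirm the connection works
pd.read_sql_query('SELECT page_id, page_title FROM page WHERE page_namespace = 0 LIMIT 5', conn)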

Define functions to retrieve data from the MySQL database

Define a function get_user_revision_counts to return the number of edits each user made to a single article, along with the timestamps of their first and last edits.

def get_user_revision_counts(page_title,conn):
    """ Takes a page title and returns the number of revisions each user made.
      page_title = a string for the page title to get its revisions
      conn = a database connection
      
    Returns:
      A DataFrame with username, page title, edit count, and min/max timestamps
    """
    # In case you pass a page title with spaces in it, replace the spaces with underscores
    page_title = page_title.replace(' ','_').encode('utf8').decode('latin1')
    
    # The MySQL query string used to retrieve the data. By line, it
    ## selects the username, page title, count of edits, and min/max timestamps
    ## from the "revision" table
    ## joined to the "page" table
    ## using the page_id and rev_page columns as keys
    ## limits the results to entries matching the page title in the article namespace (0)
    ## groups the results by username
    ## and keeps only users who made more than one edit
    s = """
            SELECT 
                rev_user_text as user,
                page.page_title as page,
                COUNT(rev_user_text) as edits,
                MIN(rev_timestamp) as min_timestamp,
                MAX(rev_timestamp) as max_timestamp
            FROM 
                revision 
            JOIN 
                page ON page.page_id = revision.rev_page
            WHERE 
                page.page_title = "{0}" 
                AND page_namespace = 0
            GROUP BY
                rev_user_text
            HAVING
                edits > 1
        """.format(page_title)

    # Use the connection to run the query and return the results as a DataFrame
    _df = pd.read_sql_query(s,conn)
    
    # Some of the results have a "bytestring" format
    byte_columns = ['user','page','min_timestamp','max_timestamp']
    
    # For each column, convert it from bytestring to a utf8 string
    for col in byte_columns:
        _df[col] = _df[col].str.decode('utf8')
        
    # Several of the columns are timestamps. Convert to datetimes
    try:
        _df['min_timestamp'] = _df['min_timestamp'].apply(lambda x:pd.datetime.strptime(x,'%Y%m%d%H%M%S'))
        _df['max_timestamp'] = _df['max_timestamp'].apply(lambda x:pd.datetime.strptime(x,'%Y%m%d%H%M%S'))
        _df['min_timestamp'] = _df['min_timestamp'].apply(lambda x:round((x-pd.Timestamp('2001-01-01'))/np.timedelta64(1,'D'),0))
        _df['max_timestamp'] = _df['max_timestamp'].apply(lambda x:round((x-pd.Timestamp('2001-01-01'))/np.timedelta64(1,'D'),0))
    except:
        pass
    
    # Return the data, with a clean index
    return _df

# Also define a cute little function that will return the sizes of the network
def bipartite_network_size(edgelist_df,user_col='user',page_col='page'):
    # Compute summary statistics from the edgelist DataFrame that was passed in
    users = len(edgelist_df[user_col].unique())
    pages = len(edgelist_df[page_col].unique())
    edges = len(edgelist_df)
    return users, pages, edges

Check the results to see how this function works.

# Awaken the data connection
conn.ping()
conn.cursor().execute('use enwiki_p')

# Get the revisions
single_page_user_revision_counts = get_user_revision_counts(page_title,conn)

# Look at the first few rows
single_page_user_revision_counts.head()
   user           page         edits  min_timestamp  max_timestamp
0  07longc        John_McCain  3      2220.0         2220.0
1  0x6D667061     John_McCain  2      1408.0         1417.0
2  12.147.213.28  John_McCain  3      2165.0         2308.0
3  12.147.59.132  John_McCain  3      2433.0         2433.0
4  12.150.68.150  John_McCain  3      1985.0         1985.0

Write a function get_neighbors_all_revisions to take a single page, get all its outlinks, and for each of these outlinks get the user revision counts. This returns a large DataFrame containing every user who ever edited any of these articles and the number of edits they made to each.

def get_neighbors_all_revisions(page_title,conn):
    """ Takes a page title and returns revisions for the page and its neighbors.
      page_title = a string for the page title to get its revisions
      
    Returns:
      A pandas DataFrame containing all the page revisions.
    """
    
    alters = get_page_outlinks(page_title) + [page_title]
    
    df_list = []
    
    for alter in alters:
        _df = get_user_revision_counts(alter,conn)
        df_list.append(_df)
        
    df = pd.concat(df_list)
    return df.reset_index(drop=True)

Now run the get_neighbors_all_revisions function to get not only all the page outlinks, but also the revision counts for every page in the dataset.

This step may take more than a minute, depending on the number of articles and number of revisions made to them.

# Awaken the data connection
conn.ping()
conn.cursor().execute('use enwiki_p')

# Get the data
all_rev_count_df = get_neighbors_all_revisions(page_title,conn)

# Create a list of the unique users and pages for reference later
collab_interactions = len(all_rev_count_df)
collab_users = all_rev_count_df['user'].unique()
collab_pages = all_rev_count_df['page'].unique()
print("There are {0:,} unique user-page interactions between {1:,} unique users and {2:,} pages.".format(collab_interactions,len(collab_users),len(collab_pages)))

# Show the bottom 5 rows of data.
all_rev_count_df.tail()
There are 142,247 unique user-page interactions between 75,042 unique users and 385 pages.
        user           page         edits  min_timestamp  max_timestamp
142242  Youngamerican  John_McCain  3.0    2598.0         2598.0
142243  ZimZalaBim     John_McCain  13.0   2074.0         2144.0
142244  Zorrobot       John_McCain  2.0    2759.0         2865.0
142245  Zsero          John_McCain  29.0   2619.0         2874.0
142246  Ødipus sic     John_McCain  2.0    2802.0         2806.0
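
Before converting this edgelist into a network, it can be useful to peek at the heaviest user-page pairs (a quick exploratory sketch; the exact values depend on the live data).

# Sort the user-page edgelist by the number of edits to see the heaviest contributions
all_rev_count_df.sort_values('edits',ascending=False).head()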

Convert edgelist data into a network

We can convert this edgelist into a networkx bipartite graph in several steps. First we store the unique names of all the users and pages as references for subsequent analysis. Then we use the from_pandas_dataframe function to take the "all_rev_count_df" edgelist and turn it into a directed graph (the direction of the connections matters) with the users as sources and the page titles as targets. The nodes in the resulting graph should then be labelled as being users or pages so that we can compare them more clearly in the visualization.

However, the amount of data in the network exceeds the memory available in the PAWS environment (~1 GB). The code for constructing the whole network graph is below, but not in an executable form, to prevent a kernel crash and the loss of your progress so far.

# Convert the edgelist_df into a networkx graph,
# preserving source, target, and weights
collab_g = nx.from_pandas_dataframe(all_rev_count_df,source='user',target='page',
                                    edge_attr=['edits','min_timestamp','max_timestamp'],
                                    create_using=nx.DiGraph())

# Label the nodes in the network as being users or pages using the lists from above
collab_g.add_nodes_from(collab_users,nodetype='user')
collab_g.add_nodes_from(collab_pages,nodetype='page')

# Write the graph out to file so we can visualize it in Gephi
nx.write_gexf(collab_g,'collaboration_{0}.gexf'.format(page_title.replace(' ','_')))

Try an alternate version that excludes users with 'bot' in their name and users who made only one revision. You may get a SettingWithCopyWarning that pops up as a red box, but the cell should still have worked successfully if the number of users, pages, and edges prints out at the bottom.

# Identify (and negate with the starting ~) rows where the user name contains the word "bot"
condition1 = ~all_rev_count_df['user'].str.lower().str.contains('bot').fillna(False)

# Identify rows where the user made more than one edit
condition2 = all_rev_count_df['edits'] > 1

# Subset the DataFrame to rows satisfying both of the conditions above
collab_nobot_edgelist_df = all_rev_count_df[condition1 & condition2]

# Make sure users stay users and pages stay pages in case a user has the same name as a page by prepending a character
collab_nobot_edgelist_df['sure_user'] = collab_nobot_edgelist_df['user'].fillna('0').apply(lambda x: 'u:'+x)
collab_nobot_edgelist_df['sure_page'] = collab_nobot_edgelist_df['page'].fillna('0').apply(lambda x: 'p:'+x)

# Make a unique list of users and pages again
collab_nobot_users = collab_nobot_edgelist_df['sure_user'].unique()
collab_nobot_pages = collab_nobot_edgelist_df['sure_page'].unique()

# Create the graph
collab_nobot_g = nx.DiGraph()

# Create a graph without bots
collab_nobot_g = nx.from_pandas_dataframe(collab_nobot_edgelist_df,source='sure_user',target='sure_page',
                                    edge_attr=['edits','min_timestamp','max_timestamp'],create_using=collab_nobot_g)

collab_nobot_g.add_nodes_from(collab_nobot_users,nodetype='user')
collab_nobot_g.add_nodes_from(collab_nobot_pages,nodetype='page')

# Print out summary statistics using our cute little function from above
_users, _pages, _edges = bipartite_network_size(collab_nobot_edgelist_df)
print("There are {0:,} users, {1:,} pages, and {2:,} edges in the collaboration network.".format(_users,_pages,_edges))

# Write the graph to a GraphML file so we can visualize it in Gephi
nx.write_graphml(collab_nobot_g,'collaboration_{0}.graphml'.format(page_title.replace(' ','_')))
/srv/paws/lib/python3.4/site-packages/ipykernel/__main__.py:11: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/srv/paws/lib/python3.4/site-packages/ipykernel/__main__.py:12: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
There are 74,777 users, 379 pages, and 136,971 edges in the collaboration network.
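
As a sanity check (a quick sketch; the Counter import is only needed here), confirm that the graph is two-mode and count how many nodes of each type it contains.

# Confirm the user-page graph is bipartite and count the node types
from collections import Counter
print(nx.is_bipartite(collab_nobot_g))
print(Counter(attrs.get('nodetype') for node,attrs in collab_nobot_g.nodes(data=True)))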

Project bipartite network into a page-page graph

Project the two-mode/bipartite graph (having users connected to pages) into a weighted one-mode graph where pages are connected to pages if they share users in common.

page_collaboration_g = nx.bipartite.weighted_projected_graph(collab_nobot_g.to_undirected(),collab_nobot_pages)

# Identify edges having a weight less than 2, meaning that they share only one editor in common
single_user_overlaps = [(i,j) for (i,j,d) in page_collaboration_g.edges_iter(data=True) if d['weight'] < 2]

# Remove the edges with only a single user overlap
page_collaboration_g.remove_edges_from(single_user_overlaps)

# Report the number of nodes and edges
print("There are {0} nodes and {1} edges in the projected collaboration graph.".format(page_collaboration_g.number_of_nodes(), page_collaboration_g.number_of_edges()))

# Write the user_collaboration graph to disk
# But don't actually do it yet, since this graph isn't very helpful
#nx.write_graphml(page_collaboration_g,'collaboration_pages_{0}.graphml'.format(page_title.replace(' ','_')))
There are 379 nodes and 25349 edges in the projected collaboration graph.
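
It can also be informative to see which page pairs share the most editors (a quick sketch; edge weights in the projection count shared non-bot editors).

# Sort the projected edges by weight to find the page pairs sharing the most editors
top_overlaps = sorted(page_collaboration_g.edges(data=True), key=lambda e: e[2]['weight'], reverse=True)
top_overlaps[:5]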

You could similarly project the graph onto users, but this requires much more memory than is available within PAWS. You can shrink the graph by removing pendants and other low-degree nodes, but the result is still a super-hairball-y graph.

This code is not executable because it will almost certainly cause your notebook to run out of memory and crash. I provide the code in case you want to run it locally on other data.

# Convert degree centralities back into raw degree counts
num_of_nodes = len(collab_nobot_g)-1
collab_nobot_dc = {u:dc*num_of_nodes for u,dc in nx.degree_centrality(collab_nobot_g).items()}

# Identify low-degree users to remove
pendant_users = [node for node,degree in collab_nobot_dc.items() if 'u:' in node and degree < 3]

# Copy the graph and drop the low-degree users
cg_no_pendants = collab_nobot_g.copy()
cg_no_pendants.remove_nodes_from(pendant_users)
cg_no_pendants_users = [n for n,d in cg_no_pendants.nodes_iter(data=True) if d['nodetype'] == 'user']

# Project the bipartite graph onto the remaining users and write it to disk
user_collaboration_g = nx.bipartite.weighted_projected_graph(cg_no_pendants,cg_no_pendants_users)
user_collaboration_g.number_of_nodes(), user_collaboration_g.number_of_edges()

nx.write_gexf(user_collaboration_g,'cwpg_users_cg_nobot_{0}.gexf'.format(page_title.replace(' ','_')))

Extract the backbone of the graph using the method from Serrano, et al. (2009). This method retains the most statistically-significant weighted edges for each node based on a cut-off value of alpha. Rather than using a global thresholding strategy (e.g., dropping all edges below a certain weight) it retains the most important edges for each node.
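
Concretely, the test in Equation 2 of the paper has a closed form: for a node of degree k_n whose edge to a given neighbor carries a fraction p_ij of the node's total edge weight, the edge is retained when (1 - p_ij)**(k_n - 1) < alpha. This is exactly the condition tested in the function below.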

# Define the backbone extraction method
def extract_backbone(g, alpha): # Given an input graph g and a cutoff alpha
    backbone_graph = nx.Graph() # Create a new empty graph
    for node in g: # Go through all the nodes in the input graph
        k_n = len(g[node]) # How many connections does the node have
        if k_n > 1: # If it's greater than 1
            sum_w = sum( g[node][neighbor]['weight'] for neighbor in g[node] ) # Compute the sum of all the node's weighted edges
            for neighbor in g[node]: # For each neighboring node
                edgeWeight = g[node][neighbor]['weight'] # Get the edge weight
                pij = float(edgeWeight)/sum_w # Compute this edge's fraction of the node's total edge weight
                if (1-pij)**(k_n-1) < alpha: # Equation 2: keep the edge if it is statistically significant at level alpha
                    node_name = node[2:].replace('_',' ') # Strip the "p:" prefix and underscores for readability
                    neighbor_name = neighbor[2:].replace('_',' ')
                    backbone_graph.add_edge(node_name, neighbor_name, weight = edgeWeight) # Add it to the backbone graph
    return backbone_graph

# Apply the function to the page_collaboration graph with a cut-off value of 0.1
pg_backbone_10_g = extract_backbone(page_collaboration_g,.10)

# Apply the function to the page_collaboration graph with a cut-off value of 0.01
pg_backbone_01_g = extract_backbone(page_collaboration_g,.01)

# Find the largest connected component (LCC) since some algorithms break with multiple components
lcc_cpgbb10_g = max(nx.connected_component_subgraphs(pg_backbone_10_g),key=len)
lcc_cpgbb01_g = max(nx.connected_component_subgraphs(pg_backbone_01_g),key=len)

# Print out the number of nodes and edges
print("There are {0} nodes and {1} edges in the LCC of the 0.10 backbone graph.".format(lcc_cpgbb10_g.number_of_nodes(), lcc_cpgbb10_g.number_of_edges()))
print("There are {0} nodes and {1} edges in the LCC of the 0.01 backbone graph.".format(lcc_cpgbb01_g.number_of_nodes(), lcc_cpgbb01_g.number_of_edges()))

# Save the LCC of the backbone graph to disk
nx.write_graphml(lcc_cpgbb10_g,'collab_page_bb_10_{0}.graphml'.format(page_title.replace(' ','_')))
nx.write_graphml(lcc_cpgbb01_g,'collab_page_bb_01_{0}.graphml'.format(page_title.replace(' ','_')))
There are 268 nodes and 3538 edges in the LCC of the 0.10 backbone graph.
There are 125 nodes and 442 edges in the LCC of the 0.01 backbone graph.

Load the data for the projected backbone (0.01 threshold) collaboration graph.

# Load the backbone collaboration network data from disk into a networkx graph object
nx_hg = nx.read_graphml('collab_page_bb_01_{0}.graphml'.format(page_title.replace(' ','_')))

# Load the backbone collaboration network data from disk into an igraph graph object
#ig_cpbb10g = ig.read('collab_page_bb_10_{0}.graphml'.format(page_title.replace(' ','_')))
ig_cpbb01g = ig.read('collab_page_bb_01_{0}.graphml'.format(page_title.replace(' ','_')))

# Get statistics about the network to confirm it loaded properly
ig.summary(ig_cpbb01g) 
IGRAPH U-W- 125 442 -- 
+ attr: id (v), weight (e)

Run the function on the igraph version of the backbone (0.01 threshold) collaboration network.

This may take a minute or more since these are intensive calculations

# Run the community detection labelling on the igraph graph object
comparative_community_labels = comparative_community_detector(ig_cpbb01g)

# Convert the node labels into a dict-of-dicts keyed by page name and inner-dict containing community labels
comparative_community_labels_transposed = pd.DataFrame(comparative_community_labels).to_dict('index')

# Update each node in the networkx graph object to reflect the community membership labels
for _node in nx_hg.nodes():
    try:
        nx_hg.node[_node]['label'] = _node
        for (label,membership) in comparative_community_labels_transposed[_node].items():
            nx_hg.node[_node][label] = int(membership)
    except KeyError: # It's concerning that some nodes have no community labels, but skip them for now
        print("Error in assigning \"{0}\" to a community.".format(_node))
        pass

# Write the labeled graph back to disk to visualize in Gephi
nx.write_graphml(nx_hg,'collab_page_bb_01_community_{0}.graphml'.format(page_title.replace(' ','_')))
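
As a quick comparison of the algorithms (a sketch reusing the labels computed above), count how many distinct communities each one detected.

# Count how many distinct communities each algorithm found
{algorithm: len(set(labels.values())) for algorithm, labels in comparative_community_labels.items()}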

Compute descriptive statistics for the collaboration network

 
cg_density = nx.bipartite.density(collab_nobot_g,collab_nobot_pages)
print('{0:.2%} of the possible edges actually exist.'.format(cg_density))
0.20% of the possible edges actually exist.
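
Beyond density, a couple of other simple descriptive statistics can be computed directly from the edgelist (a sketch using pandas rather than networkx; each row is a unique user-page pair, so group sizes count each node's neighbors).

# Average number of pages edited per (non-bot) user
print(collab_nobot_edgelist_df.groupby('sure_user').size().mean())

# Average number of (non-bot) editors per page
print(collab_nobot_edgelist_df.groupby('sure_page').size().mean())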

Identify the most well-connected nodes

Compute the directed degree centralities. The network is constructed such that users contribute to articles. It should be impossible for articles to contribute to articles, or users to contribute to users, or articles to contribute to users.

Following this enforced direction, the in-degree of an article should be non-zero and reflect the number of users who contributed to it, while the in-degree of a user should be zero. Likewise, the out-degree of a user should be non-zero and reflect the number of articles they contributed to, while the out-degree of a page should be zero.

cg_in_degree_d = {node:int(centrality*(len(collab_nobot_g) - 1)) for node,centrality in nx.in_degree_centrality(collab_nobot_g).items()}
cg_out_degree_d = {node:int(centrality*(len(collab_nobot_g) - 1)) for node,centrality in nx.out_degree_centrality(collab_nobot_g).items()}

Convert the in- and out-degree dictionaries from above into a DataFrame called "cg_degree_df" for collaboration degree centrality DataFrame. Also create new columns in cg_degree_df that label whether the row is for a page or a user to help with filtering later on.

cg_degree_df = pd.DataFrame({'In':cg_in_degree_d,'Out':cg_out_degree_d})
cg_degree_df['Page'] = pd.Series(cg_degree_df.index,index=cg_degree_df.index).str.contains('p:')
cg_degree_df['User'] = pd.Series(cg_degree_df.index,index=cg_degree_df.index).str.contains('u:')
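
As noted above, users should have an in-degree of zero and pages should have an out-degree of zero. A quick sketch to confirm this:

# The maximum in-degree among users and the maximum out-degree among pages should both be zero
print(cg_degree_df.loc[cg_degree_df['User'],'In'].max())
print(cg_degree_df.loc[cg_degree_df['Page'],'Out'].max())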

Look at the nodes with the highest in-degree: these are articles and the number of unique users who contributed to them.

cg_degree_df['In'].sort_values(ascending=False).head(10)

Look at the nodes with the highest out-degree: these are users and the number of unique articles (in the set) they contributed to.

cg_degree_df['Out'].sort_values(ascending=False).head(10)
u:Bender235          136
u:Rjwilmsi           134
u:Rich Farmbrough     99
u:Levineps            99
u:BD2412              96
u:Woohookitty         94
u:Neutrality          90
u:Hmains              85
u:J.delanoy           79
u:CommonsDelinker     78
Name: Out, dtype: int64

Are there any articles with only a single editor? Start by inspecting the first few rows of the degree DataFrame for pages.

cg_degree_df.head()
                                       In   Out
p:114th_United_States_Congress         113  0
p:1967_USS_Forrestal_fire              68   0
p:1988_Republican_National_Convention  17   0
p:2004_Republican_National_Convention  73   0
p:2008_Republican_National_Convention  167  0

Look at the articles that have only a single contributor in the set of hyperlinked articles. Some of these look like major topics, but they aren't the main articles themselves; rather, they are redirects, placeholder pages that point to another page elsewhere. For instance, an article like "September 11, 2001 attacks" exists but actually points to "September 11 attacks". The former page is an example of a redirect that has its own (typically extremely short) revision history.

cg_degree_df.query('In == 1 & Page')['In']
p:Anchor_Books                         1
p:Campaign_finance_reform              1
p:Civil_rights                         1
p:Clinton_administration               1
p:Fort_McNair                          1
p:Greenhouse_gases                     1
p:HMO                                  1
p:Millbrook_Press                      1
p:Obama_health_care_plan               1
p:Phoenix_Gazette                      1
p:Plant_(person)                       1
p:Reader's_Digest_Press                1
p:September_11,_2001_attacks           1
p:War_in_Afghanistan_(2001-present)    1
p:War_in_Afghanistan_(2001–present)    1
Name: In, dtype: int64
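
To check which of these single-contributor titles are redirects, you could ask the MediaWiki API to resolve redirects and see which titles map to a different target (a sketch; it assumes stripping the "p:" prefix and replacing underscores recovers valid titles).

# Strip the "p:" prefix and underscores from the single-contributor titles
single_editor_titles = [t[2:].replace('_',' ') for t in cg_degree_df.query('In == 1 & Page').index]

# Ask the API to resolve redirects and report which titles point somewhere else
params = {'action':'query','format':'json','redirects':1,'titles':'|'.join(single_editor_titles)}
redirect_req = requests.get('https://en.wikipedia.org/w/api.php', params=params)
redirect_json = json.loads(redirect_req.text)
{r['from']:r['to'] for r in redirect_json['query'].get('redirects',[])}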

Plot the distributions of article in-degree, editor out-degree, and edit counts (the edge weights) on log-scaled axes.

in_degree_dist_df = cg_degree_df['In'].value_counts().reset_index()
out_degree_dist_df = cg_degree_df['Out'].value_counts().reset_index()
revision_dist_df = collab_nobot_edgelist_df['edits'].value_counts().reset_index()

f,ax = plt.subplots(1,1)
in_degree_dist_df.plot.scatter(x='index',y='In',ax=ax,c='blue',label='Page')
out_degree_dist_df.plot.scatter(x='index',y='Out',ax=ax,c='red',label='Editor')
revision_dist_df.plot.scatter(x='index',y='edits',ax=ax,c='green',label='Weight')
ax.set_xlabel('Connections/Edits')
ax.set_ylabel('Count')
ax.set_xscale('symlog')
ax.set_yscale('symlog')
ax.set_xlim((0,1e3))
ax.set_ylim((0,1e5));