Wikidata is one of the newer projects in the Wikimedia family. It acts as central storage of structured data for all the Wikimedia projects. It solves two major problems that Wikimedia projects used to face:

  • Copied information (if a fact changes, it has to be manually edited on every website that duplicates it)
  • Unstructured data (Wikidata stores data in a structured form that can be queried, using SPARQL and similar query languages)
In [1]:
import pywikibot
In [2]:
wikidata = pywikibot.Site('wikidata', 'wikidata')
DataSite("wikidata", "wikidata")
In [3]:
testwikidata = pywikibot.Site('test', 'wikidata')
DataSite("test", "wikidata")

1. Items and Properties

In Wikidata, every page is either an item or a property.

Items represent all the things in human knowledge: topics, concepts, and objects. For example, color (Q1075), Albert Einstein (Q937), Earth (Q2), and cat (Q146) are all items in Wikidata.

Properties are the things that describe and define an item. Each bit of data related to an item is a type of property, and different types of items have different properties. Examples of properties for Python (Q28865) are: license (P275), bug tracking system (P1401), official website (P856), and Stack Exchange tag (P1482).

The Wikidata API lets you query data from Wikidata using SPARQL to filter on properties. So, for example, you can find all countries in the world with a population between 10 million and 300 million in just one query. With the earlier category-based interface, there would have had to be a category for exactly this information, or you would have had to parse every country's page and extract the population using natural language processing!
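The kind of query described above can be sketched as follows. The SPARQL snippet uses real Wikidata identifiers (P31 = instance of, Q6256 = country, P1082 = population), and the plain-Python loop below it shows the same range check on a small made-up sample:

```python
# Sketch of the SPARQL query described above (P31 = instance of,
# Q6256 = country, P1082 = population on Wikidata):
population_query = """
SELECT ?country ?population WHERE {
  ?country wdt:P31 wd:Q6256 ;
           wdt:P1082 ?population .
  FILTER(?population >= 10000000 && ?population <= 300000000)
}
"""

# The same range check in plain Python, over a made-up sample of
# (item ID -> population) pairs:
sample = {"Q668": 1_300_000_000, "Q55": 17_000_000, "Q39": 8_500_000}
in_range = [q for q, pop in sample.items()
            if 10_000_000 <= pop <= 300_000_000]
print(in_range)  # → ['Q55']
```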

Exercise - Contribute to wikidata

Find an item on wikidata and edit it to add some additional information. Some tips on finding an item:

  • Your favourite book or author
  • Your city or state, or a popular place near your hometown
  • A software, tool, or programming language that you like using
  • If you can type in a native language, try translating the label/title of an item

2. Examples of bots with wikidata

The wikidata game

The Wikidata Game is an example of a tool which helps users contribute to Wikidata more easily. You can check out the Wikidata Game at

The Wikidata Game uses structured queries to find items that are missing a certain piece of information (for example, human items which have no gender set) and shows the user the Wikipedia page related to each item. The user is then asked to identify the specific property value for the item (for example, male or female for the gender property).

The Wikidata Reasonator

Wikidata Reasonator is a tool which pulls data from Wikidata and joins all the property data of an item into a descriptive paragraph about the item. You can check it out at

Besides forming a descriptive paragraph about the item, it also groups similar properties, such as a "Relatives" group, an "External sources" group, etc., based on some simple conditions. It creates a timeline of the item, where possible, from the properties that have a date/time data type. It even generates a QR code for the related Wikipedia page and shows images pulled from the related commons.wikimedia.org page!

3. Fetching data from wikidata using pywikibot

The first thing we're going to do is figure out how to get data from Wikidata using pywikibot. Wikidata is not like a generic MediaWiki website, where we pull the text of a page; here the data is structured. Pywikibot provides an ItemPage class which can handle these items in a better way:

In [4]:
itempage = pywikibot.ItemPage(wikidata, "Q42")  # Q42 is Douglas Adams

In Wikidata, page.text won't work the way it does on other MediaWiki websites, where it gives the whole page content as a string. Instead, the data and properties are stored in a Python dictionary structure:

In [6]:
itempage.get()
WARNING: API error mwoauth-invalid-authorization-invalid-user: The authorization headers in your request are for a user that does not exist here
NoUsername: Failed OAuth authentication for commons:commons: The authorization headers in your request are for a user that does not exist here

(This NoUsername error comes from running on PAWS without valid OAuth credentials for Wikimedia Commons; with working credentials, itempage.get() returns the item's data as a dictionary.)

If you want to get the data using the title of a page rather than the item ID, we can get the Wikidata item associated with a Wikipedia page using:

In [ ]:
itempage == pywikibot.ItemPage.fromPage(pywikibot.Page(pywikibot.Site('en', 'wikipedia'), 'Douglas Adams'))

Let's take a closer look at the data given by an item page. It is a dictionary with the following keys:

In [ ]:
itemdata = itempage.get()

Labels, Descriptions and Aliases

Labels are the name or title of a Wikidata item.

Aliases are alternative labels for the same item in the same language. For example, "Python (Q28865)", the programming language, has the aliases "Python language", "Python programming language", "/usr/bin/python", etc.

Descriptions are short statements which help distinguish items with similar labels. Wikidata items are unique only by their item ID (Qxx), so the description helps differentiate between "Python (Q271218)" the genus of reptiles, "Python (Q28865)" the programming language, and "Python (Q15728)" the family of missiles!

As Wikidata is not tied to a specific language (the site code we use is simply "wikidata"), it has data for all languages. Hence, the same item can have a different label in English, Arabic, or French. These fields are therefore dictionaries with the language code as the key and the text in that language as the value.
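Concretely, these per-language fields look like plain Python dictionaries. A small hand-written sketch of the shape for Python (Q28865), using the aliases mentioned above (the real item has many more languages):

```python
# Sketch of the 'labels', 'descriptions' and 'aliases' fields, keyed by
# language code (hand-written sample, not a live fetch from Wikidata):
labels = {"en": "Python", "fr": "Python"}
descriptions = {"en": "general-purpose programming language"}
aliases = {"en": ["Python language", "Python programming language",
                  "/usr/bin/python"]}

print(labels["en"])      # → Python
print(aliases["en"][0])  # → Python language
```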

In [ ]:

For convenience, after itempage.get() has been called, the data is also stored on the page object itself:

In [ ]:
itemdata['labels'] == itempage.labels

Exercise - Check whether a given Item has a label in your native language

Find the language code for your native language and write a function to check if a given item has a label in that language, and what other aliases it has in that language.
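One possible approach, sketched against plain dictionaries of the same shape as itempage.labels and itempage.aliases (the function name check_language is our own, not a pywikibot API):

```python
def check_language(labels, aliases, lang):
    """Return whether the item has a label in `lang`, plus its aliases there."""
    return lang in labels, aliases.get(lang, [])

# Demo with sample data shaped like Wikidata's labels/aliases fields:
labels = {"en": "Douglas Adams", "fr": "Douglas Adams"}
aliases = {"en": ["Douglas Noel Adams"]}
print(check_language(labels, aliases, "en"))  # → (True, ['Douglas Noel Adams'])
print(check_language(labels, aliases, "ta"))  # → (False, [])
```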


Claims

Claims link other Wikidata pages to the given item using properties. The 'claims' field is stored as another dictionary, with property IDs as the keys (P1003, P1005, P1006, etc.) and a list of Claim objects as each value.

Hence, the claims hold the values of all the properties that have been set for the given item.
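The nesting can be pictured with plain Python structures (a simplified sketch: real values are pywikibot Claim objects rather than bare strings, and the second P800 ID below is a made-up placeholder):

```python
# Sketch of the 'claims' structure: each property ID maps to a *list*,
# because a property can hold several values.
claims = {
    "P31": ["Q5"],                   # instance of: human
    "P800": ["Q25169", "Q999999"],   # notable work: several values
}
for pid, values in claims.items():
    print(pid, "->", len(values), "claim(s)")
```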

In [1]:
NameError                                 Traceback (most recent call last)
<ipython-input-1-4a9f36e6e48c> in <module>()
----> 1 itemdata['claims']

NameError: name 'itemdata' is not defined
In [ ]:
# Similarly, this is available in the page object using:
itemdata['claims'] == itempage.claims

Let's take a look at the P800 (notable work) for the item Q42 (Douglas Adams):

In [ ]: itempage.claims['P800']

There are multiple claims for the property "P800", and we can ask pywikibot to resolve a claim and fetch the data of its target:

In [ ]: itempage.claims['P800'][0].getTarget()

So, we notice that the first "notable work (P800)" claim of "Douglas Adams (Q42)" points at the item Q25169. As this is another ItemPage, we can fetch the English label for this item by doing:

In [ ]:
p800_claim_target = itempage.claims['P800'][0].getTarget()
p800_claim_target.get()['labels']['en']

So, we were finally able to find one of the most notable works of Douglas Adams using the Wikidata API exposed by pywikibot. Imagine doing the same on the English Wikipedia!

Thought exercise: how would you figure out the most notable work of an author from the chunk of text given by an English Wikipedia page?

Exercise - Check whether item is in India

Given an item ID, check whether the item is in India by checking the value of the "country" (P17) property of the item. Write a function that checks this.

Hence, when the function is run on Q987 (New Delhi) it should give True, but on Q62 (San Francisco) it should give False.
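A sketch of one way to write such a function. To keep it self-contained, it works on any object exposing the same claims / getTarget() / labels interface as pywikibot's ItemPage, demonstrated here with tiny stand-in classes instead of a live site connection:

```python
def is_in_india(item):
    """Return True if any 'country' (P17) claim of the item points at India."""
    for claim in item.claims.get("P17", []):
        if claim.getTarget().labels.get("en") == "India":
            return True
    return False

# Minimal stand-ins mimicking the pywikibot interface, for demonstration:
class FakeItem:
    def __init__(self, labels, claims=None):
        self.labels = labels
        self.claims = claims or {}

class FakeClaim:
    def __init__(self, target):
        self._target = target
    def getTarget(self):
        return self._target

india = FakeItem({"en": "India"})
usa = FakeItem({"en": "United States of America"})
delhi = FakeItem({"en": "New Delhi"}, {"P17": [FakeClaim(india)]})
sf = FakeItem({"en": "San Francisco"}, {"P17": [FakeClaim(usa)]})

print(is_in_india(delhi))  # → True
print(is_in_india(sf))     # → False
```

A more robust check would compare the target's item ID (Q668) instead of its English label, which could change or be missing.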

In [9]:
itemdelhi = pywikibot.ItemPage.fromPage(pywikibot.Page(pywikibot.Site('en', 'wikipedia'), 'New Delhi'))
itemdelhi.get()
print(itemdelhi.claims['P17'][0].getTarget().get()['labels']['en'] == "India")

4. Property Pages

Sometimes, it is important to be able to fetch data about a property itself. For example, we may want to list the English label of each property alongside its value in tabular form.

To do this, we use a PropertyPage object to deal with properties:

In [ ]:
propertypage = pywikibot.PropertyPage(wikidata, 'P512')

In a PropertyPage, we can access the data the same way as in an ItemPage:

In [ ]:

5. Wikidata data types

On Wikidata, we've already seen ItemPages and PropertyPages. But sometimes a claim's value need not be another ItemPage; it can be some other data type like text, number, or datetime. Pywikibot provides a class for each of these data types to make it easier to access the value and resolve the claim.

The Data types available in wikidata can be seen at:

The Wikidata data types and the corresponding pywikibot classes are:

  • Item - pywikibot.ItemPage - Link to other items of the project.
  • Property - pywikibot.PropertyPage - Link to properties of the project.
  • Globe Coordinate - pywikibot.Coordinate - Literal data for a geographical position, given as a latitude-longitude pair in DMS or decimal degrees.
  • Time - pywikibot.WbTime - Literal data field for a point in time.
  • Quantity - pywikibot.WbQuantity - Literal data field for a quantity that relates to some kind of well-defined unit.
  • Monolingual Text - pywikibot.WbMonolingualText - Literal data field for a string that is not translated into other languages.

Data types mapping to str

Some Wikidata types are displayed specially, for example as a link, but they all map to the Python str type. They are:

  • String - str - Literal data field for a string of glyphs. Generally do not depend on language of reader.
  • URL - str - Literal data field for a URL.
  • External Identifier - str - Literal data field for an external identifier. External identifiers may automatically be linked to an authoritative resource for display.
  • Mathematical formula - str - Literal data field for mathematical expressions, formula, equations and such, expressed in a variant of LaTeX.
In [ ]:
# Item
item = pywikibot.ItemPage(wikidata, "Q42").get()['claims']['P31'][0].getTarget()
print("Type:", type(item))
print("Instance of Douglas Adams:", item, '(', item.get()['labels']['en'], ')')
In [ ]:
# Property
_property = pywikibot.PropertyPage(wikidata, "Property:P31")
print("Type:", type(_property))
print("Property 'instance of':", _property, '(', _property.labels['en'], ')')
In [ ]:
# Global Coordinate
coord = pywikibot.ItemPage(wikidata, "Q668").get()['claims']['P625'][0].getTarget()
print("Type:", type(coord))
print("Coordinate location of India:", coord)
In [ ]:
# Time
_time = pywikibot.ItemPage(wikidata, "Q28865").get()['claims']['P571'][0].getTarget()
print("Type:", type(_time))
print("Inception of Python (programming language):", _time)
In [ ]:
# Quantity
qty = pywikibot.ItemPage(wikidata, "Q668").get()['claims']['P1082'][0].getTarget()
print("Type:", type(qty))
print("Population in India:", qty)
In [ ]:
# Monolingual text
monolingual_text = pywikibot.ItemPage(wikidata, "Q42").get()['claims']['P1477'][0].getTarget()
print("Type:", type(monolingual_text))
print("Birth name of Douglas Adams:", monolingual_text)
In [ ]:
# String
_string = pywikibot.ItemPage(wikidata, "Q28865").get()['claims']['P348'][0].getTarget()
print("Type:", type(_string))
print("Version of Python:", _string)
In [ ]:
# Mathematical Formula
formula = pywikibot.ItemPage(wikidata, "Q11518").get()['claims']['P2534'][0].getTarget()
print("Type:", type(formula))
print("Formula of Pythagorean theorem:", formula)

Exercise - Find URL and External identifier

Using the API, find the type and value of the Official website (P856) and the Freebase identifier (P646) of Python (Q28865).

6. Adding more meaning to Properties

Frequently, a property value may require additional data. For example, consider the property educated at (P69) in the data fetched earlier for Douglas Adams (Q42). On Wikidata it can be seen that "St John's College" is listed as one of the schools he was educated at, with a "start time" of 1971 and an "end time" of 1974. It also says his "academic major" was English literature and his "academic degree" was Bachelor of Arts.


None of this information could be stored if Wikidata were restricted to a (property, value) storage structure. Hence, Wikidata also allows qualifiers. Qualifiers expand on the data provided by a (property, value) pair by giving it context, and each qualifier is itself a (property, value) pair!

In the above example, the properties which are being used as qualifiers are:

In [ ]: itempage.claims['P69']
In [ ]:
itempage.claims['P69'][0].getTarget().get()
itempage.claims['P69'][0].getTarget().labels['en']
In [ ]: itempage.claims['P69'][0].qualifiers

The qualifiers are again claims, as they are similar to the (property, value) pair for item pages. Let us see what the value of the qualifier is by resolving the claim:

In [ ]:
# Fetch the label of the P512 (academic degree) qualifier
claim = itempage.claims['P69'][0]
claim.qualifiers['P512'][0].getTarget().get()['labels']['en']

Some qualifiers may have a value which is not another item, for example the "start time" qualifier. In such cases, we need to check the type of the value:

In [ ]:
claim = itempage.claims['P69'][0]
type(claim.qualifiers['P580'][0].getTarget())  # P580 is "start time"

Here, WbTime is a pywikibot class which handles the Wikibase time format. Wikibase is the underlying technology that powers the structured editing of Wikidata.
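Underneath, Wikibase serializes a point in time as a string like '+1971-00-00T00:00:00Z' together with a precision field, and WbTime wraps that. As a sketch (assuming only the '+YYYY-…' layout shown here), the year can be pulled out with plain Python:

```python
def wikibase_year(time_string):
    """Extract the year from a Wikibase time string like '+1971-00-00T00:00:00Z'."""
    sign = -1 if time_string.startswith("-") else 1
    return sign * int(time_string.lstrip("+-").split("-")[0])

print(wikibase_year("+1971-00-00T00:00:00Z"))  # → 1971
print(wikibase_year("-0500-00-00T00:00:00Z"))  # → -500
```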

Other functions to inspect and modify qualifiers are:

  • claim.removeQualifier()
  • claim.addQualifier()
  • claim.has_qualifier()

Exercise - Find time studied at school

In the case of Douglas Adams, as we saw, there are two schools mentioned. Using the start time and end time, find the number of years that he studied at each school and print it out in the format:

<school name>: <start year> to <end year> => <n> years
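The output can be sketched with plain tuples standing in for the WbTime values that the P580 (start time) and P582 (end time) qualifiers would give you (the second school's dates below are illustrative, not fetched from Wikidata):

```python
# (school, start year, end year) tuples standing in for resolved
# qualifier values:
schools = [
    ("St John's College", 1971, 1974),
    ("Brentwood School", 1959, 1970),
]
for name, start, end in schools:
    print("%s: %d to %d => %d years" % (name, start, end, end - start))
```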

Reference (Sources)

Besides qualifiers, we also want to record the source of the data. The references (sources) field supports adding, removing, and editing the sources of a claim:

In [ ]: itempage.claims['P69'][0].getSources()

Again, a source is a list of (property, value) pairs, where the property describes what type of source it is. A source can have additional properties like "original language of work", "publisher", "author", "title", "retrieved", etc. if necessary.

Let us take a look at a source here:

In [ ]:
source =['P69'][0].getSources()[0]
In [ ]:
# Get the value of the first tuple in the source:
In [ ]:

Exercise - Check number of values that have a source as English Wikipedia

A large amount of the data on Wikidata was pulled from the English Wikipedia. Go through all (property, value) pairs of an item and check how many of them have the English Wikipedia (Q328) as a source.
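A sketch of the counting logic. It relies only on the getSources() interface (a list of property-ID to source-claims mappings, as pywikibot returns) and P143, the real "imported from Wikimedia project" property; the stand-in classes and the non-matching ID in the demo are made up:

```python
def count_enwiki_sourced(claims, enwiki="Q328"):
    """Count claims that cite English Wikipedia (Q328) via P143 ('imported from')."""
    count = 0
    for claim_list in claims.values():
        for claim in claim_list:
            for source in claim.getSources():
                if any(s.getTarget() == enwiki for s in source.get("P143", [])):
                    count += 1
                    break  # count each claim at most once
    return count

# Stand-ins mimicking the pywikibot interface, for demonstration:
class SourceClaim:
    def __init__(self, target):
        self._target = target
    def getTarget(self):
        return self._target

class FakeClaim:
    def __init__(self, sources):
        self._sources = sources
    def getSources(self):
        return self._sources

claims = {
    "P69": [FakeClaim([{"P143": [SourceClaim("Q328")]}]),  # from enwiki
            FakeClaim([])],                                # no source
    "P31": [FakeClaim([{"P143": [SourceClaim("Q999")]}])], # other wiki (made-up ID)
}
print(count_enwiki_sourced(claims))  # → 1
```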

7. Search wikidata using python

The search on Wikidata can be triggered using the Python API too. To do this, we use pywikibot's api.Request class, which lets us query the Wikidata API directly and fetch the results:

In [ ]:
from pywikibot.data import api
request = api.Request(site=wikidata,
                      parameters={"action": "wbsearchentities",
                                  "format": "json",
                                  "type": "item",
                                  "language": "en",
                                  "search": "India"})
india_search = request.submit()


As you can see, this returns the search results directly as a dictionary, which needs to be parsed to be more useful. By modifying the parameters, we can also search for items in other languages and of other types.
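Parsing that dictionary is plain Python. A sketch over a hand-written sample shaped like a wbsearchentities response (Q668 really is India; the second hit is a made-up placeholder):

```python
india_search = {
    "search": [
        {"id": "Q668", "label": "India",
         "description": "country in South Asia"},
        {"id": "Q999999", "label": "India",
         "description": "made-up second hit"},
    ],
    "success": 1,
}

# Pull out just the item IDs; each could then be turned into a
# pywikibot.ItemPage for further queries.
item_ids = [hit["id"] for hit in india_search["search"]]
print(item_ids)  # → ['Q668', 'Q999999']
```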

Exercise: Create an ItemPage for every search result

The search result given by the API is a Python dictionary, but it has a lot of data which may not be very useful to us.

  1. Loop over every search hit, create an ItemPage object for it, and store these in a list.
  2. Finally, loop over the ItemPage list and print the English label for each item using the ItemPage class.

8. Further Reading

To read more ways of using pywikibot to access wikidata, go to

SPARQL queries are SQL-like queries that can be run on Wikidata to fetch data from it. To try out SPARQL queries and visualize the data with nice plots, you can use the Wikidata Query Service. It has a lot of example SPARQL queries which can be useful for learning SPARQL.