%pylab inline

Populating the interactive namespace from numpy and matplotlib



# Introduction

In general, a learning problem considers a set of n samples of data and then tries to predict properties of unknown data. If each sample is more than a single number and, for instance, a multi-dimensional entry (aka multivariate data), it is said to have several attributes or features.

We can separate learning problems into a few large categories:

• supervised learning, in which the data comes with additional attributes that we want to predict (see the scikit-learn supervised learning page). This problem can be either:

• classification: samples belong to two or more classes and we want to learn from already labeled data how to predict the class of unlabeled data. An example of a classification problem is handwritten digit recognition, in which the aim is to assign each input vector to one of a finite number of discrete categories. Another way to think of classification is as a discrete (as opposed to continuous) form of supervised learning, where one has a limited number of categories and, for each of the n samples provided, tries to label it with the correct category or class.
• regression: if the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem would be the prediction of the length of a salmon as a function of its age and weight.
• unsupervised learning, in which the training data consists of a set of input vectors x without any corresponding target values. The goal in such problems may be to discover groups of similar examples within the data (clustering), to determine the distribution of data within the input space (density estimation), or to project the data from a high-dimensional space down to two or three dimensions for visualization (see the scikit-learn unsupervised learning page).
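As a minimal sketch of the contrast (the estimator choices here are illustrative, not prescribed by the text): a supervised classifier learns from labeled pairs (X, y), while a clustering algorithm sees only X.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression  # supervised: needs labels
from sklearn.cluster import KMeans                   # unsupervised: ignores labels

X, y = load_iris(return_X_y=True)

clf = LogisticRegression(max_iter=1000).fit(X, y)            # learns from (X, y)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)  # learns from X alone

print(clf.predict(X[:1]))  # a predicted class label
print(km.labels_[:5])      # cluster assignments (arbitrary numbering)
```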

## Training set and testing set

Machine learning is about learning some properties of a data set and applying them to new data. This is why a common practice for evaluating an algorithm is to split the data at hand into two sets: the training set, on which we learn the data's properties, and the testing set, on which we test those properties.
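scikit-learn provides a helper for exactly this split; a minimal sketch on toy data (the arrays here are stand-ins for a real data set):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data assumed for illustration: 100 samples with 4 features each
X = np.arange(400).reshape(100, 4)
y = np.arange(100)

# Hold out 25% of the samples as the testing set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

print(X_train.shape, X_test.shape)  # (75, 4) (25, 4)
```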

from sklearn import datasets

digits = datasets.load_digits()

print(digits.DESCR)

Optical Recognition of Handwritten Digits Data Set
===================================================

Notes
-----
Data Set Characteristics:
:Number of Instances: 5620
:Number of Attributes: 64
:Attribute Information: 8x8 image of integer pixels in the range 0..16.
:Missing Attribute Values: None
:Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
:Date: July; 1998

This is a copy of the test set of the UCI ML hand-written digits datasets
http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits

The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.

Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.

For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.

References
----------
- C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
Graduate Studies in Science and Engineering, Bogazici University.
- E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
- Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
Linear dimensionalityreduction using relevance weighted LDA. School of
Electrical and Electronic Engineering Nanyang Technological University.
2005.
- Claudio Gentile. A New Approximate Maximal Margin Classification
Algorithm. NIPS. 2000.
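The 4x4 block-counting reduction described in the notes above is easy to sketch with NumPy (the random 32x32 bitmap here is a stand-in for a real NIST scan):

```python
import numpy as np

rng = np.random.RandomState(0)
bitmap = rng.randint(0, 2, size=(32, 32))  # stand-in binary scan

# Split into non-overlapping 4x4 blocks and count the "on" pixels in each,
# yielding an 8x8 matrix of integers in the range 0..16
blocks = bitmap.reshape(8, 4, 8, 4)
counts = blocks.sum(axis=(1, 3))

print(counts.shape)             # (8, 8)
print(int(counts.max()) <= 16)  # True
```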


digits.data

array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
[ 0.,  0.,  0., ..., 10.,  0.,  0.],
[ 0.,  0.,  0., ..., 16.,  9.,  0.],
...,
[ 0.,  0.,  1., ...,  6.,  0.,  0.],
[ 0.,  0.,  2., ..., 12.,  0.,  0.],
[ 0.,  0., 10., ..., 12.,  1.,  0.]])
digits.data.shape

(1797, 64)
digits.images.shape

(1797, 8, 8)
plt.imshow(digits.images[39], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()

digits.target

array([0, 1, 2, ..., 8, 9, 8])
digits.target_names

array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
iris = datasets.load_iris()

print(iris.DESCR)

Iris Plants Database
====================

Notes
-----
Data Set Characteristics:
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:

============== ==== ==== ======= ===== ====================
                Min  Max   Mean    SD   Class Correlation
============== ==== ==== ======= ===== ====================
sepal length:   4.3  7.9   5.84   0.83    0.7826
sepal width:    2.0  4.4   3.05   0.43   -0.4194
petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
============== ==== ==== ======= ===== ====================

:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
:Date: July, 1988

This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris

The famous Iris database, first used by Sir R.A Fisher

This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

References
----------
- Fisher,R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments".  IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...


iris.data

array([[5.1, 3.5, 1.4, 0.2],
[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
[4.6, 3.1, 1.5, 0.2],
[5. , 3.6, 1.4, 0.2],
[5.4, 3.9, 1.7, 0.4],
[4.6, 3.4, 1.4, 0.3],
[5. , 3.4, 1.5, 0.2],
[4.4, 2.9, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.1],
[5.4, 3.7, 1.5, 0.2],
[4.8, 3.4, 1.6, 0.2],
[4.8, 3. , 1.4, 0.1],
[4.3, 3. , 1.1, 0.1],
[5.8, 4. , 1.2, 0.2],
[5.7, 4.4, 1.5, 0.4],
[5.4, 3.9, 1.3, 0.4],
[5.1, 3.5, 1.4, 0.3],
[5.7, 3.8, 1.7, 0.3],
[5.1, 3.8, 1.5, 0.3],
[5.4, 3.4, 1.7, 0.2],
[5.1, 3.7, 1.5, 0.4],
[4.6, 3.6, 1. , 0.2],
[5.1, 3.3, 1.7, 0.5],
[4.8, 3.4, 1.9, 0.2],
[5. , 3. , 1.6, 0.2],
[5. , 3.4, 1.6, 0.4],
[5.2, 3.5, 1.5, 0.2],
[5.2, 3.4, 1.4, 0.2],
[4.7, 3.2, 1.6, 0.2],
[4.8, 3.1, 1.6, 0.2],
[5.4, 3.4, 1.5, 0.4],
[5.2, 4.1, 1.5, 0.1],
[5.5, 4.2, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.1],
[5. , 3.2, 1.2, 0.2],
[5.5, 3.5, 1.3, 0.2],
[4.9, 3.1, 1.5, 0.1],
[4.4, 3. , 1.3, 0.2],
[5.1, 3.4, 1.5, 0.2],
[5. , 3.5, 1.3, 0.3],
[4.5, 2.3, 1.3, 0.3],
[4.4, 3.2, 1.3, 0.2],
[5. , 3.5, 1.6, 0.6],
[5.1, 3.8, 1.9, 0.4],
[4.8, 3. , 1.4, 0.3],
[5.1, 3.8, 1.6, 0.2],
[4.6, 3.2, 1.4, 0.2],
[5.3, 3.7, 1.5, 0.2],
[5. , 3.3, 1.4, 0.2],
[7. , 3.2, 4.7, 1.4],
[6.4, 3.2, 4.5, 1.5],
[6.9, 3.1, 4.9, 1.5],
[5.5, 2.3, 4. , 1.3],
[6.5, 2.8, 4.6, 1.5],
[5.7, 2.8, 4.5, 1.3],
[6.3, 3.3, 4.7, 1.6],
[4.9, 2.4, 3.3, 1. ],
[6.6, 2.9, 4.6, 1.3],
[5.2, 2.7, 3.9, 1.4],
[5. , 2. , 3.5, 1. ],
[5.9, 3. , 4.2, 1.5],
[6. , 2.2, 4. , 1. ],
[6.1, 2.9, 4.7, 1.4],
[5.6, 2.9, 3.6, 1.3],
[6.7, 3.1, 4.4, 1.4],
[5.6, 3. , 4.5, 1.5],
[5.8, 2.7, 4.1, 1. ],
[6.2, 2.2, 4.5, 1.5],
[5.6, 2.5, 3.9, 1.1],
[5.9, 3.2, 4.8, 1.8],
[6.1, 2.8, 4. , 1.3],
[6.3, 2.5, 4.9, 1.5],
[6.1, 2.8, 4.7, 1.2],
[6.4, 2.9, 4.3, 1.3],
[6.6, 3. , 4.4, 1.4],
[6.8, 2.8, 4.8, 1.4],
[6.7, 3. , 5. , 1.7],
[6. , 2.9, 4.5, 1.5],
[5.7, 2.6, 3.5, 1. ],
[5.5, 2.4, 3.8, 1.1],
[5.5, 2.4, 3.7, 1. ],
[5.8, 2.7, 3.9, 1.2],
[6. , 2.7, 5.1, 1.6],
[5.4, 3. , 4.5, 1.5],
[6. , 3.4, 4.5, 1.6],
[6.7, 3.1, 4.7, 1.5],
[6.3, 2.3, 4.4, 1.3],
[5.6, 3. , 4.1, 1.3],
[5.5, 2.5, 4. , 1.3],
[5.5, 2.6, 4.4, 1.2],
[6.1, 3. , 4.6, 1.4],
[5.8, 2.6, 4. , 1.2],
[5. , 2.3, 3.3, 1. ],
[5.6, 2.7, 4.2, 1.3],
[5.7, 3. , 4.2, 1.2],
[5.7, 2.9, 4.2, 1.3],
[6.2, 2.9, 4.3, 1.3],
[5.1, 2.5, 3. , 1.1],
[5.7, 2.8, 4.1, 1.3],
[6.3, 3.3, 6. , 2.5],
[5.8, 2.7, 5.1, 1.9],
[7.1, 3. , 5.9, 2.1],
[6.3, 2.9, 5.6, 1.8],
[6.5, 3. , 5.8, 2.2],
[7.6, 3. , 6.6, 2.1],
[4.9, 2.5, 4.5, 1.7],
[7.3, 2.9, 6.3, 1.8],
[6.7, 2.5, 5.8, 1.8],
[7.2, 3.6, 6.1, 2.5],
[6.5, 3.2, 5.1, 2. ],
[6.4, 2.7, 5.3, 1.9],
[6.8, 3. , 5.5, 2.1],
[5.7, 2.5, 5. , 2. ],
[5.8, 2.8, 5.1, 2.4],
[6.4, 3.2, 5.3, 2.3],
[6.5, 3. , 5.5, 1.8],
[7.7, 3.8, 6.7, 2.2],
[7.7, 2.6, 6.9, 2.3],
[6. , 2.2, 5. , 1.5],
[6.9, 3.2, 5.7, 2.3],
[5.6, 2.8, 4.9, 2. ],
[7.7, 2.8, 6.7, 2. ],
[6.3, 2.7, 4.9, 1.8],
[6.7, 3.3, 5.7, 2.1],
[7.2, 3.2, 6. , 1.8],
[6.2, 2.8, 4.8, 1.8],
[6.1, 3. , 4.9, 1.8],
[6.4, 2.8, 5.6, 2.1],
[7.2, 3. , 5.8, 1.6],
[7.4, 2.8, 6.1, 1.9],
[7.9, 3.8, 6.4, 2. ],
[6.4, 2.8, 5.6, 2.2],
[6.3, 2.8, 5.1, 1.5],
[6.1, 2.6, 5.6, 1.4],
[7.7, 3. , 6.1, 2.3],
[6.3, 3.4, 5.6, 2.4],
[6.4, 3.1, 5.5, 1.8],
[6. , 3. , 4.8, 1.8],
[6.9, 3.1, 5.4, 2.1],
[6.7, 3.1, 5.6, 2.4],
[6.9, 3.1, 5.1, 2.3],
[5.8, 2.7, 5.1, 1.9],
[6.8, 3.2, 5.9, 2.3],
[6.7, 3.3, 5.7, 2.5],
[6.7, 3. , 5.2, 2.3],
[6.3, 2.5, 5. , 1.9],
[6.5, 3. , 5.2, 2. ],
[6.2, 3.4, 5.4, 2.3],
[5.9, 3. , 5.1, 1.8]])
iris.data.shape

(150, 4)
iris.feature_names

['sepal length (cm)',
'sepal width (cm)',
'petal length (cm)',
'petal width (cm)']
iris.target

array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
iris.target_names

array(['setosa', 'versicolor', 'virginica'], dtype='<U10')
diabetes = datasets.load_diabetes()

print(diabetes.DESCR)

Diabetes dataset
================

Notes
-----

Ten baseline variables, age, sex, body mass index, average blood
pressure, and six blood serum measurements were obtained for each of n =
442 diabetes patients, as well as the response of interest, a
quantitative measure of disease progression one year after baseline.

Data Set Characteristics:

:Number of Instances: 442

:Number of Attributes: First 10 columns are numeric predictive values

:Target: Column 11 is a quantitative measure of disease progression one year after baseline

:Attributes:
:Age:
:Sex:
:Body mass index:
:Average blood pressure:
:S1:
:S2:
:S3:
:S4:
:S5:
:S6:

Note: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times n_samples (i.e. the sum of squares of each column totals 1).

Source URL:
http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html

Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with discussion), 407-499.
(http://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)
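The scaling described in the notes (each feature column's sum of squares totals 1) can be verified directly once the data is loaded:

```python
import numpy as np
from sklearn import datasets

diabetes = datasets.load_diabetes()

# Each of the 10 feature columns was mean centered and scaled so that
# its sum of squares is 1
col_sum_sq = (diabetes.data ** 2).sum(axis=0)
print(np.allclose(col_sum_sq, 1.0))  # True
```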


diabetes.data

array([[ 0.03807591,  0.05068012,  0.06169621, ..., -0.00259226,
0.01990842, -0.01764613],
[-0.00188202, -0.04464164, -0.05147406, ..., -0.03949338,
-0.06832974, -0.09220405],
[ 0.08529891,  0.05068012,  0.04445121, ..., -0.00259226,
0.00286377, -0.02593034],
...,
[ 0.04170844,  0.05068012, -0.01590626, ..., -0.01107952,
-0.04687948,  0.01549073],
[-0.04547248, -0.04464164,  0.03906215, ...,  0.02655962,
0.04452837, -0.02593034],
[-0.04547248, -0.04464164, -0.0730303 , ..., -0.03949338,
-0.00421986,  0.00306441]])
diabetes.data.shape

(442, 10)
diabetes.feature_names

['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
diabetes.target

array([151.,  75., 141., 206., 135.,  97., 138.,  63., 110., 310., 101.,
69., 179., 185., 118., 171., 166., 144.,  97., 168.,  68.,  49.,
68., 245., 184., 202., 137.,  85., 131., 283., 129.,  59., 341.,
87.,  65., 102., 265., 276., 252.,  90., 100.,  55.,  61.,  92.,
259.,  53., 190., 142.,  75., 142., 155., 225.,  59., 104., 182.,
128.,  52.,  37., 170., 170.,  61., 144.,  52., 128.,  71., 163.,
150.,  97., 160., 178.,  48., 270., 202., 111.,  85.,  42., 170.,
200., 252., 113., 143.,  51.,  52., 210.,  65., 141.,  55., 134.,
42., 111.,  98., 164.,  48.,  96.,  90., 162., 150., 279.,  92.,
83., 128., 102., 302., 198.,  95.,  53., 134., 144., 232.,  81.,
104.,  59., 246., 297., 258., 229., 275., 281., 179., 200., 200.,
173., 180.,  84., 121., 161.,  99., 109., 115., 268., 274., 158.,
107.,  83., 103., 272.,  85., 280., 336., 281., 118., 317., 235.,
60., 174., 259., 178., 128.,  96., 126., 288.,  88., 292.,  71.,
197., 186.,  25.,  84.,  96., 195.,  53., 217., 172., 131., 214.,
59.,  70., 220., 268., 152.,  47.,  74., 295., 101., 151., 127.,
237., 225.,  81., 151., 107.,  64., 138., 185., 265., 101., 137.,
143., 141.,  79., 292., 178.,  91., 116.,  86., 122.,  72., 129.,
142.,  90., 158.,  39., 196., 222., 277.,  99., 196., 202., 155.,
77., 191.,  70.,  73.,  49.,  65., 263., 248., 296., 214., 185.,
78.,  93., 252., 150.,  77., 208.,  77., 108., 160.,  53., 220.,
154., 259.,  90., 246., 124.,  67.,  72., 257., 262., 275., 177.,
71.,  47., 187., 125.,  78.,  51., 258., 215., 303., 243.,  91.,
150., 310., 153., 346.,  63.,  89.,  50.,  39., 103., 308., 116.,
145.,  74.,  45., 115., 264.,  87., 202., 127., 182., 241.,  66.,
94., 283.,  64., 102., 200., 265.,  94., 230., 181., 156., 233.,
60., 219.,  80.,  68., 332., 248.,  84., 200.,  55.,  85.,  89.,
31., 129.,  83., 275.,  65., 198., 236., 253., 124.,  44., 172.,
114., 142., 109., 180., 144., 163., 147.,  97., 220., 190., 109.,
191., 122., 230., 242., 248., 249., 192., 131., 237.,  78., 135.,
244., 199., 270., 164.,  72.,  96., 306.,  91., 214.,  95., 216.,
263., 178., 113., 200., 139., 139.,  88., 148.,  88., 243.,  71.,
77., 109., 272.,  60.,  54., 221.,  90., 311., 281., 182., 321.,
58., 262., 206., 233., 242., 123., 167.,  63., 197.,  71., 168.,
140., 217., 121., 235., 245.,  40.,  52., 104., 132.,  88.,  69.,
219.,  72., 201., 110.,  51., 277.,  63., 118.,  69., 273., 258.,
43., 198., 242., 232., 175.,  93., 168., 275., 293., 281.,  72.,
140., 189., 181., 209., 136., 261., 113., 131., 174., 257.,  55.,
84.,  42., 146., 212., 233.,  91., 111., 152., 120.,  67., 310.,
94., 183.,  66., 173.,  72.,  49.,  64.,  48., 178., 104., 132.,
220.,  57.])

# Learning and predicting

## Regression

from sklearn import linear_model

regress = linear_model.LinearRegression(fit_intercept=True)

diabetes.data[:-10].shape, diabetes.data[432:].shape

((432, 10), (10, 10))
train = diabetes.data[:-10]
train_target = diabetes.target[:-10]
test = diabetes.data[432:]
test_target = diabetes.target[432:]

regress.fit(X=train,y=train_target)

LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
regress.score(train,train_target)

0.5126458619823021
regress.predict(test),test_target

(array([220.07953738,  60.91343463, 133.06696665, 120.50446129,
52.70704634, 193.37341541, 102.33852447, 122.03295358,
212.22074082,  52.41945058]),
array([173.,  72.,  49.,  64.,  48., 178., 104., 132., 220.,  57.]))
regress.score(test,test_target)

0.6289345672042137
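score reports R²; a complementary view of the same hold-out split is the mean squared error of the predictions (a minimal sketch that re-fits the same model on the same split):

```python
import numpy as np
from sklearn import datasets, linear_model

diabetes = datasets.load_diabetes()
train, train_target = diabetes.data[:-10], diabetes.target[:-10]
test, test_target = diabetes.data[-10:], diabetes.target[-10:]

regress = linear_model.LinearRegression().fit(train, train_target)
pred = regress.predict(test)

# Mean squared error on the 10 held-out samples
mse = np.mean((pred - test_target) ** 2)
print(mse > 0)
```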
from sklearn.ensemble import RandomForestRegressor

regress = RandomForestRegressor()

regress.fit(X=train,y=train_target)
regress.score(train,train_target)

0.8903092782538485
regress.predict(test),test_target

(array([240. ,  87. ,  91.2, 110.4,  75.8, 243.3, 137.7, 106.4, 171.1,
134.1]),
array([173.,  72.,  49.,  64.,  48., 178., 104., 132., 220.,  57.]))
regress.score(test,test_target)

0.3225023544575357

## Classification

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier()

digits.data.shape[0]-100

1697
train = digits.data[:-100]
train_target = digits.target[:-100]
test = digits.data[digits.data.shape[0]-100:]
test_target = digits.target[digits.data.shape[0]-100:]

train.shape,test.shape

((1697, 64), (100, 64))
clf.fit(train,train_target)

RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
clf.predict(test),test_target

(array([0, 9, 5, 5, 6, 5, 0, 9, 8, 9, 8, 4, 1, 7, 7, 3, 5, 1, 0, 0, 2, 2,
7, 8, 2, 0, 1, 2, 6, 3, 8, 7, 5, 3, 4, 6, 6, 6, 4, 9, 1, 5, 0, 9,
5, 2, 8, 2, 0, 0, 1, 7, 6, 3, 2, 1, 7, 4, 6, 3, 1, 3, 9, 1, 7, 6,
8, 4, 1, 1, 4, 0, 5, 3, 6, 9, 6, 1, 7, 5, 4, 4, 7, 2, 8, 2, 2, 5,
7, 9, 5, 4, 8, 8, 4, 9, 0, 8, 9, 8]),
array([0, 9, 5, 5, 6, 5, 0, 9, 8, 9, 8, 4, 1, 7, 7, 3, 5, 1, 0, 0, 2, 2,
7, 8, 2, 0, 1, 2, 6, 3, 3, 7, 3, 3, 4, 6, 6, 6, 4, 9, 1, 5, 0, 9,
5, 2, 8, 2, 0, 0, 1, 7, 6, 3, 2, 1, 7, 4, 6, 3, 1, 3, 9, 1, 7, 6,
8, 4, 3, 1, 4, 0, 5, 3, 6, 9, 6, 1, 7, 5, 4, 4, 7, 2, 8, 2, 2, 5,
7, 9, 5, 4, 8, 8, 4, 9, 0, 8, 9, 8]))
clf.score(train,train_target)

1.0
clf.score(test,test_target)

0.97
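Beyond the single accuracy number, a confusion matrix shows which digits get confused with which (a sketch re-fitting on the same split; random_state pins the forest for reproducibility, which the original run did not do):

```python
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

digits = datasets.load_digits()
train, train_target = digits.data[:-100], digits.target[:-100]
test, test_target = digits.data[-100:], digits.target[-100:]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(train, train_target)
cm = confusion_matrix(test_target, clf.predict(test))

print(cm.shape)         # (10, 10): rows are true digits, columns predictions
print(int(cm.trace()))  # number of correct predictions out of 100
```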

# Cross Validation

from sklearn import model_selection
import numpy as np
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_validate
from sklearn.metrics.scorer import make_scorer
from sklearn.metrics import confusion_matrix
from sklearn.svm import LinearSVC
X = diabetes.data[:150]
y = diabetes.target[:150]
regress = linear_model.LinearRegression(fit_intercept=True)

cross_validate?

cv_results = cross_validate(regress, X, y)

cv_results

/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True
warnings.warn(*warn_args, **warn_kwargs)

{'fit_time': array([0.00081015, 0.00045991, 0.00039101]),
'score_time': array([0.0003829 , 0.00019598, 0.00032783]),
'test_score': array([0.49843203, 0.1037956 , 0.44394338]),
'train_score': array([0.515968  , 0.60652691, 0.48251957])}
scores = cross_validate(regress, X, y, cv=10,
                        scoring=('r2', 'explained_variance'))

print(scores['test_explained_variance'])
print(scores['test_r2'])

[ 0.59735114  0.19408471  0.60665615 -0.40118266  0.53644313  0.40522543
-0.11773814  0.55304218  0.48829582  0.61879053]
[ 0.58662078  0.09910982  0.60312304 -0.42428109  0.53150422  0.40511027
-0.12271601  0.48435444  0.48720066  0.60278384]

scores.keys()

dict_keys(['fit_time', 'train_r2', 'test_r2', 'test_explained_variance', 'train_explained_variance', 'score_time'])
[np.median(scores[key]) for key in ['test_r2','test_explained_variance']]

[0.4857775516193231, 0.5123694726326644]
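cross_validate returns a dict of arrays; when only one metric is needed, cross_val_score returns the per-fold test scores directly (same 150-sample slice as above):

```python
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_score

diabetes = datasets.load_diabetes()
X, y = diabetes.data[:150], diabetes.target[:150]

# Default scoring for a regressor is r2; cv=10 yields one score per fold
scores = cross_val_score(linear_model.LinearRegression(), X, y, cv=10)
print(scores.shape)  # (10,)
```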

# Exercise

Create a classifier for the iris dataset using KNeighborsClassifier (http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html), and validate the model with 5-fold cross-validation using accuracy and F1 score (as shown at http://scikit-learn.org/stable/modules/model_evaluation.html).
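One possible shape of a solution (the n_neighbors value and the f1_macro averaging are choices made here, not part of the exercise statement):

```python
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate

iris = datasets.load_iris()
knn = KNeighborsClassifier(n_neighbors=5)

# 5-fold cross-validation with two metrics; f1_macro averages F1 over classes
scores = cross_validate(knn, iris.data, iris.target, cv=5,
                        scoring=('accuracy', 'f1_macro'))

print(scores['test_accuracy'].mean() > 0.9)
print(scores['test_f1_macro'].mean() > 0.9)
```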