```
!pip install tensorflow
```

```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
```

```
# tf.logging was removed in TF 2.x; use the tf.get_logger() API instead.
tf.get_logger().setLevel('ERROR')
import numpy as np
```

```
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
# f = c × 1.8 + 32
for i, c in enumerate(celsius_q):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_a[i]))
```
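Each pair in the table can be checked against the formula directly (note that the Fahrenheit values above are rounded to the nearest degree, e.g. 8 °C is really 46.4 °F):

```python
# Sanity-check f = c * 1.8 + 32 against every (Celsius, Fahrenheit) pair above.
celsius_q = [-40, -10, 0, 8, 15, 22, 38]
fahrenheit_a = [-40, 14, 32, 46, 59, 72, 100]
for c, f in zip(celsius_q, fahrenheit_a):
    # Round because the table stores whole degrees (8 °C -> 46.4 °F -> 46).
    assert round(c * 1.8 + 32) == f, (c, f)
print("formula matches all {} pairs".format(len(celsius_q)))
```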

```
"""
input_shape=[1] — This specifies that the input to this layer is a single value.
That is, the shape is a one-dimensional array with one member.
Since this is the first (and only) layer, that input shape is the
input shape of the entire model. The single value is a floating point number,
representing degrees Celsius.
units=1 — This specifies the number of neurons in the layer.
The number of neurons defines how many internal variables the layer has to try to learn how
to solve the problem (more later). Since this is the final layer,
it is also the size of the model's output — a single float value representing degrees
Fahrenheit. (In a multi-layered network, the size and shape of the layer would
need to match the input_shape of the next layer.)
"""
# No activation function is specified, so the layer's activation is linear.
layer0 = tf.keras.layers.Dense(units=1, input_shape=[1])
```
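Concretely, a one-unit Dense layer with a linear activation computes y = w·x + b. A tiny sketch, with w and b set by hand to the true formula's constants (the values training should approximate, not learned ones):

```python
# A Dense layer with one unit and no activation is just y = w * x + b.
def dense_one_unit(x, w, b):
    return w * x + b

# With w = 1.8 and b = 32, the layer reproduces the conversion formula exactly.
print(dense_one_unit(100.0, 1.8, 32.0))  # 212.0
```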

```
"""
Once layers are defined, they need to be assembled into a model.
The Sequential model definition takes a list of layers as argument,
specifying the calculation order from the input to the output.
This model has just a single layer, layer0.
"""
model = tf.keras.Sequential([layer0])
"""
Before training, the model has to be compiled. When compiled for training, the model is given:
Loss function — A way of measuring how far off predictions are from the desired outcome.
(The measured difference is called the "loss".)
Optimizer function — A way of adjusting internal values in order to reduce the loss.
"""
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
```
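Mean squared error is just the average of the squared differences between predictions and targets; a minimal reimplementation makes the definition concrete:

```python
# Mean squared error: average of squared prediction errors.
def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# One exact prediction and one off by 4 degrees: (0 + 16) / 2 = 8.0
print(mse([32.0, 50.0], [32.0, 46.0]))  # 8.0
```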

```
history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")
```

```
import matplotlib.pyplot as plt
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
```

```
print(model.predict(np.array([100.0])))
```

```
print("weights: {}".format(layer0.get_weights()))
```
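The single weight and bias printed above should land close to 1.8 and 32, the constants in the real conversion formula. A quick sketch with hypothetical learned values (illustrative, not actual training output) shows how close such a model gets:

```python
# Hypothetical learned values, close to what training typically produces.
w, b = 1.7979, 31.95

pred = w * 100.0 + b        # the model's estimate for 100 degrees Celsius
true = 100.0 * 1.8 + 32.0   # exact answer: 212.0
print(pred, abs(pred - true))
```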

```
print(layer0)
```

```
'''
Now pretend this is real life and we do not know how many layers this model should have.
'''
layer0 = tf.keras.layers.Dense(units=4, input_shape=[1])
layer1 = tf.keras.layers.Dense(units=4)
layer2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([layer0, layer1, layer2])
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print(model.predict(np.array([100.0])))
```

```
print("These are the l0 variables: {}".format(layer0.get_weights()))
print("These are the l1 variables: {}".format(layer1.get_weights()))
print("These are the l2 variables: {}".format(layer2.get_weights()))
```
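Despite the extra layers, every Dense layer here still has a linear activation, so the whole network collapses to a single affine function of its input; that is why it can still recover the simple conversion formula. A NumPy sketch with arbitrary toy weights (not the trained values above) demonstrates the collapse:

```python
import numpy as np

# Three stacked linear layers: W3(W2(W1 x + b1) + b2) + b3.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 4)), rng.normal(size=4)
W3, b3 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = W1 @ x + b1
    h = W2 @ h + b2
    return W3 @ h + b3

# Collapse the stack into one equivalent affine layer y = W x + b.
W = W3 @ W2 @ W1
b = W3 @ (W2 @ b1 + b2) + b3

x = np.array([100.0])
assert np.allclose(forward(x), W @ x + b)
print("three linear layers behave exactly like one linear layer")
```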
